WorldWideScience

Sample records for stress decompositions based

  1. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
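
    For illustration only, here is a minimal numpy sketch of the core idea, not the authors' algorithm: an alternating-least-squares CP decomposition in which one factor is constrained to the atoms of a known dictionary by snapping each updated column to its best-matching atom. The dictionary, dimensions and data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (row-major convention)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

I, J, K, R, n_atoms = 30, 25, 20, 4, 40
D = rng.standard_normal((I, n_atoms))            # known dictionary (synthetic)
D /= np.linalg.norm(D, axis=0)
atoms = rng.choice(n_atoms, R, replace=False)
B_true, C_true = rng.standard_normal((J, R)), rng.standard_normal((K, R))
T = np.einsum('ir,jr,kr->ijk', D[:, atoms], B_true, C_true)

B, C = rng.standard_normal((J, R)), rng.standard_normal((K, R))
for _ in range(100):
    # Unconstrained LS update of the dictionary factor, then snap to atoms.
    A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
    idx = np.argmax(np.abs(D.T @ (A / np.linalg.norm(A, axis=0))), axis=0)
    A = D[:, idx] * np.sum(D[:, idx] * A, axis=0)   # best atom, original scale
    B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T

err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
print(f"true atoms {sorted(atoms)}, recovered {sorted(idx)}, rel. error {err:.2e}")
```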

  2. Tensor decomposition-based unsupervised feature extraction identifies candidate genes that induce post-traumatic stress disorder-mediated heart diseases.

    Science.gov (United States)

    Taguchi, Y-H

    2017-12-21

    Although post-traumatic stress disorder (PTSD) is primarily a mental disorder, it can cause additional symptoms that do not seem to be directly related to the central nervous system, which PTSD is assumed to directly affect. PTSD-mediated heart diseases are among such secondary disorders. In spite of the significant correlations between PTSD and heart diseases, the spatial separation between the heart and the brain (where PTSD is primarily active) prevents researchers from elucidating the mechanisms that bridge the two disorders. Our purpose was to identify genes linking PTSD and heart diseases. In this study, gene expression profiles of various murine tissues observed under various types of stress or without stress were analyzed in an integrated manner using tensor decomposition (TD). Based upon the obtained features, ~400 genes were identified as candidate genes that may mediate heart diseases associated with PTSD. Various gene enrichment analyses supported the biological reliability of the identified genes. Ten genes encoding protein-, DNA-, or mRNA-interacting proteins (ILF2, ILF3, ESR1, ESR2, RAD21, HTT, ATF2, NR3C1, TP53, and TP63) were found to be likely to regulate expression of most of these ~400 genes and are therefore candidate primary genes that cause PTSD-mediated heart diseases. Approximately 400 genes in the heart were also found to be strongly affected by various drugs whose known adverse effects are related to heart diseases and/or fear memory conditioning; these data support the reliability of our findings. TD-based unsupervised feature extraction turned out to be a useful method for gene selection and successfully identified possible genes causing PTSD-mediated heart diseases.

  3. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.

  4. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proved, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
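
    A minimal sketch of the modification described in the abstract, applied to a hypothetical nonconvex test function; the function, the Armijo step-size rule and the tolerances are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def f(x):
    return x[0]**4 + x[0] * x[1] + (1 + x[1])**2

def grad(x):
    return np.array([4 * x[0]**3 + x[1], x[0] + 2 * (1 + x[1])])

def hess(x):
    return np.array([[12 * x[0]**2, 1.0], [1.0, 2.0]])  # indefinite at x = 0

x = np.zeros(2)
for it in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    lam, Q = np.linalg.eigh(hess(x))        # eigenvalue decomposition of Hessian
    H_mod = Q @ np.diag(np.abs(lam)) @ Q.T  # replace negative eigenvalues by |.|
    d = -np.linalg.solve(H_mod, g)          # guaranteed descent direction
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo backtracking
        t *= 0.5
    x = x + t * d

print(f"minimizer ~ {x}, gradient norm {np.linalg.norm(grad(x)):.2e}")
```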

  5. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions for strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis as well as from analytical and numerical models by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
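
    As a rough illustration of the descriptor idea, not the authors' code: the sketch below treats two synthetic full-field strain maps as images, keeps a couple of hundred low-order Fourier coefficients as descriptors (standing in for the Zernike moments mentioned above), and compares those instead of the raw pixels.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx] / 256.0
model = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.05)       # predicted strain field
experiment = model + 0.02 * rng.standard_normal((ny, nx))   # noisy "measurement"

def descriptors(field, k=10):
    """Low-order 2-D Fourier coefficients as a compact image-descriptor vector."""
    F = np.fft.fft2(field)[:k, :k]
    return np.concatenate([F.real.ravel(), F.imag.ravel()])

d_mod, d_exp = descriptors(model), descriptors(experiment)
r = np.corrcoef(d_mod, d_exp)[0, 1]       # one simple quantitative acceptance metric
print(f"{d_mod.size} descriptors per field (vs {model.size} pixels), r = {r:.4f}")
```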

  6. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news" which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  7. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    Science.gov (United States)

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
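
    For the simplest case, a pair potential, the force decomposition is trivially central and the resulting Irving-Kirkwood virial stress is symmetric by construction; the hedged numpy sketch below illustrates only that base case (a random Lennard-Jones configuration, no constraints or multibody terms, which are the hard cases the paper addresses).

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 64, 8.0                         # particles in a periodic cubic box
pos = rng.uniform(0, L, (N, 3))

def lj_force(r_vec):
    """Lennard-Jones force on particle i from j (epsilon = sigma = 1)."""
    r2 = r_vec @ r_vec
    return (48.0 / r2**7 - 24.0 / r2**4) * r_vec

sigma = np.zeros((3, 3))
for i in range(N):
    for j in range(i + 1, N):
        d = pos[i] - pos[j]
        d -= L * np.round(d / L)       # minimum-image convention
        if d @ d < 3.0**2:             # cutoff at 3 sigma
            f = lj_force(d)
            sigma += np.outer(f, d)    # pairwise (central-force) virial term
sigma /= L**3                          # configurational stress; kinetic part omitted
print("symmetric:", np.allclose(sigma, sigma.T), "\n", sigma)
```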

  8. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  9. Application of the whole powder pattern decomposition procedure in the residual stress analysis of layers and coatings

    International Nuclear Information System (INIS)

    Schoderböck, Peter; Brechbühl, Jens

    2015-01-01

    The X-ray investigation of stress states in materials, based on the determination of elastic lattice strains which are converted to stresses by means of the theory of elasticity, is a necessity in the quality control of thin layers and coatings for optimizing manufacturing steps and process parameters. This work introduces the evaluation of residual stress from complex and overlapping diffraction patterns using a whole-powder-pattern decomposition procedure defining a 2θ-offset caused by residual stresses. Furthermore, corrections for sample displacement and refraction are directly implemented in the calculation procedure. The correlation matrices of the least-squares fitting routines have been analyzed for parameter interactions and obvious interdependencies have been decoupled by the introduction of an internal standard within the diffraction experiment. This decomposition-based evaluation has been developed on tungsten as a model material system and its efficiency was demonstrated by X-ray diffraction analysis of a solid oxide fuel cell multilayer system. The results are compared with those obtained by the classical sin²Ψ method. - Highlights: • Analysis of complex multiphase diffraction patterns with respect to residual stress • Stress-gradient determination with in situ correction of displacement and refraction • Consideration of the elastic anisotropy within the refinement

  10. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  11. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  12. Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    KAUST Repository

    Sukkari, Dalal

    2016-10-25

    This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
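
    A hedged sketch of the underlying mathematics only: the classical Newton iteration for the polar decomposition (Higham), a simpler relative of the QDWH iteration named above. None of the task-based or runtime aspects are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(12)
A = rng.standard_normal((6, 6))            # nonsingular with probability 1

X = A.copy()
for _ in range(30):
    X_new = 0.5 * (X + np.linalg.inv(X).T)  # Newton step toward the orthogonal factor
    if np.linalg.norm(X_new - X) < 1e-12:
        X = X_new
        break
    X = X_new

U = X                                       # orthogonal polar factor
H = U.T @ A                                 # symmetric positive-definite factor
print(np.allclose(U @ H, A), np.allclose(U.T @ U, np.eye(6)), np.allclose(H, H.T))
```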

  13. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
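
    A minimal sketch of the damping-identification step described above, with the VMD stage omitted: a single (simulated) decomposed modal response is fitted via the logarithm of its Hilbert envelope, whose slope gives the product of damping ratio and natural frequency. The sampling rate, modal frequency and damping values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs, fn, zeta = 200.0, 2.5, 0.02           # sample rate, modal freq (Hz), damping
t = np.arange(0, 20, 1 / fs)
wn = 2 * np.pi * fn
modal = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)

env = np.abs(hilbert(modal))              # decaying amplitude envelope
mask = slice(int(0.5 * fs), int(15 * fs)) # avoid transform end effects
slope, _ = np.polyfit(t[mask], np.log(env[mask]), 1)  # linear fit of log-envelope
zeta_est = -slope / wn
print(f"identified damping ratio: {zeta_est:.4f} (true {zeta})")
```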

  14. A novel method for EMG decomposition based on matched filters

    Directory of Open Access Journals (Sweden)

    Ailton Luiz Dias Siqueira Júnior

    Full Text Available Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system’s performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prostheses control and biofeedback systems.
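
    A hedged sketch of the matched-filter-bank stage only, with synthetic templates and firing times; the peak detector is a simple threshold, and the overlap-resolution module of the full method is not attempted.

```python
import numpy as np
from scipy.signal import correlate, find_peaks

rng = np.random.default_rng(3)
n = 10_000
templates = [np.diff(np.hanning(40), 2),                 # two synthetic MUAP shapes
             np.hanning(40) * np.sin(np.linspace(0, 4 * np.pi, 40))]
templates = [w / np.linalg.norm(w) for w in templates]

emg = 0.05 * rng.standard_normal(n)                      # baseline noise
truth = []
for unit, w in enumerate(templates):
    for s in rng.choice(n - 100, 15, replace=False):     # 15 firings per unit
        emg[s:s + len(w)] += w
        truth.append((s, unit))

detections = []
for unit, w in enumerate(templates):
    out = correlate(emg, w, mode='valid')                # matched filter output
    peaks, _ = find_peaks(out, height=0.7)               # detection threshold
    detections += [(int(p), unit) for p in peaks]
print(f"{len(truth)} true MUAPs, {len(detections)} detections")
```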

  15. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  16. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  17. Training for Retrieval of Knowledge under Stress through Algorithmic Decomposition

    Science.gov (United States)

    1986-10-01

    Only report front matter and a fragment are available: a table of contents listing Part A (discussion) and Part B, "Training for Overcoming the Base-Rate Fallacy" (Introduction; Experiment III), and an excerpt referring to the Light Bulb and Dyslexia problems used by Lichtenstein & MacGregor (1985), which are presented in Appendix D.

  18. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach to monitoring tasks, such as applications for the detection of abrupt changes.
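
    An illustrative sketch of the EEMD-based separation, assuming the third-party PyEMD package (pip install EMD-signal) and a synthetic NDVI-like series; the zero-crossing rule for choosing relevant IMFs is a simplification, not the paper's criterion.

```python
import numpy as np
from PyEMD import EEMD

rng = np.random.default_rng(4)
t = np.arange(230).astype(float)                 # ~10 years of 16-day composites
ndvi = (0.45 + 0.001 * t                         # slow trend
        + 0.2 * np.sin(2 * np.pi * t / 23)       # annual cycle (23 samples/year)
        + 0.03 * rng.standard_normal(t.size))    # noise

eemd = EEMD(trials=100, noise_width=0.05)
imfs = eemd.eemd(ndvi, t)                        # rows: IMFs, finest scale first

zc = [(np.diff(np.sign(imf)) != 0).sum() for imf in imfs]   # zero crossings per IMF
seasonal = imfs[[i for i, z in enumerate(zc) if 10 <= z <= 60]].sum(axis=0)
trend = imfs[[i for i, z in enumerate(zc) if z < 10]].sum(axis=0)
print(f"{len(imfs)} IMFs; seasonal std {seasonal.std():.3f}, "
      f"trend rise {trend[-1] - trend[0]:.3f}")
```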

  19. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
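
    A self-contained sketch of the variance-decomposition step using the standard Saltelli pick-and-freeze estimator; an analytic stand-in replaces the trained neural-network surrogate so the example runs on its own (the model, sample sizes and parameters are assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)

def surrogate(x):
    """Stand-in for the NN-approximated unavailability, x in [0, 1]^3."""
    return x[:, 0] + 0.5 * x[:, 1]**2 + 0.1 * x[:, 0] * x[:, 2]

n, k = 100_000, 3
A, B = rng.uniform(size=(n, k)), rng.uniform(size=(n, k))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # vary only parameter i between runs
    S_i = np.mean(fB * (surrogate(ABi) - fA)) / var   # first-order Sobol index
    print(f"S_{i + 1} = {S_i:.3f}")
```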

  20. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement of appropriate process models, are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion of the amount of event logs. Therefore, a new process mining technique is proposed based on a workflow decomposition method in this paper. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  1. Palm vein recognition based on directional empirical mode decomposition

    Science.gov (United States)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for the processing of texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to large scale. A DEMD-based two-directional linear discriminant analysis (2DLDA) for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scale and orientation are extracted using DEMD, (ii) the 2DLDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2DLDA method achieved recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  2. Automatic classification of visual evoked potentials based on wavelet decomposition

    Science.gov (United States)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the extremes of the time-domain characteristic, called waves. The decision process is complex, and the diagnosis therefore depends significantly on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.

  3. Quantum game theory based on the Schmidt decomposition

    International Nuclear Information System (INIS)

    Ichikawa, Tsubasa; Tsutsui, Izumi; Cheon, Taksu

    2008-01-01

    We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with the maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt, is examined. We find that entanglement transforms these dilemmas into one another but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
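
    A minimal numerical sketch of the backbone: the Schmidt decomposition of a two-qubit state is just the SVD of its 2x2 coefficient matrix, and the Schmidt coefficients quantify the entanglement used in the formulation above.

```python
import numpy as np

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
C = psi.reshape(2, 2)                               # coefficients c_ij of |i>_A |j>_B
U, s, Vh = np.linalg.svd(C)                         # s holds the Schmidt coefficients
p = s**2                                            # Schmidt probabilities
entropy = -np.sum(p[p > 1e-12] * np.log2(p[p > 1e-12]))   # entanglement entropy
print(f"Schmidt coefficients {s}, entanglement entropy {entropy:.3f} ebit")
```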

  4. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. A whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and then randomly transformed by one of the operations of quantum random-phase gate, quantum rotation gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, a binary sequence and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  5. A domain decomposition approach for full-field measurements based identification of local elastic parameters

    KAUST Repository

    Lubineau, Gilles

    2015-03-01

    We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.

  6. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  7. Identifying key nodes in multilayer networks based on tensor decomposition.

    Science.gov (United States)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.

  8. AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    J. Fan

    2018-04-01

    Full Text Available Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM). The interferometric calibration is to obtain high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which proves the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50000 and even larger scales in flat and mountainous areas.

  9. Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael

    We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'hauksbok', which contains Icelandic sagas...
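
    A hedged sketch of the common core of this family: Gaussian-kernel PCA via eigen-decomposition of the centered kernel matrix. MAF and MNF additionally need spatial shift statistics, which are omitted, and random vectors stand in for the multispectral pixels.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 7))                    # 500 "pixels", 7 spectral bands

sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)    # pairwise squared distances
K = np.exp(-sq / (2 * np.median(sq)))                # Gaussian kernel, median bandwidth

n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                       # double centering in feature space
lam, V = np.linalg.eigh(Kc)                          # eigen-decomposition (ascending)
lam, V = lam[::-1], V[:, ::-1]
scores = V[:, :3] * np.sqrt(np.maximum(lam[:3], 0))  # first 3 kernel components
print("leading eigenvalues:", np.round(lam[:3], 3), "scores shape:", scores.shape)
```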

  10. QR-decomposition based SENSE reconstruction using parallel architecture.

    Science.gov (United States)

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has a great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU based SENSE reconstruction is evaluated against single and multicore CPU using openMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
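
    A minimal sketch of the central linear-algebra step: inverting a tall encoding matrix by QR decomposition. The encoding matrix here is random, whereas a real SENSE matrix is built from coil sensitivities and the undersampling pattern, and no GPU aspects are shown.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 8 * 32, 64                      # stacked coil equations, unknown pixels
E = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = E @ x_true                         # simulated undersampled measurements

Q, R = np.linalg.qr(E)                 # reduced QR of the encoding matrix
x = np.linalg.solve(R, Q.conj().T @ y) # least-squares solution via triangular solve
print("max reconstruction error:", np.abs(x - x_true).max())
```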

  11. The thermal decomposition behavior of ammonium perchlorate and of an ammonium-perchlorate-based composite propellant

    Energy Technology Data Exchange (ETDEWEB)

    Behrens, R.; Minier, L.

    1998-03-24

    The thermal decomposition of ammonium perchlorate (AP) and ammonium-perchlorate-based composite propellants is studied using the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) technique. The main objective of the present work is to evaluate whether the STMBMS can provide new data on these materials that will have sufficient detail on the reaction mechanisms and associated reaction kinetics to permit creation of a detailed model of the thermal decomposition process. Such a model is a necessary ingredient to engineering models of ignition and slow-cookoff for these AP-based composite propellants. Results show that the decomposition of pure AP is controlled by two processes. One occurs at lower temperatures (240 to 270 °C), produces mainly H₂O, O₂, Cl₂, N₂O and HCl, and is shown to occur in the solid phase within the AP particles. 200 µm diameter AP particles undergo 25% decomposition in the solid phase, whereas 20 µm diameter AP particles undergo only 13% decomposition. The second process is dissociative sublimation of AP to NH₃ + HClO₄ followed by the decomposition of, and reaction between, these two products in the gas phase. The dissociative sublimation process occurs over the entire temperature range of AP decomposition, but only becomes dominant at temperatures above those for the solid-phase decomposition. AP-based composite propellants are used extensively in both small tactical rocket motors and large strategic rocket systems.

  12. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with a complex design structure. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing a large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  13. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for the forecasting of solar radiation series at 1 h ahead. We investigated several techniques for multiscale decomposition of clear sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and 1 residue signal presenting different time scales. We performed classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model, NN). The choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid Model, 14.06% with the EEMD-Hybrid Model and 7.86% with the WD-Hybrid Model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
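
    An illustrative sketch of the wavelet-hybrid idea, assuming the PyWavelets package: decompose a synthetic clear-sky-index series into components that sum back to the signal, forecast each component with a simple least-squares AR model, and recombine. The paper's adaptive AR-vs-NN choice per component is not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
t = np.arange(2048)
kc = 0.7 + 0.15 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size)

def components(series, wavelet='db4', level=4):
    """Multiresolution split: detail/approximation parts that sum to the series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    parts = []
    for i in range(len(coeffs)):
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(sel, wavelet)[:len(series)])
    return parts

def ar_forecast(series, order=6):
    """Least-squares AR(order) fit, returning the one-step-ahead forecast."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    coef = np.linalg.lstsq(X, series[order:], rcond=None)[0]
    return series[-order:] @ coef

train, target = kc[:-1], kc[-1]
pred = sum(ar_forecast(p) for p in components(train))   # recombine component forecasts
print(f"hybrid 1-step forecast {pred:.4f} vs actual {target:.4f}")
```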

  14. Mindfulness-Based Stress Reduction

    Science.gov (United States)

    Consumer health information page on Mindfulness-Based Stress Reduction (MBSR), including research spotlights on MBSR and on cognitive-behavioral therapy.

  15. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
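
    A hedged sketch of the diagonal (SVD) case of this paradigm: embed a noisy frame in a Hankel matrix, truncate to the signal subspace, and average anti-diagonals to recover a cleaner frame. The signal, rank and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(9)
n, fs = 512, 8000.0
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
noisy = clean + 0.4 * rng.standard_normal(n)

L = n // 2
H = hankel(noisy[:L], noisy[L - 1:])           # L x (n - L + 1) Hankel matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 4                                          # 2 real sinusoids -> rank 4
Hr = U[:, :r] * s[:r] @ Vt[:r]                 # rank-r signal-subspace estimate

# Average anti-diagonals to re-impose Hankel structure (one de-noised frame).
den = np.array([np.mean(Hr[::-1].diagonal(k)) for k in range(-L + 1, H.shape[1])])
snr = 10 * np.log10(np.sum(clean**2) / np.sum((den - clean)**2))
print(f"output SNR: {snr:.1f} dB")
```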

  16. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing a large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  17. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2015-01-01

    Full Text Available Wavelet analysis is able to adapt to the requirements of time-frequency signal analysis automatically, can focus on any detail of a signal, and decomposes the function into a representation by a series of simple basis functions; it is of theoretical and practical significance. Therefore, this paper subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction and tries to find the best-fitting forecast models for the detail signals and the approximate signal obtained through wavelet decomposition of the track irregularity time series, respectively. On this basis, the piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and the piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve higher forecasting accuracy.

  18. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, the EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brute force attack. Simulation results are presented in support of the proposed idea.

  1. Ultra-precision machining induced phase decomposition at surface of Zn-Al based alloy

    International Nuclear Information System (INIS)

    To, S.; Zhu, Y.H.; Lee, W.B.

    2006-01-01

    The microstructural changes and phase transformation of an ultra-precision machined Zn-Al based alloy were examined using X-ray diffraction and back-scattered electron microscopy techniques. Decomposition of the Zn-rich η phase and the related changes in crystal orientation were detected at the surface of the ultra-precision machined alloy specimen. The effects of the machining parameters, such as cutting speed and depth of cut, on the phase decomposition are discussed in comparison with the tensile- and rolling-induced microstructural changes and phase decomposition.

  2. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    Science.gov (United States)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
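
    For orientation, the standard multi-relaxation form this kind of decomposition rests on (a textbook expression, assumed here rather than quoted from the paper): the relaxational absorption per wavelength is a sum of Debye terms, one per interior single-relaxation process, so N processes carry 2N unknowns and measurements at 2N frequencies suffice.

```latex
% Assumed textbook form: relaxational absorption per wavelength as a sum of
% N Debye terms, one per interior single-relaxation process with strength
% A_i and relaxation time tau_i.
\[
  \alpha_r(\omega)\,\lambda \;=\; \sum_{i=1}^{N} A_i\,
  \frac{\omega\tau_i}{1+(\omega\tau_i)^{2}}
\]
% 2N unknowns (A_i, tau_i), hence acoustic data at 2N frequencies suffice.
```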

  3. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    Science.gov (United States)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new perspective on the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
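
    For reference, a hedged sketch of the classical Givens-rotation QR that the paper takes as its starting point; the heap-transform variant itself is not reproduced here.

```python
import numpy as np

def givens_qr(A):
    """QR via Givens rotations: zero sub-diagonal entries column by column."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):         # zero R[i, j] against R[j, j]
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])   # 2x2 rotation acting on rows j, i
            R[[j, i], :] = G @ R[[j, i], :]
            Q[:, [j, i]] = Q[:, [j, i]] @ G.T # accumulate Q so that Q @ R = A
    return Q, R

A = np.random.default_rng(10).standard_normal((5, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)),
      np.allclose(np.tril(R, -1), 0))
```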

  4. Non-linear scalable TFETI domain decomposition based contact algorithm

    Czech Academy of Sciences Publication Activity Database

    Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.

    2010-01-01

    Vol. 10, No. 1 (2010), pp. 1-10. ISSN 1757-8981. [World Congress on Computational Mechanics/9./. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords: finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf

  5. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides imperceptible watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and the Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. Features new methods of audio watermarking for copyright protection and ownership protection. Outl...

  6. Base catalyzed decomposition of toxic and hazardous chemicals

    International Nuclear Information System (INIS)

    Rogers, C.J.; Kornel, A.; Sparks, H.L.

    1991-01-01

    There are vast amounts of toxic and hazardous chemicals which have pervaded our environment during the past fifty years, leaving us with serious, crucial problems of remediation and disposal. The accumulation of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs, ''dioxins'') and pesticides in soil sediments and living systems is a serious problem that is receiving considerable attention because of the cancer-causing nature of these synthetic compounds. US EPA scientists developed in 1989 and 1990 two novel chemical processes to effect the dehalogenation of chlorinated solvents, PCBs, PCDDs, PCDFs, PCP and other pollutants in soil, sludge, sediment and liquids. This improved technology employs hydrogen as a nucleophile to replace halogens on halogenated compounds. Hydrogen as a nucleophile is not influenced by steric hindrance, unlike other nucleophiles, so complete dehalogenation of organohalogens can be achieved. This report discusses base catalyzed decomposition of toxic and hazardous chemicals.

  7. Sparse time-frequency decomposition based on dictionary adaptation.

    Science.gov (United States)

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to one signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.

  8. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    Science.gov (United States)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, QR decomposition (QRD) is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. CORDIC requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (Eigen Value Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
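
    As a rough illustration of why CORDIC suits such hardware, below is a generic shift-and-add sketch of the CORDIC vectoring mode that a Givens cell in a systolic array would use to annihilate a matrix entry; it is a textbook CORDIC, not the proposed TSA architecture:

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """Rotate (x, y) onto the positive x-axis using only add/shift-style
    updates; returns (magnitude, angle). Assumes x > 0 for simplicity."""
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0            # micro-rotation towards y = 0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        angle -= d * math.atan(2.0**-i)
    # the micro-rotations scale the vector by a known constant gain K
    K = math.prod(math.sqrt(1 + 4.0**-i) for i in range(iterations))
    return x / K, angle

r, theta = cordic_vectoring(3.0, 4.0)         # expect r ~ 5, theta ~ atan2(4, 3)
```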

  9. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference of probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
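
    The relative-entropy step can be sketched independently of the VMD itself; the sketch below assumes the narrow-banded modes are already available from some VMD implementation, and the histogram binning is an illustrative choice rather than the paper's exact criterion:

```python
import numpy as np
from scipy.stats import entropy

def relative_entropies(modes, reference, bins=64):
    """KL divergence between each mode's amplitude histogram and a reference
    (e.g., the lowest-frequency, intentional-motion mode)."""
    lo = min(m.min() for m in modes + [reference])
    hi = max(m.max() for m in modes + [reference])
    q, _ = np.histogram(reference, bins=bins, range=(lo, hi), density=True)
    res = []
    for m in modes:
        p, _ = np.histogram(m, bins=bins, range=(lo, hi), density=True)
        res.append(entropy(p + 1e-12, q + 1e-12))   # avoid zero bins
    return np.array(res)

# Modes with a large divergence from the reference would be labelled jitter
# and summed to form the jitter component removed from the GMV.
```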

  10. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, and particularly catalysts based on manganese oxide. It has been determined that ozone decomposition follows first-order kinetics. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
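
    As a small worked example of first-order kinetics, an exponential decay can be fitted to (synthetic) ozone concentration data; the rate constant below is arbitrary, not a value from the review:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """Integrated first-order rate law: c(t) = c0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

t = np.linspace(0, 120, 25)                            # time, s
c = first_order(t, 1.0, 0.03) + np.random.normal(0, 0.01, t.size)
(c0_fit, k_fit), _ = curve_fit(first_order, t, c, p0=(1.0, 0.01))
print(f"fitted rate constant k = {k_fit:.3f} 1/s")
```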

  11. On practical challenges of decomposition-based hybrid forecasting algorithms for wind speed and solar irradiation

    International Nuclear Information System (INIS)

    Wang, Yamin; Wu, Lei

    2016-01-01

    This paper presents a comprehensive analysis of practical challenges of empirical mode decomposition (EMD) based algorithms for wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training the forecasting models. In turn, forecasting models established on the original sub-series may not be suitable for newly decomposed sub-series and have to be retrained more frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and the environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods can be worse than that of non-decomposition based forecasting models, and that they are not effective in practical cases. Finally, an approximated forecasting model based on EMD is proposed to mitigate these challenges; it achieves better forecasting results than existing EMD-based forecasting algorithms and non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to mitigate the challenges.
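
    The first challenge is easy to reproduce numerically; the sketch below assumes the third-party PyEMD package (its EMD class and emd() method) and compares the sub-series of a signal before and after appending a few samples:

```python
import numpy as np
from PyEMD import EMD   # third-party package, assumed available

t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

imfs_old = EMD().emd(x[:990])        # sub-series used to train the models
imfs_new = EMD().emd(x)              # same series plus 10 new samples

# Even the overlapping portion of corresponding IMFs can differ noticeably,
# so forecasting models fitted on imfs_old may not suit imfs_new.
k = min(len(imfs_old), len(imfs_new))
drift = [np.max(np.abs(imfs_old[i][:990] - imfs_new[i][:990])) for i in range(k)]
print(drift)
```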

  12. Thermal Decomposition Behaviors and Burning Characteristics of AN/Nitramine-Based Composite Propellant

    Science.gov (United States)

    Naya, Tomoki; Kohga, Makoto

    2015-04-01

    Ammonium nitrate (AN) has attracted much attention due to its clean burning nature as an oxidizer. However, an AN-based composite propellant has the disadvantages of low burning rate and poor ignitability. In this study, we added nitramine of cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX) as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN and nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone. AN and RDX decomposed continuously as an almost single oxidizer in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX, which decomposed almost separately in the thermal decomposition of the AN/HMX propellant. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants. The increased burning rates of AN/RDX propellants were greater than those of AN/HMX. The difference in the thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.

  13. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
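
    For contrast with the PDE formulation, one iteration of the original algorithmic sifting process in 1-D might be sketched as follows (the boundary treatment of the envelopes is deliberately simplified):

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting step: subtract the mean of the upper and lower
    cubic-spline envelopes through the local extrema."""
    n = np.arange(len(x))
    maxima, _ = find_peaks(x)
    minima, _ = find_peaks(-x)
    # crude boundary treatment: pin the envelopes to the end samples
    up = CubicSpline(np.r_[0, maxima, len(x) - 1], np.r_[x[0], x[maxima], x[-1]])
    lo = CubicSpline(np.r_[0, minima, len(x) - 1], np.r_[x[0], x[minima], x[-1]])
    mean_env = 0.5 * (up(n) + lo(n))
    return x - mean_env
```

    The PDE-based approach replaces exactly this envelope-interpolation step with a nonlinear diffusion filter for the mean envelope.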

  14. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves an average compression ratio (CR) of 35.53 and percentage root mean square difference (PRD) of 1.47% with N = 8 decomposition steps and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
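
    The two figures of merit quoted above are straightforward to compute; a small sketch, with definitions following common usage in the ECG compression literature (which may differ in detail from the paper's):

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record over the size of the coded record."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percentage root-mean-square difference between the original
    and the reconstructed signal."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```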

  15. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model, where all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically, based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons have also been conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.

  16. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  17. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2007-01-01

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions, with working Matlab code and applications in speech processing.
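
    A toy single-channel illustration of the rank-reduction paradigm, using a diagonal decomposition (the SVD) of a Hankel data matrix; the rank-revealing triangular variants surveyed in the paper follow the same reduce-then-reconstruct pattern:

```python
import numpy as np
from scipy.linalg import hankel

def svd_denoise(x, L=64, rank=8):
    """Project a noisy signal onto a rank-`rank` signal subspace."""
    X = hankel(x[:L], x[L - 1:])              # L x M Hankel data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # reconstruct a time series by averaging over the anti-diagonals
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(Xk.shape[0]):
        for j in range(Xk.shape[1]):
            out[i + j] += Xk[i, j]
            cnt[i + j] += 1
    return out / cnt
```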

  18. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E.

    2017-01-01

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  19. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.

    2017-09-29

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  20. A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery

    Directory of Open Access Journals (Sweden)

    M. Babul Hasan

    2007-01-01

    The integrated fishery planning (IFP) model can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques, including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning-horizon models are presented.

  1. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    ... methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images, as illustrated in Figure 1; a comparison of the most useful factor of PCA and of kernel based PCA, respectively, is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation, a simple thresholding ...
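
    For readers who want to experiment, scikit-learn ships a ready-made kernel PCA; a minimal sketch on stand-in multispectral pixel data (the band count and kernel settings are placeholders, not the study's):

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

pixels = np.random.rand(5000, 10)            # stand-in: 5000 pixels, 10 bands

linear_factors = PCA(n_components=3).fit_transform(pixels)
kernel_factors = KernelPCA(n_components=3, kernel="rbf",
                           gamma=0.5).fit_transform(pixels)
# Segmentation would then threshold the most discriminative factor.
```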

  2. Michelson interferometer based interleaver design using classic IIR filter decomposition.

    Science.gov (United States)

    Cheng, Chi-Hao; Tang, Shasha

    2013-12-16

    An elegant method to design a Michelson interferometer based interleaver using a classic infinite impulse response (IIR) filter, such as a Butterworth, Chebyshev, or elliptic filter, as a starting point is presented. The proposed design method allows engineers to design a Michelson interferometer based interleaver from specifications seamlessly. Simulation results are presented to demonstrate the validity of the proposed design method.
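
    The starting point described above, a classic IIR prototype, can be produced directly with scipy.signal; the mapping onto Michelson interferometer parameters is the paper's contribution and is not reproduced here:

```python
from scipy import signal

# 5th-order Butterworth half-band prototype (normalized cutoff 0.5)
b, a = signal.butter(5, 0.5)
w, h = signal.freqz(b, a, worN=1024)
# |h| is the target interleaver response from which the interferometer
# parameters would subsequently be derived.
```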

  3. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
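
    The cost-saving idea of assembling a large correlation operator from small 1-D factors can be sketched in a few lines; the exponential correlation model and grid sizes below are illustrative only:

```python
import numpy as np

def corr_1d(n, length_scale):
    """1-D exponential correlation matrix on a unit-spaced grid."""
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / length_scale)

Cx, Cy, Cz = corr_1d(8, 2.0), corr_1d(8, 2.0), corr_1d(4, 1.5)
C = np.kron(Cx, np.kron(Cy, Cz))     # 256 x 256 full 3-D correlation
# Decomposing the small factors (e.g., eigendecompositions of Cx, Cy, Cz)
# is far cheaper than decomposing C directly.
```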

  4. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.

  5. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

    Full Text Available We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm with Lagrange-relaxation-based suboptimal methods as well as with the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.

  6. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    Science.gov (United States)

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method performs better than conventional 2D wavelet, 2D DT CWT, 3D wavelet, and 3D DT CWT based fusion methods.

  7. MRI Volume Fusion Based on 3D Shearlet Decompositions

    Directory of Open Access Journals (Sweden)

    Chang Duan

    2014-01-01

    Full Text Available Nowadays many MRI scans can give 3D volume data with different contrasts, but observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method performs better than conventional 2D wavelet, 2D DT CWT, 3D wavelet, and 3D DT CWT based fusion methods.

  8. Surface stress-based biosensors.

    Science.gov (United States)

    Sang, Shengbo; Zhao, Yuan; Zhang, Wendong; Li, Pengwei; Hu, Jie; Li, Gang

    2014-01-15

    Surface stress-based biosensors, as one kind of label-free biosensor, have attracted much attention in the process of information gathering and measurement for biological, chemical and medical applications with the development of technology and society. This kind of biosensor offers many advantages, such as short response time (less than milliseconds) and a typical sensitivity at the nanogram, picoliter, femtojoule and attomolar level. Furthermore, it simplifies sample preparation and testing procedures. In this work, progress made towards the use of surface stress-based biosensors for achieving better performance is critically reviewed, including our recent achievement, the optimized circular membrane-based biosensors and biosensor array. The further scientific and technological challenges in this field are also summarized. Critical remarks and future steps towards ultimate surface stress-based biosensors are addressed.

  9. Multisensors Cooperative Detection Task Scheduling Algorithm Based on Hybrid Task Decomposition and MBPSO

    Directory of Open Access Journals (Sweden)

    Changyun Liu

    2017-01-01

    Full Text Available A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented: the resource scheduling problem is decomposed into subtasks, and the sensor resource scheduling problem is thus changed into a matching problem between sensors and subtasks. Secondly, a resource matching optimization model based on the sensor resources and tasks is established, which considers several factors, such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm, based on improved updating of the particles' velocities and positions through a doubt factor and a modified sigmoid function, is proposed to solve the matching optimization model effectively. The experimental results show that the proposed algorithm is better in terms of convergence velocity, searching capability, solution accuracy, and efficiency.
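
    For orientation, a standard binary PSO update via the sigmoid transfer function is sketched below; the paper's doubt factor and modified sigmoid are refinements of exactly this step and are not reproduced:

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update of binary PSO (x is a 0/1 matrix)."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    prob = 1.0 / (1.0 + np.exp(-v))          # sigmoid transfer function
    x = (np.random.rand(*x.shape) < prob).astype(int)
    return x, v
```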

  10. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
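
    The "lossy plus residual" principle behind the near-lossless guarantee can be sketched independently of the particular tensor coder; here the lossy layer is a truncated SVD and the residual layer is a uniform quantizer whose step size enforces the error bound:

```python
import numpy as np

def near_lossless(X, rank=4, max_abs_err=0.01):
    """Lossy layer (truncated SVD) plus residual layer (uniform quantizer);
    the reconstruction error is bounded by max_abs_err per sample."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    step = 2.0 * max_abs_err
    q = np.round((X - lossy) / step)          # integers for entropy coding
    return lossy + q * step                   # |X - result| <= max_abs_err

X = np.random.randn(32, 256)                  # stand-in: channels x samples
assert np.max(np.abs(X - near_lossless(X))) <= 0.01 + 1e-12
```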

  11. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    Science.gov (United States)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted in military communications as a kind of low-probability-of-interception signal. Therefore, it is very important to research FH signal detection algorithms. Existing FH signal detection algorithms based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time, owing to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution; correspondingly, the accuracy of FH signal detection is improved.
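
    The wavelet-decomposition denoising stage can be sketched with the PyWavelets package; the wavelet, level and universal threshold below are common defaults, not necessarily the paper's choices:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

# The denoised signal would then be passed to the HHT stage for FH detection.
```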

  12. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    Science.gov (United States)

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD-based algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal.

  13. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    Science.gov (United States)

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both situations: the native PT and the PT with an additional denoising process.

  14. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature space decomposition for classification. When large-scale datasets are present, typical approaches usually employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...

  15. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and residuals. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed ...

  16. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    Science.gov (United States)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  17. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    Science.gov (United States)

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is further divided into blocks of the same size after shuffling, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  18. Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition

    Directory of Open Access Journals (Sweden)

    Chunfu Wu

    2015-01-01

    Full Text Available For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object’s imaging depth and the camera’s position extrinsic parameters are unknown. Firstly, the homography’s particular properties caused by the mobile robot’s 2-DOF motion are taken into account to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, incorporated with a single feature point of the desired view, are utilized to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient, but also highly precise. Meanwhile, the designed control law enables mobile robot position and orientation regulation despite the lack of depth information and the camera’s position extrinsic parameters.
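
    OpenCV provides a reference homography decomposition against which a fast method can be checked; a minimal sketch, where the intrinsic matrix K and the homography H below are placeholders:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],            # placeholder camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = np.eye(3)                                  # homography from feature matching

n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
# A planar-motion constraint (the robot's 2-DOF motion) is what lets a
# specialized method prune these candidate (R, t, n) solutions quickly.
```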

  19. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  20. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  1. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force. (orig.)

  2. Image Watermarking Algorithm Based on Multiobjective Ant Colony Optimization and Singular Value Decomposition in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2013-01-01

    Full Text Available We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detailed subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the multiple scaling factors is a difficult problem; a multiobjective ant colony optimization is therefore used to determine these values. Experimental results show much-improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of a high probability of false positive detection of the watermarks.

  3. Design of tailor-made chemical blend using a decomposition-based computer-aided approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.

    2011-01-01

    Computer aided techniques form an efficient approach to solve chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product ... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to the types of chemicals and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition ...

  4. Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles

    DEFF Research Database (Denmark)

    Hellman, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis

    2009-01-01

    A recently published first-principles model for the ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis concerning trends in ammonia productivity, surface conditions during the reaction, and macro-properties, such as apparent activation energies and reaction orders, is provided. All observed trends in activity are captured by the model, and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100, depending on the experimental conditions. Moreover, it is shown (i) that small changes in the relative adsorption potential energies are sufficient to get a quantitative agreement between theory and experiment (Appendix A) and (ii) that it is possible to reproduce results from the first-principles model by a simple micro-kinetic model (Appendix B).

  5. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    Science.gov (United States)

    Huang, Yan; Wang, Zhihui

    2015-12-01

    With the development of FPGAs, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet transform is more advanced and effective than scalar wavelets for signal decomposition. Thus, a signal decomposition system for the CL multi-wavelet based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.

  6. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    Science.gov (United States)

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum-valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that applying the alignment algorithm to genome search yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://w.uga.edu/RNA-informatics/software/index.php.

  7. Decoupling the direct and indirect effects of climate on plant litter decomposition: Accounting for stress-induced modifications in plant chemistry.

    Science.gov (United States)

    Suseela, Vidya; Tharayil, Nishanth

    2018-04-01

    Decomposition of plant litter is a fundamental ecosystem process that can act as a feedback to climate change by simultaneously influencing both the productivity of ecosystems and the flux of carbon dioxide from the soil. The influence of climate on decomposition from a postsenescence perspective is relatively well known; in particular, climate is known to regulate the rate of litter decomposition via its direct influence on the reaction kinetics and microbial physiology on processes downstream of tissue senescence. Climate can alter plant metabolism during the formative stage of tissues and could shape the final chemical composition of plant litter that is available for decomposition, and thus indirectly influence decomposition; however, these indirect effects are relatively poorly understood. Climatic stress disrupts cellular homeostasis in plants and results in the reprogramming of primary and secondary metabolic pathways, which leads to changes in the quantity, composition, and organization of small molecules and recalcitrant heteropolymers, including lignins, tannins, suberins, and cuticle within the plant tissue matrix. Furthermore, by regulating metabolism during tissue senescence, climate influences the resorption of nutrients from senescing tissues. Thus, the final chemical composition of plant litter that forms the substrate of decomposition is a combined product of presenescence physiological processes through the production and resorption of metabolites. The changes in quantity, composition, and localization of the molecular construct of the litter could enhance or hinder tissue decomposition and soil nutrient cycling by altering the recalcitrance of the lignocellulose matrix, the composition of microbial communities, and the activity of microbial exo-enzymes via various complexation reactions. Also, the climate-induced changes in the molecular composition of litter could differentially influence litter decomposition and soil nutrient cycling. Compared

  8. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    Science.gov (United States)

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to their optimization direction, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  9. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between the Lagrange equations and the Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method using either symbolic or on-line numerical computations. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.

  10. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    Science.gov (United States)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of current environmental pollution sources, are highly carcinogenic. PAH pollutants can be detected by fluorescence spectroscopy; however, the instrument produces noise during the experiment, and weak fluorescence signals can be affected by this noise, so we propose a way to denoise the spectra and improve the detection performance. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.

  11. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    Science.gov (United States)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  12. A Matrix-Free Posterior Ensemble Kalman Filter Implementation Based on a Modified Cholesky Decomposition

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2017-07-01

    Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation, in terms of root-mean-square error, is similar to, and in some cases better than, that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.

  13. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding for storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  14. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for an electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications to EMVS arrays, such as the strict requirement on the uniqueness conditions of the decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. A 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of the decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness in parameter estimation. Even under conditions of low SNR, small angular separation and limited snapshots, the proposed algorithm still performs better than subspace methods and the canonical polyadic decomposition (CPD) method.

  15. Exact Partial Information Decompositions for Gaussian Systems Based on Dependency Constraints

    Directory of Open Access Journals (Sweden)

    Jim W. Kay

    2018-03-01

    Full Text Available The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework to characterize and quantify the structure of multivariate information sharing. A new method (Idep) has recently been proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that particular predictor-target marginal dependency is constrained. Here, we apply the Idep approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed-form solutions for the Idep PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the Idep PID with the minimum mutual information partial information decomposition (Immi), which was discussed by Barrett A. B. (2015). The results obtained using Idep appear to be more intuitive than those given by other methods, such as Immi, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the Immi method generally produces larger estimates of redundancy and synergy than does the Idep method. In the discussion of the practical examples, the PIDs are complemented by the use of tests of deviance for the comparison of Gaussian graphical models.

  16. Polarimetric SAR interferometry-based decomposition modelling for reliable scattering retrieval

    Science.gov (United States)

    Agrawal, Neeraj; Kumar, Shashi; Tolpekin, Valentyn

    2016-05-01

    Fully polarimetric SAR (PolSAR) data are used to retrieve scattering information from a single SAR resolution cell. A single resolution cell may contain contributions from more than one scattering object, so single- or dual-polarized data do not provide all the possible scattering information; fully polarimetric data are therefore used to overcome this problem. A previous study observed that fully polarimetric data acquired on different dates give different scattering values for the same object, and that the coefficient of determination obtained from linear regression between volume scattering and aboveground biomass (AGB) differs between SAR datasets of different dates. Scattering values are important inputs for modelling forest aboveground biomass. In this research an approach is proposed to obtain reliable scattering from an interferometric pair of fully polarimetric RADARSAT-2 data. The field survey for data collection was carried out in the Barkot forest from November 10th to December 5th, 2014. Stratified random sampling was used to collect field data for circumference at breast height (CBH) and tree height. Field-measured AGB was compared with the volume scattering elements obtained from decomposition modelling of the individual PolSAR images and of the PolInSAR coherency matrix. Yamaguchi 4-component decomposition was implemented to retrieve scattering elements from the SAR data. PolInSAR-based decomposition was the main challenge in this work, and it was implemented with certain assumptions to create a Hermitian coherency matrix from the co-registered polarimetric interferometric pair of SAR data. Regression analysis between field-measured AGB and the volume scattering element obtained from PolInSAR data showed the highest coefficient of determination (0.589). The same regression with volume scattering elements of the individual SAR images showed coefficients of determination of 0.49 and 0.50 for the master and slave images respectively. This study recommends use of

  17. A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Chun Wang

    2017-01-01

    Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. Firstly, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Secondly, the multiobjective memetic algorithm based on decomposition is constructed by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Good individuals corresponding to different weight vectors are selected by the tournament mechanism of the local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms, or at least performs comparably to, the other algorithms.
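
    The Tchebycheff aggregation at the core of MOEA/D-style decomposition is compact enough to state directly. A sketch, with an illustrative weight vector and ideal point for the three objectives (makespan, total workload, critical workload):

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """MOEA/D Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return np.max(w * np.abs(f - z_star))

z_star = np.array([90.0, 300.0, 40.0])   # ideal point (illustrative values)
w = np.array([0.5, 0.3, 0.2])            # one weight vector of the decomposition
f_schedule = np.array([100.0, 320.0, 45.0])   # objectives of a candidate schedule
print(tchebycheff(f_schedule, w, z_star))     # scalar fitness for this subproblem
```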

  18. Chaotic Multiobjective Evolutionary Algorithm Based on Decomposition for Test Task Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Hui Lu

    2014-01-01

    Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases, namely the initial population and the crossover and mutation operators. To identify a good way of hybridizing MOEA/D with chaos, and to show the effectiveness of the improved IES, several experiments are performed. The Pareto front and the statistical results demonstrate that different chaotic maps in different phases have different effects on solving the TTSP, especially the circle map and the ICMIC map. The similarity between the distribution of a chaotic map and that of the problem is an essential factor in the application of chaotic maps. In addition, experiments comparing CMOEA/D with variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
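
    Two of the chaotic maps commonly embedded in such algorithms, including the circle map highlighted above, can be sketched as follows; the constants K, Omega and the seed are illustrative choices, not the paper's settings.

```python
import numpy as np

def logistic_map(x):
    return 4.0 * x * (1.0 - x)

def circle_map(x, K=0.5, Omega=0.22):
    return (x + Omega - K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0

def chaotic_population(n, dim, step=logistic_map, x0=0.37):
    """Iterate a chaotic map to fill an (n, dim) initial population in [0, 1)."""
    vals = np.empty(n * dim)
    x = x0
    for i in range(vals.size):
        x = step(x)
        vals[i] = x
    return vals.reshape(n, dim)

pop = chaotic_population(50, 10, step=circle_map)   # chaos-initialized population
```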

  19. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints on improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding finite-amplitude white Gaussian noise, the EEMD can preserve signal continuity across scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner–Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)

  20. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    Science.gov (United States)

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. The problem directly affects the operation of the power network, and because of the high volatility of the signal an accurate prediction model is demanded. A new prediction model based on the Hilbert Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of the cubic spline curve (CSC) fitting in EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
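
    A much-simplified sketch of the decompose-then-predict idea (standard EMD from the third-party PyEMD package plus scikit-learn's SVR, without the paper's improved interpolation or feature selection), on a synthetic PV-like series:

```python
import numpy as np
from PyEMD import EMD                  # assumption: pip install EMD-signal
from sklearn.svm import SVR

def lagged(series, p=24):
    """Build lagged-feature matrix X and one-step-ahead targets y."""
    X = np.column_stack([series[i:series.size - p + i] for i in range(p)])
    return X, series[p:]

rng = np.random.default_rng(1)
hours = np.arange(1000)
pv = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) + 0.05 * rng.standard_normal(hours.size)

forecast = 0.0
for imf in EMD().emd(pv):                          # predict each component, then recombine
    X, y = lagged(imf)
    model = SVR(C=10.0, epsilon=0.01).fit(X, y)
    forecast += model.predict(imf[-24:][None, :])[0]   # next-hour forecast of this component
```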

  1. Decomposition of Polarimetric SAR Images Based on Second- and Third-Order Statistics Analysis

    Science.gov (United States)

    Kojima, S.; Hensley, S.

    2012-12-01

    There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested under the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added a helix component to Freeman's model and developed a 4-component scattering model for the non-reflection-symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that can estimate both the mean orientation angle and a degree of randomness of the canopy scattering for each pixel in a SAR image without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the volume scattering model. In addition, we evaluate this method using both simulation and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship equation between the backscattered echo and each component (surface, dihedral, volume and helix) via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component (HH, VV and VH) for the volume. As a result, the equation for the helix component in this method is the same formula as in Yamaguchi's method. However, the equation for the volume

  2. Chatter identification in milling of Inconel 625 based on recurrence plot technique and Hilbert vibration decomposition

    Directory of Open Access Journals (Sweden)

    Lajmert Paweł

    2018-01-01

    Full Text Available In this paper the cutting stability in the milling of the nickel-based alloy Inconel 625 is analysed. This problem is often considered theoretically, but the theoretical findings do not always agree with experimental results. For this reason, the paper presents different methods for identifying instability during the real machining process. A stability lobe diagram is created based on data obtained in an impact test of an end mill. Next, cutting tests were conducted in which the axial depth of cut was gradually increased in order to find the stability limit. Finally, based on the cutting force measurements, the stability estimation problem is investigated using the recurrence plot technique and the Hilbert vibration decomposition method.
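
    A recurrence plot is straightforward to compute from a measured force signal. A minimal sketch, with an illustrative time-delay embedding and threshold (not the study's parameters):

```python
import numpy as np

def recurrence_matrix(x, eps, dim=3, tau=1):
    """Binary recurrence plot of a time-delay embedded signal:
    R[i, j] = 1 when embedded states i and j are within eps of each other."""
    n = x.size - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d <= eps).astype(np.uint8)

force = np.sin(np.linspace(0, 40 * np.pi, 1000))   # stand-in cutting-force signal
R = recurrence_matrix(force, eps=0.2)
# chatter shows up as a change in the density/texture of recurrence points,
# e.g. tracked via the recurrence rate:
recurrence_rate = R.mean()
```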

  3. On the Dual-Decomposition-Based Resource and Power Allocation with Sleeping Strategy for Heterogeneous Networks

    KAUST Repository

    Alsharoa, Ahmad M.

    2015-05-01

    In this paper, the problem of radio and power resource management in long term evolution heterogeneous networks (LTE HetNets) is investigated. The goal is to minimize the total power consumption of the network while satisfying the user quality of service determined by each target data rate. We study a model where one macrocell base station is placed at the cell center, and multiple small cell base stations and femtocell access points are distributed around it. The dual decomposition technique is adopted to jointly optimize the power and carrier allocation in the downlink, in addition to the selection of small cell base stations to be turned off. Our numerical results investigate the performance of the proposed scheme versus different system parameters and show an important saving in terms of total power consumption. © 2015 IEEE.
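
    The dual-decomposition pattern itself is easy to sketch on a toy problem: local cost minimization per base station, coupled only through a shared budget, with a subgradient price update. The quadratic costs and constants below are illustrative, not the paper's LTE model.

```python
import numpy as np

# n base stations choose transmit powers x_i to minimize local quadratic costs
# f_i(x_i) = a_i * (x_i - d_i)^2 subject to a shared budget sum(x_i) <= C.
rng = np.random.default_rng(2)
n, C = 5, 10.0
a, d = rng.uniform(1, 3, n), rng.uniform(2, 5, n)

lam, alpha = 0.0, 0.05
for _ in range(500):
    # each station solves min f_i(x_i) + lam * x_i locally (closed form, x_i >= 0)
    x = np.maximum(d - lam / (2 * a), 0.0)
    # master problem: projected subgradient ascent on the dual (price) variable
    lam = max(lam + alpha * (x.sum() - C), 0.0)
```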

  4. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    Science.gov (United States)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed using marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.

  5. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water/bone separation problem. To overcome these problems, statistical reconstruction approaches based on non-linear forward models that account for beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Within the Bayesian inference framework, the decomposition fractions and the observation variance are estimated jointly using Maximum A Posteriori (MAP) estimation. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, which transforms the joint MAP estimation into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  6. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  7. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as in the real-time setting, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains. From the experimental results, it is seen that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
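
    A minimal sketch of the pipeline described above, assuming the PyWavelets and scikit-learn packages, with random arrays standing in for face images:

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def wavelet_feature(img):
    """Keep the low-frequency approximation of a single-level 2D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    return cA.ravel()

# stand-in data: replace with real 64x64 grayscale face images and labels
rng = np.random.default_rng(3)
faces = rng.random((40, 64, 64))
labels = np.repeat(np.arange(10), 4)

X = np.array([wavelet_feature(f) for f in faces])
Z = PCA(n_components=20).fit_transform(X)      # eigenface-style projection
clf = KNeighborsClassifier(n_neighbors=1).fit(Z, labels)
```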

  8. Empirical Research on China’s Carbon Productivity Decomposition Model Based on Multi-Dimensional Factors

    Directory of Open Access Journals (Sweden)

    Jianchang Lu

    2015-04-01

    Full Text Available Based on the international community’s analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China’s carbon productivity contributions are analyzed along the dimensions of influencing factors, regional structure and industrial structure. The conclusions are that: (a) economic output, provincial carbon productivity and energy structure are the most influential factors, which is consistent with China’s current policy; (b) the distribution patterns of economic output, carbon productivity and energy structure across regions do not match the traditional Chinese view of regional economic development patterns; (c) given regional protectionism, the actual situation of each region needs to be considered as well; (d) within the industrial structure, the contribution of industry is the most prominent factor for China’s carbon productivity, while industrial restructuring has not yet been carried out well enough.
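
    The LMDI building block is a single formula: the contribution of a factor x to the change in a multiplicative aggregate V is the logarithmic mean of the endpoint values times the log ratio of the factor. A sketch with illustrative numbers:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) used as the LMDI weight."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_effect(V0, VT, x0, xT):
    """Additive contribution of factor x to the change VT - V0, for an
    aggregate that multiplies into factors (Kaya-style identity)."""
    return logmean(VT, V0) * np.log(xT / x0)

# toy two-factor example: carbon productivity split as (GDP/energy) * (energy/CO2)
V0, VT = 1.8, 2.2
print(lmdi_effect(V0, VT, x0=6.0, xT=7.0))   # effect of the energy-intensity factor
```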

  9. A demodulating approach based on local mean decomposition and its applications in mechanical fault diagnosis

    International Nuclear Information System (INIS)

    Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang

    2011-01-01

    Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical meaning has become a key issue. Local mean decomposition (LMD) is a new kind of time–frequency analysis approach which can decompose a signal adaptively into a set of product function (PF) components. In this paper, a modulation feature extraction method based on LMD is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time–frequency representation (TFR). Modulation features can be extracted from the spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and the extrema-based IF processing method are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise has been added. As a result, recommended critical SNRs for PF decomposition and IF extraction are given for practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capability for modulation signal processing and is very suitable for failure detection in rotating machinery.

  10. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is at the core of wireless sensor networks. It can be solved with powerful beacons, which are equipped with global positioning system devices that provide their location information. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to address the fact that the measurement matrix does not meet the restricted isometry property. Next, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the recovered vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and lower requirements for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.

  11. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A Wiener filter with a new point spread function (PSF) is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was best for CT images with large noise. PMID:24023764
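
    Of the three low-rank models compared, GoDec is the simplest to sketch: alternate a rank-constrained fit with a cardinality-constrained sparse residual. A basic version (without the bilateral random projections of the full algorithm), on synthetic data:

```python
import numpy as np

def godec(X, rank=2, card=500, iters=50):
    """Basic GoDec: alternately fit a rank-constrained L and a
    cardinality-constrained sparse S so that X ~ L + S."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # best rank-r approximation
        R = X - L
        S = np.zeros_like(X)
        idx = np.argsort(np.abs(R), axis=None)[-card:]   # keep the largest residual entries
        S.flat[idx] = R.flat[idx]
    return L, S

# stack each frame of a CT (or video) sequence as one row of X
rng = np.random.default_rng(4)
X = np.outer(np.ones(30), rng.random(1024))              # static background (rank 1)
X[10, 100:110] += 5.0                                     # a sparse "structure"
L, S = godec(X, rank=1, card=10)
```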

  12. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm obtains the perceptual features via wavelet packet decomposition of each speech component. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through quantification of the feature matrix using the median. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist the attack of common background noise. Also, the algorithm is computationally highly efficient and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.

  13. Regional income inequality model based on Theil index decomposition and weighted variance coefficient

    Science.gov (United States)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region, since rapid economic development may not be in accordance with people’s per capita income. Many experts have suggested methods for measuring regional income inequality. This research used the Theil index and the weighted variation coefficient to measure it. Based on the Theil index, regional income inequality can be decomposed into labour productivity and labour participation components in a linear relation. Using the economic assumptions for sector j, the sectoral income values, and the labour force rates, the labour productivity imbalance can be decomposed into between-sector and within-sector components. Next, the weighted variation coefficient is defined in terms of the revenue and productivity of the labour force. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analyzed by finding out how much each component contributes to the regional imbalance, which, in this research, was analyzed for nine sectors of economic activity.
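
    The Theil index and its between/within (inter- and intra-group) decomposition can be sketched directly; the regional income vectors below are illustrative:

```python
import numpy as np

def theil(y):
    """Theil T index of a positive income vector."""
    r = y / y.mean()
    return np.mean(r * np.log(r))

def theil_between_within(groups):
    """Decompose overall inequality into between-group and within-group parts."""
    y = np.concatenate(groups)
    mu, N = y.mean(), y.size
    shares = [g.size * g.mean() / (N * mu) for g in groups]   # income shares
    within = sum(s * theil(g) for s, g in zip(shares, groups))
    between = sum(s * np.log(g.mean() / mu) for s, g in zip(shares, groups))
    return between, within            # theil(y) == between + within

regions = [np.array([2.0, 3.0, 4.0]), np.array([8.0, 10.0, 12.0])]
b, w = theil_between_within(regions)
```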

  14. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  15. WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.

    Science.gov (United States)

    Debnath, Avijit; Bhattacharjee, Nairita

    2018-05-01

    Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempted to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The study was based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months. Data were from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identified unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.

  16. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, the original images are fed through a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information by the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  17. Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection

    Science.gov (United States)

    Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing

    2016-04-01

    Observation of human motions through a wall is an important issue in security applications and search and rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of employing ultranarrow pulses. It is able to distinguish closely positioned targets and provide time-lapse information about targets. Moreover, UWB radar shows good performance in wall penetration since the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features, including respiration, the swing of arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements like walking. The radar gains reflections from each part of the human body and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools to analyse and extract the micro-Doppler effects caused by periodic movements in the reflected radar signal, such as the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. Because it bypasses the Hilbert transform, LMD has no demodulation error arising from window effects and involves no physically meaningless negative frequencies. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD) because LMD uses smoothed local

  18. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
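
    Following the paper's use case, here is a sketch of an SVD-based pseudoinverse deconvolution of mixture spectra, with random matrices standing in for the three indicator spectra and the titration data:

```python
import numpy as np

# columns of A: pure spectra of the three pH indicators (stand-in random data);
# columns of Y: mixture spectra recorded along the titration
rng = np.random.default_rng(5)
A = np.abs(rng.standard_normal((100, 3)))
C_true = np.abs(rng.standard_normal((3, 20)))
Y = A @ C_true + 0.01 * rng.standard_normal((100, 20))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.where(s > s.max() * 1e-10, 1.0 / s, 0.0)   # guard against tiny singular values
A_pinv = (Vt.T * s_inv) @ U.T                         # Moore-Penrose pseudoinverse
C_est = A_pinv @ Y                                    # deconvoluted concentration profiles
```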

  19. Decomposition of atmospheric water content into cluster contributions based on theoretical association equilibrium constants

    International Nuclear Information System (INIS)

    Slanina, Z.

    1987-01-01

    Water vapor is treated as an equilibrium mixture of water clusters (H2O)i, using a quantum-chemical evaluation of the equilibrium constants of water association. The model is adapted to the conditions of atmospheric humidity, and a decomposition algorithm is suggested that uses the temperature and mass concentration of water as input; it is demonstrated by evaluating the water oligomer populations in the Earth's atmosphere. An upper limit on the populations is set based on the water content of saturated aqueous vapor. It is proved that the cluster populations in saturated water vapor, as well as in the Earth's atmosphere for a typical temperature/humidity profile, increase with increasing temperature.

  20. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    Science.gov (United States)

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

    In 3D shape measurement, deformed fringes often contain low-frequency information degraded by random noise and background intensity; a new fringe-projection profilometry is therefore proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed for the Hilbert transformation that retrieves the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of the fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.

  1. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    Science.gov (United States)

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was used to develop an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum-duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
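
    The energy-threshold-plus-minimum-duration decision rule is easy to sketch once the IMFs have been computed; the sampling rate, threshold and duration below are illustrative, not the study's settings:

```python
import numpy as np

def detect(imf, fs, energy_thresh, min_dur_s=2.0):
    """Flag 1-s windows whose IMF energy exceeds a threshold for at least
    min_dur_s consecutive seconds (energy + minimum-duration rule)."""
    win = int(fs)
    n = imf.size // win
    energy = np.array([np.sum(imf[i * win:(i + 1) * win] ** 2) for i in range(n)])
    hot = (energy > energy_thresh).astype(int)
    need = int(min_dur_s)
    runs = np.convolve(hot, np.ones(need, dtype=int), mode='valid') == need
    return np.flatnonzero(runs)        # start times (in seconds) of detections

eeg_imf = np.random.default_rng(6).standard_normal(256 * 90)   # stand-in IMF, fs = 256 Hz
onsets = detect(eeg_imf, fs=256, energy_thresh=300.0)
```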

  2. Synthesis and thermal decomposition kinetics of Th(IV) complex with unsymmetrical Schiff base ligand

    International Nuclear Information System (INIS)

    Fan Yuhua; Bi Caifeng; Liu Siquan; Yang Lirong; Liu Feng; Ai Xiaokang

    2006-01-01

    A new unsymmetrical Schiff base ligand (H2LLi) was synthesized using L-lysine, o-vanillin and salicylaldehyde. The thorium(IV) complex of this ligand, [Th(H2L)(NO3)](NO3)2·3H2O, has been prepared and characterized by elemental analysis, IR, UV and molar conductance. The thermal decomposition kinetics of the complex for the second stage was studied under non-isothermal conditions by TG and DTG methods. The kinetic equation may be expressed as dα/dt = A·e^(−E/RT)·(1/2)(1−α)·[−ln(1−α)]^(−1). The kinetic parameters (E, A), the activation entropy ΔS≠ and the activation free energy ΔG≠ were also calculated. (author)

  3. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement of wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect

  4. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background and dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft-thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized the method and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise-versus-bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast-versus-noise tradeoff. The proposed algorithm was also applied to an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization

    NARCIS (Netherlands)

    Simonetto, A.; Jamali-Rad, H.

    2015-01-01

    Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel

  6. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  7. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    Science.gov (United States)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, the image to be inspected is projected onto its first left and right singular vectors respectively. If there are defects in the image, there will be sharp changes in the projections, so defects may be determined and located according to sharp changes in the projections of each inspected image. This method is simple and practical, but the SVD has to be performed for each inspected image. Owing to the high time complexity of SVD itself, it does not have a significant advantage in time consumption over image-segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image acquired under the same environment as the images to be inspected is taken as the reference image. The singular vectors of each inspected image are replaced by the singular vectors of the reference image, and SVD is performed only once, off-line, for the reference image before detection of the defects, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
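
    A sketch of the improved scheme: compute the singular vectors of a defect-free reference once offline, then project each inspected image onto them and look for sharp changes in the resulting profiles. Synthetic images and a MAD-based flagging rule are used here for illustration.

```python
import numpy as np

# offline: SVD of a defect-free reference image, computed once
rng = np.random.default_rng(7)
reference = rng.random((128, 128)) * 0.05 + 1.0
U, s, Vt = np.linalg.svd(reference)
u1, v1 = U[:, 0], Vt[0]                 # first left/right singular vectors

# online: project each inspected image onto the reference singular vectors
test = reference.copy()
test[60:64, :] += 0.8                   # synthetic horizontal scratch
row_profile = test @ v1                 # sharp change near rows 60-64 flags the defect
col_profile = test.T @ u1               # column profile, analogous

dev = np.abs(row_profile - np.median(row_profile))
mad = np.median(dev)                    # robust spread estimate
defect_rows = np.flatnonzero(dev > 6 * mad)
```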

  8. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    Science.gov (United States)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique of non-radiant, non-intrusive, rapid response, and low cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm has low spatial resolution due to the natural ‘soft field’ effect and the ‘ill-posed solution’ problem; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed in which each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of the measurement. Consequently, the number of measurements is extended to twice the number of original measurements, effectively reducing the ‘ill-posed solution’ problem. In addition, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and the improvement in spatial resolution.
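
    The LBP reconstruction itself is a single normalized back projection. A sketch with a random stand-in for the sensitivity matrix (the paper's data decomposition would double the rows of S and m before this step):

```python
import numpy as np

def linear_back_projection(S, m):
    """S: (n_measurements, n_pixels) sensitivity matrix,
    m: (n_measurements,) normalized boundary measurements.
    Grey level of each pixel = weighted sum of measurements, renormalized."""
    g = S.T @ m
    return g / (S.T @ np.ones(m.size))

rng = np.random.default_rng(8)
S = np.abs(rng.standard_normal((66, 812)))   # e.g. 66 electrode pairs, 812 pixels (illustrative)
m = np.abs(rng.standard_normal(66))
image = linear_back_projection(S, m)
```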

  10. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    Science.gov (United States)

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
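
    For reference, the baseline detector named above is straightforward to implement. A minimal sketch of the classical matched filter, assuming a flattened hyperspectral cube X of shape (pixels, bands) and a known target spectrum t; the ridge regularization constant is an assumption for numerical stability:

      import numpy as np

      def matched_filter_scores(X, t):
          mu = X.mean(axis=0)
          Xc = X - mu
          # Regularized background covariance (ridge term is an assumption).
          Sigma = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
          w = np.linalg.solve(Sigma, t - mu)
          w /= np.sqrt((t - mu) @ w)      # unit response to the target spectrum
          return Xc @ w                   # threshold high scores as plume pixels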

  11. Decomposition performance of animals as an indicator of stress acting on beech-forest ecosystems - microcosm experiments with carbon-14-labelled litter components

    International Nuclear Information System (INIS)

    Schaefer, M.; Wolters, V.

    1988-01-01

    The effect of acid rain and heavy metals on the biotic interactions in the soil of beech forests with mull, moder, and limed moder humus was investigated with the aid of close-to-nature microcosm systems. The parameters used were the decomposition of carbon-14-labelled litter components and the turnover of C, N, and P by the microflora. As the results show, increased proton input bears on nearly every stage of the decomposition process in mull soils. As a result, there may be litter accumulation on the ground and first signs of humus disintegration in the mineral soil of mull soils. A direct relation between the acidity of the environment and the extent of decomposition inhibition does not exist. Despite wide-ranging impairment of edaphic animals, the activity of the ground fauna is still to be considered the most important buffer system of base-rich soils. Acidification of the beech forest soils with the humus form 'moder' led to drastic inhibition of litter decomposition, to a change in the effect of edaphic animals, and to an increase in N mineralization. The grazing animals frequently aggravate the decomposition inhibition resulting from acid precipitation. The comparison of the decomposition process in a moder soil with that in a mull soil showed acidic soils to be on a lower biological buffer level than base-rich soils. The main buffer capacity of acidic soils lies in the microflora, which is adapted to sudden increases in acidity and recovers quickly. In the opinion of the authors, simple liming is not enough to increase the long-term biogenic stability of a forest ecosystem. A stabilizing effect of the fauna, for instance on nitrogen storage, is possible only if forest care measures are carried out, for instance careful loosening of the mineral soil, which will attract earthworm species that penetrate deeply into the soil. (orig./MG) With 12 refs., 6 figs [de]

  12. Ambiguity attacks on robust blind image watermarking scheme based on redundant discrete wavelet transform and singular value decomposition

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2017-12-01

    Full Text Available Among the emergent applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm which show that it fails when used for robustness applications such as owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform
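
    The root of such ambiguity attacks is easiest to see in the classical SVD-domain embedding that RDWT-SVD schemes build on. The sketch below is a generic illustration, not the exact scheme of Makbol and Khoo: because extraction relies on the side-information matrices saved at embedding time, anyone who presents their own pair of matrices can 'extract' a watermark of their choosing.

      import numpy as np

      def svd_embed(A, W, alpha=0.05):
          # Classical SVD embedding: perturb the singular values of host block A.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          Uw, sw, Vwt = np.linalg.svd(np.diag(s) + alpha * W)
          Aw = U @ np.diag(sw) @ Vt             # watermarked block
          return Aw, (Uw, Vwt)                  # (Uw, Vwt) kept as the 'key'

      def svd_extract(Aw, key, s_orig, alpha=0.05):
          Uw, Vwt = key
          sw = np.linalg.svd(Aw, compute_uv=False)
          D = Uw @ np.diag(sw) @ Vwt            # key matrices carry W's structure,
          return (D - np.diag(s_orig)) / alpha  # so forged keys yield forged marks

    Since most of the watermark's structure lives in the key matrices rather than in the watermarked image itself, ownership claims based on this style of extraction are inherently ambiguous.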

  13. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  14. Structural investigation of oxovanadium(IV) Schiff base complexes: X-ray crystallography, electrochemistry and kinetics of thermal decomposition

    Czech Academy of Sciences Publication Activity Database

    Asadi, M.; Asadi, Z.; Savaripoor, N.; Dušek, Michal; Eigner, Václav; Shorkaei, M.R.; Sedaghat, M.

    2015-01-01

    Roč. 136, Feb (2015), 625-634 ISSN 1386-1425 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords: Oxovanadium(IV) complexes * Schiff base * Kinetics of thermal decomposition * Electrochemistry Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.653, year: 2015

  15. Capturing alternative secondary structures of RNA by decomposition of base-pairing probabilities.

    Science.gov (United States)

    Hagio, Taichi; Sakuraba, Shun; Iwakiri, Junichi; Mori, Ryota; Asai, Kiyoshi

    2018-02-19

    It is known that functional RNAs often switch their functions by forming different secondary structures. Popular tools for RNA secondary structure prediction, however, predict a single 'best' structure and do not produce alternative structures. There are bioinformatics tools to predict suboptimal structures, but it is difficult to detect which alternative secondary structures are essential. We propose a new computational method to detect essential alternative secondary structures from RNA sequences by decomposing the base-pairing probability matrix. The decomposition is calculated by a newly implemented software tool, RintW, which efficiently computes the base-pairing probability distributions over the Hamming distance from arbitrary reference secondary structures. The proposed approach has been demonstrated on the ROSE element RNA thermometer sequence and the lysine riboswitch, showing that the proposed approach captures conformational changes in secondary structures. We have shown that alternative secondary structures are captured by decomposing base-pairing probabilities over the Hamming distance. Source code is available from http://www.ncRNA.org/RintW .

  16. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    Science.gov (United States)

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variations in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  17. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance suffer from this nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on the cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via the ECG and blood pressure (BP). The experiment simulated a sequence of continuous changes in blood pressure, from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. The hypothesis was that the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on the cardiac oscillation. The two assessment results demonstrate the merits of the EEMD for signal analysis.

  18. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  19. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
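
    The SVD compression idea, independent of the specific planning constraints, can be sketched briefly. Assuming a hypothetical influence matrix D (dose points × beams) that is nearly rank-deficient, the optimization can be carried out in the leading singular subspace and the beam weights recovered afterwards; the least-squares stage below is a placeholder, not the authors' full LP model:

      import numpy as np

      def compress_influence(D, energy=0.999):
          # Keep the smallest rank capturing the requested singular energy.
          U, s, Vt = np.linalg.svd(D, full_matrices=False)
          k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
          return U[:, :k] * s[:k], Vt[:k]        # D ~= (U_k S_k) @ Vt_k

      def solve_compressed(D, d_target):
          Dk, Vtk = compress_influence(D)        # Dk = U_k S_k, (points, k)
          # Placeholder for the LP stage: plain least squares in the k-dim space.
          z, *_ = np.linalg.lstsq(Dk, d_target, rcond=None)
          x = Vtk.T @ z                          # back-project to beam weights
          return np.clip(x, 0.0, None)           # nonnegativity as a crude proxy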

  20. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.

    2014-01-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must

  1. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    Science.gov (United States)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty introduced by the arbitrary manner of selecting the effective singular values weakens the robustness of this technique. Improper selection of the effective singular values leads to poor SVD de-noising performance. What is more, the computational complexity of SVD is too large for real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to localize the feature information of a transient flaw echo, and then the MSI is obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulations and experiments show that this technique is very efficient for real-time flaw detection from noisy data.
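
    A compact sketch of such an indicator follows. One plausible reading, assumed here, is that each overlapping segment is embedded in a small Hankel matrix whose largest singular value serves as the indicator; the window, hop, and embedding sizes are illustrative:

      import numpy as np
      from scipy.linalg import hankel

      def msi(signal, win=64, hop=8, embed=16):
          scores = []
          for start in range(0, len(signal) - win + 1, hop):
              seg = signal[start:start + win]
              H = hankel(seg[:embed], seg[embed - 1:])   # embed x (win-embed+1)
              scores.append(np.linalg.svd(H, compute_uv=False)[0])
          return np.asarray(scores)   # peaks mark candidate flaw echo locations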

  2. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for a twelve-bit representation of wavelet coefficients with no noticeable degradation in video quality.
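
    The bit-plane decomposition and the complexity test at the heart of BPCS are simple to state in code. A minimal sketch, using the conventional complexity measure (fraction of 0/1 border transitions) and an illustrative threshold of 0.3 for 'noise-like':

      import numpy as np

      def bit_planes(img_u8):
          # Eight binary planes of an 8-bit image, least significant bit first.
          return [(img_u8 >> k) & 1 for k in range(8)]

      def complexity(block):
          # 0/1 transitions along rows and columns, normalized by the maximum.
          h, w = block.shape
          changes = np.abs(np.diff(block, axis=0)).sum() \
                  + np.abs(np.diff(block, axis=1)).sum()
          return changes / ((h - 1) * w + h * (w - 1))

      # Blocks with complexity(block) above ~0.3 are treated as noise-like and
      # may be replaced with (conjugated) secret-data blocks.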

  3. A new approach for crude oil price analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shou-Yang; Lai, K.K.

    2008-01-01

    The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most of them fail to produce consistently good results. Empirical mode decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent and concretely interpretable intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement on EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or some other market activities, the effect of a shock of a significant event, and a long-term trend. Finally, EEMD is shown to be a vital technique for crude oil price analysis. (author)

  4. Research and Application of a Hybrid Forecasting Model Based on Data Decomposition for Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yuqi Dong

    2016-12-01

    Full Text Available Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people's livelihood through providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize the model parameters for forecasting the short-term electrical load, few models are built based on the characteristics of the time series, which has a great impact on forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition that considers the periodicity, trend and randomness of the original electrical load time series data. After preprocessing and analyzing the original time series, a generalized regression neural network optimized by a genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model not only achieves a good fit, but also approximates the actual values well when dealing with non-linear time series data with periodicity, trend and randomness.

  5. Multicrack Localization in Rotors Based on Proper Orthogonal Decomposition Using Fractal Dimension and Gapped Smoothing Method

    Directory of Open Access Journals (Sweden)

    Zhiwen Lu

    2016-01-01

    Full Text Available Multicrack localization in operating rotor systems is still a challenge today. Focusing on this challenge, a new approach based on proper orthogonal decomposition (POD) is proposed for multicrack localization in rotors. A two-disc rotor-bearing system with breathing cracks is established by the finite element method, and simulated sensors are distributed along the rotor to obtain the steady-state transverse responses required by POD. Based on the discontinuities introduced in the proper orthogonal modes (POMs) at the locations of cracks, the characteristic POM (CPOM), which is sensitive to crack locations and robust to noise, is selected for crack localization. Instead of using the CPOM directly, which has difficulty localizing incipient cracks, damage indexes using fractal dimension (FD) and the gapped smoothing method (GSM) are adopted in order to extract the locations more efficiently. The method proposed in this work is validated to be effective for multicrack localization in rotors by numerical experiments on rotors in different crack configuration cases, considering the effects of noise. In addition, the feasibility of using fewer sensors is also investigated.

  6. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    Science.gov (United States)

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
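
    The decomposition at the core of MOEA/D is compact enough to show directly. A minimal sketch of the widely used Tchebycheff scalarization, which turns the MOP into one single-objective subproblem per weight vector (the replacement rule in the comment is the standard MOEA/D one, not specific to the improved variant above):

      import numpy as np

      def tchebycheff(f, lam, z_star):
          # f: objective vector of a solution; lam: weight vector of the
          # subproblem; z_star: ideal point (best value seen per objective).
          return np.max(lam * np.abs(f - z_star))

      # Standard MOEA/D selection: a child x replaces a neighbour's incumbent
      # if tchebycheff(F(x), lam_neighbour, z_star) is lower than the
      # incumbent's score on that same subproblem.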

  7. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    International Nuclear Information System (INIS)

    Han, G.; Lin, B.; Xu, Z.

    2017-01-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the empirical mode decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising but not yet perfect method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  8. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    Science.gov (United States)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the empirical mode decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising but not yet perfect method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.
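
    To make the EMD mechanics concrete, a deliberately simplified sifting sketch follows (NumPy array input assumed), with a naive denoiser that discards the first, noise-dominated IMF. Real implementations use proper stopping criteria and boundary handling; the fixed iteration counts here are simplifying assumptions:

      import numpy as np
      from scipy.signal import find_peaks
      from scipy.interpolate import CubicSpline

      def sift(x, n_sift=10):
          # Extract one IMF by repeatedly removing the mean envelope.
          h, t = x.copy(), np.arange(len(x))
          for _ in range(n_sift):
              pk, _ = find_peaks(h)
              tr, _ = find_peaks(-h)
              if len(pk) < 4 or len(tr) < 4:     # too few extrema to continue
                  break
              mean_env = (CubicSpline(pk, h[pk])(t) + CubicSpline(tr, h[tr])(t)) / 2
              h = h - mean_env
          return h

      def emd_denoise(x, n_imfs=4):
          res, imfs = x.copy(), []
          for _ in range(n_imfs):
              imf = sift(res)
              imfs.append(imf)
              res = res - imf
          return sum(imfs[1:]) + res             # drop IMF 1 (high-frequency noise)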

  9. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R. [Pacific Northwest National Lab., Richland, WA (United States); Kim, B.C.; Gavaskar, A.R. [Battelle Columbus Div., OH (United States)

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  10. Hierarchical prediction of industrial water demand based on refined Laspeyres decomposition analysis.

    Science.gov (United States)

    Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang

    2017-12-01

    A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonic and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025 respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
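
    The refined Laspeyres decomposition itself is short to write down. A sketch under the common identity W = Σᵢ Y·sᵢ·uᵢ (total output Y, sectoral share sᵢ, water-use intensity uᵢ), with interaction terms shared equally among the contributing factors so that the three effects sum exactly to the total change; the paper's exact refinement may differ:

      import numpy as np

      def refined_laspeyres(Y0, Y1, s0, s1, u0, u1):
          # s*, u* are per-sector arrays; Y* are scalars. Returns the output,
          # structure and technology effects of dW = W1 - W0.
          dY, ds, du = Y1 - Y0, s1 - s0, u1 - u0
          out  = (dY * s0 * u0 + dY * (ds * u0 + s0 * du) / 2 + dY * ds * du / 3).sum()
          stru = (Y0 * ds * u0 + ds * (dY * u0 + Y0 * du) / 2 + dY * ds * du / 3).sum()
          tech = (Y0 * s0 * du + du * (dY * s0 + Y0 * ds) / 2 + dY * ds * du / 3).sum()
          return out, stru, tech   # out + stru + tech == (Y1*s1*u1 - Y0*s0*u0).sum()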

  11. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)

  12. Environmental life-cycle comparisons of two polychlorinated biphenyl remediation technologies: Incineration and base catalyzed decomposition

    International Nuclear Information System (INIS)

    Hu Xintao; Zhu Jianxin; Ding Qiong

    2011-01-01

    Highlights: → We study the environmental impacts of two remediation technologies, Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD). → Combined midpoint/damage approaches were calculated for both technologies. → The results showed that the major environmental impacts arose from energy consumption. → BCD has a lower environmental impact than IHTI in terms of single score. - Abstract: Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. The dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In the view of midpoint impacts, the characterization values for global warming from IHTI and

  13. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current work in FC has focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions, which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs, which provides more accurate change-point detection and state summarization.
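
    The change-point score in such approaches reduces to a distance between dominant subspaces at consecutive time points. A minimal matrix-flavoured sketch via principal angles (the actual method operates on low-rank tensor approximations; C here is a hypothetical time-indexed stack of connectivity matrices):

      import numpy as np

      def subspace_distance(A, B, r=3):
          # Compare the rank-r dominant column subspaces of A and B.
          Ua = np.linalg.svd(A)[0][:, :r]
          Ub = np.linalg.svd(B)[0][:, :r]
          cosines = np.clip(np.linalg.svd(Ua.T @ Ub, compute_uv=False), 0.0, 1.0)
          return np.sqrt(np.sum(1.0 - cosines ** 2))   # projection-metric distance

      # scores = [subspace_distance(C[t], C[t + 1]) for t in range(T - 1)]
      # spikes in scores indicate candidate change points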

  14. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. Also, a nonparametric hypothesis test, based on bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, using the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting, designed to analyze the performance, guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Thermal decomposition of dimethoxymethane and dimethyl carbonate catalyzed by solid acids and bases

    International Nuclear Information System (INIS)

    Fu Yuchuan; Zhu Haiyan; Shen Jianyi

    2005-01-01

    The thermal decomposition of dimethoxymethane (DMM) and dimethyl carbonate (DMC) on MgO, H-ZSM-5, SiO2, γ-Al2O3 and ZnO was studied using a fixed-bed isothermal reactor equipped with an online gas chromatograph. It was found that DMM was stable on MgO at temperatures up to 623 K, while it was decomposed over the acidic H-ZSM-5 with 99% conversion at 423 K. On the other hand, DMC was easily decomposed on both the strong solid base and the strong solid acid. The conversion of DMC was 76% on MgO at 473 K, and 98% on H-ZSM-5 at 423 K. It was decomposed even more easily on the amphoteric γ-Al2O3. Both DMM and DMC were relatively stable on SiO2, which possesses little surface acidity or basicity. They were even more stable on ZnO, with conversions of DMM and DMC of about 1.5% at 573 K. Thus, metal oxides with either strong acidity or basicity are not suitable for the selective oxidation of DMM to DMC, while ZnO may be used as a component for the reaction.

  16. THE STUDY OF SPECTRUM RECONSTRUCTION BASED ON FUZZY SET FULL CONSTRAINT AND MULTIENDMEMBER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2017-09-01

    Full Text Available Hyperspectral imaging systems can obtain spectral and spatial information simultaneously, with bandwidths down to the level of 10 nm or even less. Therefore, hyperspectral remote sensing has the ability to detect certain kinds of objects that cannot be detected in wide-band remote sensing, making it one of the hottest topics in remote sensing. In this study, under fully constrained fuzzy-set conditions, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. This study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and have practical applications, which makes spectral feature identification possible. This method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.
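
    Fully constrained decomposition of a pixel spectrum against endmembers is the workhorse behind such methods. A minimal sketch of the standard fully constrained least-squares (FCLS) approximation, using the usual trick of appending a heavily weighted sum-to-one row to a nonnegative least-squares solve; E and x are hypothetical inputs, not the paper's data:

      import numpy as np
      from scipy.optimize import nnls

      def fcls(E, x, delta=1e3):
          # E: endmember matrix (bands x endmembers); x: pixel spectrum (bands,).
          E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
          x_aug = np.append(x, delta)
          abundances, _ = nnls(E_aug, x_aug)   # a >= 0 and sum(a) ~= 1
          return abundances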

  17. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework

    International Nuclear Information System (INIS)

    Jiao, A S M; Tsang, P W M; Lam, Y K; Poon, T-C; Liu, J-P; Lee, C-C

    2014-01-01

    Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than retaining the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this process can be realized by applying image segmentation algorithms to decompose an image into its constituent entities. However, because a hologram is different from an optical image, classic image segmentation methods cannot be applied to it directly, as each pixel in the hologram carries holistic, rather than local, information about the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, through segmentation of the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is reverted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram that is captured through the optical scanning holography (OSH) technique. (paper)

  18. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. Based on this, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future value. The major steps are as follows: Firstly, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.

  19. Singular value decomposition based feature extraction technique for physiological signal analysis.

    Science.gov (United States)

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is adopted to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.
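
    A compact sketch of this kind of pipeline: each 1-D signal is embedded in a trajectory (Hankel) matrix, its leading singular values are kept as a feature vector, and an SVM is trained on those features. The Hankel embedding and parameter choices are illustrative assumptions, not the paper's exact settings:

      import numpy as np
      from scipy.linalg import hankel
      from sklearn.svm import SVC

      def svd_features(signal, embed=20, n_sv=10):
          H = hankel(signal[:embed], signal[embed - 1:])   # trajectory matrix
          s = np.linalg.svd(H, compute_uv=False)[:n_sv]
          return s / s.sum()                # normalized singular spectrum

      # X = np.array([svd_features(sig) for sig in signals]); y = labels
      # clf = SVC(kernel='rbf').fit(X, y)   # then clf.predict(...) on new signals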

  20. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)

    2017-01-01

    Wide-area measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of multivariate empirical mode decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of the high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.

  1. Spatial and Inter-temporal Sources of Poverty, Inequality and Gender Disparities in Cameroon: a Regression-Based Decomposition Analysis

    OpenAIRE

    Boniface Ngah Epo; Francis Menjo Baye; Nadine Teme Angele Manga

    2011-01-01

    This study applies the regression-based inequality decomposition technique to explain poverty and inequality trends in Cameroon. We also identify gender-related factors which explain income disparities and discrimination, based on the 2001 and 2007 Cameroon household consumption surveys. The results show that education, health, employment in the formal sector, age cohorts, household size, gender, ownership of farmland and urban versus rural residence explain household economic wellbeing; dispa...

  2. Decomposition mechanism of trichloroethylene based on by-product distribution in the hybrid barrier discharge plasma process

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang-Bo [Industry Applications Research Laboratory, Korea Electrotechnology Research Institute, Changwon, Kyeongnam (Korea, Republic of); Oda, Tetsuji [Department of Electrical Engineering, The University of Tokyo, Tokyo 113-8656 (Japan)

    2007-05-15

    The hybrid barrier discharge plasma process combined with ozone decomposition catalysts was studied experimentally for decomposing dilute trichloroethylene (TCE). Based on a fundamental experiment on catalytic activity for ozone decomposition, MnO2 was selected for the main experiments for its higher catalytic ability compared with other metal oxides. The lower the initial TCE concentration in the working gas, the larger the ozone concentration generated by the barrier discharge plasma treatment. Near-complete decomposition of dichloro-acetylchloride (DCAC) into Cl2 and COx was observed for initial TCE concentrations of less than 250 ppm. Cleavage of the C=C π bond in TCE gave the carbon single bond of DCAC through an oxidation reaction during the barrier discharge plasma treatment. The DCAC was easily broken down in the subsequent catalytic reaction. When the oxygen concentration in the working gas was varied, oxygen radicals in the plasma space reacted strongly with the precursors of DCAC compared with those of trichloro-acetaldehyde. A chlorine radical chain reaction is considered a plausible decomposition mechanism in the barrier discharge plasma treatment. The potential energy of oxygen radicals at the surface of the catalyst is considered an important factor in driving the reactive chemical processes.

  3. Design and cost of the sulfuric acid decomposition reactor for the sulfur based hydrogen processes - HTR2008-58009

    International Nuclear Information System (INIS)

    Hu, T. Y.; Connolly, S. M.; Lahoda, E. J.; Kriel, W.

    2008-01-01

    The key interface component between the reactor and chemical systems for the sulfuric acid based processes to make hydrogen is the sulfuric acid decomposition reactor. The materials issues for the decomposition reactor are severe since sulfuric acid must be heated, vaporized and decomposed. SiC has been identified and proven by others to be an acceptable material. However, SiC has a significant design issue when it must be interfaced with metals for connection to the remainder of the process. Westinghouse has developed a design utilizing SiC for the high temperature portions of the reactor that are in contact with the sulfuric acid and polymeric coated steel for low temperature portions. This design is expected to have a reasonable cost for an operating lifetime of 20 years. It can be readily maintained in the field, and is transportable by truck (maximum OD is 4.5 meters). This paper summarizes the detailed engineering design of the Westinghouse Decomposition Reactor and the decomposition reactor's capital cost. (authors)

  4. Performance-Based Rewards and Work Stress

    Science.gov (United States)

    Ganster, Daniel C.; Kiersch, Christa E.; Marsh, Rachel E.; Bowen, Angela

    2011-01-01

    Even though reward systems play a central role in the management of organizations, their impact on stress and the well-being of workers is not well understood. We review the literature linking performance-based reward systems to various indicators of employee stress and well-being. Well-controlled experiments in field settings suggest that certain…

  5. Thermal Decomposition Characteristics of Orthorhombic Ammonium Perchlorate (o-AP) and an o-AP/HTPB-Based Propellant

    International Nuclear Information System (INIS)

    BEHRENS JR., RICHARD; MINIER, LEANNA M.G.

    1999-01-01

    A study to characterize the low-temperature reactive processes for o-AP and an AP/HTPB-based propellant (class 1.3) is being conducted in the laboratory using the techniques of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and scanning electron microscopy (SEM). The results presented in this paper are a follow-up to previous work that showed the overall decomposition to be complex and controlled by both physical and chemical processes. The decomposition is characterized by the occurrence of one major event that consumes up to ~35% of the AP, depending upon particle size, and leaves behind a porous agglomerate of AP. The major gaseous products released during this event include H2O, O2, Cl2, N2O and HCl. The recent efforts provide further insight into the decomposition processes for o-AP. The temporal behaviors of the gas formation rates (GFRs) for the products indicate that the major decomposition event consists of three chemical channels. The first and third channels are affected by the pressure in the reaction cell and occur at the surface or in the gas phase above the surface of the AP particles. The second channel is not affected by pressure and accounts for the solid-phase reactions characteristic of o-AP. The third channel involves the interactions of the decomposition products with the surface of the AP. SEM images of partially decomposed o-AP provide insight into how the morphology changes as the decomposition progresses. A conceptual model has been developed, based upon the STMBMS and SEM results, that provides a basic description of the processes. The thermal decomposition characteristics of the propellant are evaluated from the identities of the products and the temporal behaviors of their GFRs. First, the volatile components in the propellant evolve from the propellant as it is heated. Second, the hot AP (and HClO4) at the AP-binder interface oxidizes the binder through reactions that

  6. An ill-conditioning conformal radiotherapy analysis based on singular values decomposition

    International Nuclear Information System (INIS)

    Lefkopoulos, D.; Grandjean, P.; Bendada, S.; Dominique, C.; Platoni, K.; Schlienger, M.

    1995-01-01

    Clinical experience in the stereotactic radiotherapy of irregular complex lesions has shown that optimization algorithms are necessary to improve the dose distribution. We have developed a general optimization procedure which can be applied to different conformal irradiation techniques. In this presentation the procedure is tested on the stereotactic radiotherapy of complex cerebral lesions treated with a multi-isocentric technique based on the 'associated targets methodology'. In this inverse procedure we use singular value decomposition (SVD) analysis, which proposes several optimal solutions for the narrow-beam weights of each isocentre. The SVD analysis quantifies the ill-conditioning of the dosimetric calculation of the stereotactic irradiation using the condition number, which is the ratio of the largest to the smallest singular value. Our dose distribution optimization approach consists of studying the influence of the irradiation parameters on the stereotactic radiotherapy inverse problem. The adjustment of the different irradiation parameters in the 'SVD optimizer' procedure is realized by taking into account the trade-off between reconstruction quality and computation time. This will permit a more efficient use of the 'SVD optimizer' in clinical applications for real 3D lesions. The evaluation criteria for the choice of satisfactory solutions are based on dose-volume histograms and clinical considerations. We present the efficiency of the 'SVD optimizer' in analyzing and predicting the ill-conditioning in stereotactic radiotherapy and in recognizing the topography of the different beams, in order to create an optimal reconstructed weighting vector. The planning of stereotactic treatments using the 'SVD optimizer' is examined for mono-isocentrically and complex dual-isocentrically treated lesions. The application of the SVD optimization technique provides conformal dose distributions for complex intracranial lesions. It is a general optimization procedure
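
    The SVD diagnostics described here amount to a few lines of linear algebra. A minimal sketch, assuming a hypothetical dose matrix D (dose points × beams) and prescription d; the truncation threshold is an illustrative choice for regularizing the ill-conditioned inversion:

      import numpy as np

      def svd_condition_and_weights(D, d, rel_tol=1e-3):
          U, s, Vt = np.linalg.svd(D, full_matrices=False)
          cond = s[0] / s[-1]              # large value => ill-conditioned problem
          k = int(np.sum(s > rel_tol * s[0]))
          # Truncated-SVD pseudo-inverse: discard the noise-amplifying modes.
          w = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
          return cond, w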

  7. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2015-09-01

Full Text Available In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for the CTLR and DCP modes are established. The explicit expression of the decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, under the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate a free parameter suitable for decomposition. The feasibility of this algorithm is validated using AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland.

  8. Optimization of dual-energy CT acquisitions for proton therapy using projection-based decomposition.

    Science.gov (United States)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Ducros, Nicolas; Rit, Simon

    2017-09-01

    Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. Photon noise in the SPR images (20 mGy dose

  9. Newton-Raphson based modified Laplace Adomian decomposition method for solving quadratic Riccati differential equations

    Directory of Open Access Journals (Sweden)

    Mishra Vinod

    2016-01-01

Full Text Available The numerical Laplace transform method is applied to approximate the solution of nonlinear (quadratic) Riccati differential equations, combined with the Adomian decomposition method. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomial via the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are exhibited as an infinite series. The simplicity and efficacy of the method are demonstrated with some examples in which comparisons are made among the exact solutions, ADM (Adomian decomposition method), HPM (homotopy perturbation method), the Taylor series method and the proposed scheme.
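For readers unfamiliar with the Adomian machinery, the following is a minimal sketch of classic ADM applied to the standard test problem y' = 1 + 2y - y^2, y(0) = 0, using sympy; the paper's Newton-Raphson/Laplace modification is not reproduced here, and the number of series terms is an arbitrary choice.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

N_TERMS = 6
y = [sp.integrate(sp.Integer(1), (t, 0, t))]  # y_0 = t, from the forcing term

for n in range(N_TERMS - 1):
    # Adomian polynomial A_n for the nonlinearity N(y) = y**2:
    # A_n = (1/n!) d^n/dlam^n [ (sum_k lam**k y_k)**2 ] at lam = 0
    series = sum(lam**k * y[k] for k in range(len(y)))
    A_n = sp.diff(series**2, lam, n).subs(lam, 0) / sp.factorial(n)
    # Recursion: y_{n+1} = integral_0^t (2*y_n - A_n) dt
    y.append(sp.integrate(2*y[n] - A_n, (t, 0, t)))

approx = sp.expand(sum(y))
print(approx)                       # truncated series solution
# Exact solution: y = 1 + sqrt(2)*tanh(sqrt(2)*t - atanh(1/sqrt(2)))
print(approx.subs(t, 0.5).evalf())
```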

  10. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first arrange the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed, guided by the estimated speech rank, to remove the noise. Extensive experiments were carried out under white Gaussian noise conditions, and the results show that the proposed method performs better than conventional speech enhancement methods in terms of yielding less residual noise and lower speech distortion.
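A generic low-rank plus sparse split can stand in for the subspace method described above. The following is a hedged sketch: the alternating truncated-SVD/soft-threshold scheme, the threshold value, and the toy matrix are simple illustrative choices, not the paper's algorithm.

```python
import numpy as np

def lowrank_plus_sparse(M, rank, sparse_thresh, n_iter=50):
    """Alternate a truncated SVD (low-rank part) with soft-thresholding
    (sparse part) so that M ~= L + S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: best rank-r approximation of the residual M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: soft-threshold the residual M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
    return L, S

# Toy usage on a noisy rank-1 matrix standing in for the Toeplitz data
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(32))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
L, S = lowrank_plus_sparse(noisy, rank=1, sparse_thresh=0.2)
```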

  11. The shape of change in perceived stress, negative affect, and stress sensitivity during mindfulness based stress reduction

    NARCIS (Netherlands)

    Snippe, E.; Dziak, J.J.; Lanza, S.T.; Nyklicek, I.; Wichers, M.

    2017-01-01

    Both daily stress and the tendency to react to stress with heightened levels of negative affect (i.e., stress sensitivity) are important vulnerability factors for adverse mental health outcomes. Mindfulness-based stress reduction (MBSR) may help to reduce perceived daily stress and stress

  12. The Shape of Change in Perceived Stress, Negative Affect, and Stress Sensitivity During Mindfulness-Based Stress Reduction

    NARCIS (Netherlands)

Snippe, Evelien; Dziak, John J.; Lanza, Stephanie T.; Nyklíček, Ivan; Wichers, Marieke

    Both daily stress and the tendency to react to stress with heightened levels of negative affect (i.e., stress sensitivity) are important vulnerability factors for adverse mental health outcomes. Mindfulness-based stress reduction (MBSR) may help to reduce perceived daily stress and stress

  13. Dynamic relationships between microbial biomass, respiration, inorganic nutrients and enzyme activities: informing enzyme based decomposition models

    Directory of Open Access Journals (Sweden)

    Daryl L Moorhead

    2013-08-01

Full Text Available We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C-acquiring (length) versus N- and P-acquiring (angle) enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45˚). Reductions in vector angles to < 45˚ for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.

  14. Environmental life-cycle comparisons of two polychlorinated biphenyl remediation technologies: incineration and base catalyzed decomposition.

    Science.gov (United States)

    Hu, Xintao; Zhu, Jianxin; Ding, Qiong

    2011-07-15

Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. The dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO2-eq per ton of PCB-containing soil, respectively. The LCA results showed that the single score of the BCD environmental impact was 1468.97 Pt while IHTI's was 2785.15 Pt, which indicates that BCD potentially has a lower environmental impact than IHTI in PCB-contaminated soil remediation. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using STL, a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS). The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
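The STL split itself is available in statsmodels. A minimal sketch, assuming a daily series and a 365-day period; the synthetic pollen series below is a placeholder for the authors' Poaceae data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic daily "pollen" series: a seasonal bump plus gamma noise.
dates = pd.date_range("2006-01-01", "2013-12-31", freq="D")
rng = np.random.default_rng(1)
pollen = pd.Series(
    np.maximum(0, 50 * np.sin(2 * np.pi * dates.dayofyear / 365) ** 2
               + rng.gamma(2.0, 5.0, len(dates))),
    index=dates,
)

res = STL(pollen, period=365, robust=True).fit()
seasonal, resid = res.seasonal, res.resid  # residuals would then feed PLSR
```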

  16. An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data

    Directory of Open Access Journals (Sweden)

    Dingfeng Duan

    2017-10-01

Full Text Available In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounce scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Due to this scattering ambiguity, urban targets can be misclassified into the vegetation category if the usual classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and thereby reduce the misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. An existing volumetric scattering model was then modified, and a new PolSAR decomposition algorithm developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm proved effective in delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets, from urban areas to upland and flooded forest stands.

  17. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic images of the femur are used to analyze its architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of femur bone radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations in femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  18. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    Science.gov (United States)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are often intricate, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes, and second uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
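The squared envelope spectrum named above is straightforward to compute with SciPy. This hedged sketch omits the paper's amplitude-level separation step; the toy amplitude-modulated tone merely mimics a bearing-fault signature.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """Return cyclic frequencies and the spectrum of the squared envelope."""
    analytic = hilbert(x)                    # analytic signal
    env2 = np.abs(analytic) ** 2             # squared envelope
    env2 -= env2.mean()                      # drop the DC component
    spec = np.abs(np.fft.rfft(env2)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spec

# Toy usage: a 3 kHz carrier modulated at 57 Hz (a fault-like cyclic rate)
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * 57 * t)) * np.sin(2 * np.pi * 3_000 * t)
freqs, spec = squared_envelope_spectrum(x, fs)  # peak expected near 57 Hz
```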

  19. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method also outperformed CCA based on filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
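The baseline CCA step that MEMD-CCA builds on can be sketched with scikit-learn. This is a hedged illustration: the channel count, harmonic count, and 12 Hz toy target are assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Max canonical correlation between EEG and sin/cos reference signals."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Toy data: 8 channels carrying a 12 Hz SSVEP buried in noise
fs, n = 250, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(9)
eeg = (np.outer(np.sin(2 * np.pi * 12 * t), np.ones(8))
       + 0.8 * rng.standard_normal((n, 8)))
scores = {f: cca_score(eeg, f, fs) for f in (8.0, 10.0, 12.0, 15.0)}
print(max(scores, key=scores.get))   # expected: 12.0
```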

  20. A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique

    Directory of Open Access Journals (Sweden)

    Bo-Ao Xu

    2016-01-01

Full Text Available This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain-decomposition finite-difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and to speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of computation. With this approach, iteration is not essential for obtaining accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.

  1. SEBAL-based Daily Actual Evapotranspiration Forecasting using Wavelets Decomposition Analysis and Multivariate Relevance Vector Machines

    Science.gov (United States)

    Torres, A. F.

    2011-12-01

Agricultural lands are sources of food and energy for populations around the globe. These lands are vulnerable to the impacts of climate change, including variations in rainfall regimes, weather patterns, and decreased availability of water for irrigation. In addition, it is not unusual for irrigated agriculture to be forced to divert less water in order to make it available for other uses, e.g. human consumption. As part of the implementation of better policies for water control and management, irrigation companies and water user associations have in recent decades implemented water conveyance and distribution monitoring systems along with soil moisture sensor networks. These systems allow them to manage and distribute water among users based on their requirements and water availability, while collecting information about actual soil moisture conditions in representative crop fields. In spite of this, the water deliveries requested by farmers/water users are typically based on total water share, tradition, and past irrigation experience, which in most cases do not correspond to the actual crop evapotranspiration, already affected by climate change. Therefore it is necessary to provide actual information about crop water requirements to water users/managers, so that they can better quantify the required vs. available water for irrigation events along the irrigation season. To estimate actual evapotranspiration over a spatial extent, the Surface Energy Balance Algorithm for Land (SEBAL) has demonstrated its effectiveness using satellite or airborne data. Nonetheless, the estimation is restricted to the day on which the geospatial information was obtained. Without precise information on future daily crop water demand, there is a continuing challenge to the implementation of better water distribution and management policies in the irrigation system. The purpose of this study is to investigate the plausibility of using

  2. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent adoption of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many sample decomposition techniques, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-element suites in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence and nuclear methods. © 1992.

  3. Factors Affecting Regional Per-Capita Carbon Emissions in China Based on an LMDI Factor Decomposition Model

    Science.gov (United States)

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used a two-step method combining an LMDI factor decomposition model with panel co-integration tests to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while in 2009 the ranking changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and energy efficiency had a much greater restraining effect on the growth of per-capita carbon emissions than energy structure. (3) The panel co-integration tests on the decomposed factors showed that Central China ranked first for both energy structure elasticity and energy efficiency elasticity of regional per-capita carbon emissions, while Western China ranked first for economic development elasticity. PMID:24353753
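The additive LMDI machinery referred to above reduces to log-mean Divisia weights. A minimal sketch for a Kaya-style identity C = P * (G/P) * (E/G) * (C/E), with illustrative numbers rather than the paper's data:

```python
import numpy as np

def logmean(a, b):
    """Log-mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

def lmdi_effects(factors_0, factors_T):
    """factors_*: dict of factor name -> value; the product equals total C."""
    C0 = np.prod(list(factors_0.values()))
    CT = np.prod(list(factors_T.values()))
    w = logmean(CT, C0)  # log-mean Divisia weight
    return {k: w * np.log(factors_T[k] / factors_0[k]) for k in factors_0}

base = {"population": 1.20, "gdp_pc": 8.0, "energy_int": 1.5, "carbon_int": 2.0}
final = {"population": 1.25, "gdp_pc": 14.0, "energy_int": 1.2, "carbon_int": 1.9}
effects = lmdi_effects(base, final)   # additive contributions to the change in C
# The effects sum exactly to CT - C0 (the defining property of LMDI-I):
assert abs(sum(effects.values()) -
           (np.prod(list(final.values())) - np.prod(list(base.values())))) < 1e-9
```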

  4. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru; Hong, Fan; Peterka, Tom

    2018-01-01

Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, subject to the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are redistributed among processes as evenly as possible, while the newly assigned particles for a process always lie within its block. Results show good load balance and high efficiency of our method.
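The plain (unconstrained) k-d idea behind the method can be sketched as a recursive median split along the widest axis, so that every process receives an almost equal share; the paper's constraint of cutting planes to ghost-layer overlap regions is omitted in this sketch.

```python
import numpy as np

def kd_partition(points, n_parts):
    """Return a list of index arrays, one per process."""
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        parts.sort(key=len)                                  # split largest part
        big = parts.pop()
        axis = int(np.argmax(np.ptp(points[big], axis=0)))   # widest extent
        order = big[np.argsort(points[big][:, axis])]
        mid = len(order) // 2
        parts += [order[:mid], order[mid:]]
    return parts

rng = np.random.default_rng(2)
pts = rng.random((10_000, 3))
blocks = kd_partition(pts, 8)   # eight nearly equal-sized particle sets
```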

  5. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    International Nuclear Information System (INIS)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-01-01

This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bar with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to sustain the repetitive lifting of a 1 kg mass.

  6. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    Science.gov (United States)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bar with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to sustain the repetitive lifting of a 1 kg mass.

  7. Single interval longwave radiation scheme based on the net exchanged rate decomposition with bracketing

    Czech Academy of Sciences Publication Activity Database

    Geleyn, J.- F.; Mašek, Jan; Brožková, Radmila; Kuma, P.; Degrauwe, D.; Hello, G.; Pristov, N.

    2017-01-01

    Roč. 143, č. 704 (2017), s. 1313-1335 ISSN 0035-9009 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:86652079 Keywords : numerical weather prediction * climate models * clouds * parameterization * atmospheres * formulation * absorption * scattering * accurate * database * longwave radiative transfer * broadband approach * idealized optical paths * net exchanged rate decomposition * bracketing * selective intermittency Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.444, year: 2016

  8. Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition

    Directory of Open Access Journals (Sweden)

Cécile Germain-Renaud

    1999-01-01

Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.

  9. Probabilistic inference with noisy-threshold models based on a CP tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří; Tichavský, Petr

    2014-01-01

    Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf

  10. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    OpenAIRE

    Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi

    2017-01-01

Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the proposed process, the same as EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.

  11. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-01

We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  12. Distance-Based Functional Diversity Measures and Their Decomposition: A Framework Based on Hill Numbers

    Science.gov (United States)

    Chiu, Chun-Huo; Chao, Anne

    2014-01-01

Hill numbers (or the “effective number of species”) are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify “the effective number of equally abundant and (functionally) equally distinct species” in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of

  13. Distance-based functional diversity measures and their decomposition: a framework based on Hill numbers.

    Directory of Open Access Journals (Sweden)

    Chun-Huo Chiu

Full Text Available Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional
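As a rough aid, the following sketch implements the functional Hill number in one published form, qD(Q) = [sum_ij (d_ij/Q)(p_i p_j)^q]^(1/(2(1-q))) with Q = sum_ij d_ij p_i p_j (Rao's quadratic entropy), and derives the mean and total functional diversity from it. Treat the exact expressions, including the q -> 1 limit, as assumptions to verify against the paper.

```python
import numpy as np

def functional_hill(p, d, q):
    """Functional Hill number of order q for abundances p and distances d."""
    p = np.asarray(p, float); d = np.asarray(d, float)
    Q = p @ d @ p                        # Rao's quadratic entropy
    pp = np.outer(p, p)
    if np.isclose(q, 1.0):               # assumed exponential-form limit
        mask = pp > 0
        return np.exp(-0.5 * np.sum(d[mask] / Q * pp[mask] * np.log(pp[mask])))
    return np.sum(d / Q * pp ** q) ** (1.0 / (2 * (1 - q)))

p = np.array([0.5, 0.3, 0.2])            # relative abundances
d = np.array([[0.0, 2.0, 4.0],
              [2.0, 0.0, 3.0],
              [4.0, 3.0, 0.0]])          # pairwise functional distances
qD = functional_hill(p, d, q=2)
Q = p @ d @ p
MD, FD = Q * qD, Q * qD * qD             # mean and total functional diversity
```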

  14. Effects of magnesium-based hydrogen storage materials on the thermal decomposition, burning rate, and explosive heat of ammonium perchlorate-based composite solid propellant.

    Science.gov (United States)

    Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu

    2018-01-15

MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structure and hydrogen storage properties were characterized by X-ray photoelectron spectroscopy and thermal analysis. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total heat released during decomposition; these compounds thus promote the thermal decomposition of the propellant. The burning rates of the propellant increased when the Mg-based hydrogen storage materials were used as promoters. The burning rates also increased when MgH2 was used instead of Al in the propellant, but the explosive heat was not increased, even though the combustion heat of MgH2 is higher than that of Al. A possible mechanism was thus proposed. Copyright © 2017. Published by Elsevier B.V.

  15. Gas Sensing Analysis of Ag-Decorated Graphene for Sulfur Hexafluoride Decomposition Products Based on the Density Functional Theory

    Directory of Open Access Journals (Sweden)

    Xiaoxing Zhang

    2016-11-01

Full Text Available Detection of the decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and sudden accidents can be avoided effectively by finding such early latent faults. Recently, functionalized graphene, a kind of gas-sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas-sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for the decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the adsorption of single (SO2F2, SOF2, SO2) and double (2SO2F2, 2SOF2, 2SO2) gas molecules on Ag-graphene, including the adsorption energy, net charge transfer, density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, with adsorption strength SO2F2 > SO2, while SOF2 adsorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene shows good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide for exploring Ag-graphene experimentally for monitoring the insulation status of SF6-insulated equipment based on detecting SF6 decomposition products.
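The two headline descriptors, adsorption energy and net charge transfer, are simple differences of DFT outputs. A hedged sketch with placeholder energies (eV) rather than computed values:

```python
# Placeholder totals stand in for DFT outputs; none are from the paper.
def adsorption_energy(e_complex, e_surface, e_molecule):
    """E_ads = E(surface+gas) - E(surface) - E(gas); negative = favorable."""
    return e_complex - e_surface - e_molecule

def net_charge_transfer(q_molecule_adsorbed, q_molecule_isolated=0.0):
    """Charge difference (e.g., Mulliken/Bader) of the molecule upon adsorption."""
    return q_molecule_adsorbed - q_molecule_isolated

print(adsorption_energy(-305.42, -290.10, -14.50))  # -0.82 eV, chemisorption-like
```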

  16. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. Firstly, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Secondly, because the potential causes have different occurrence frequencies and abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed through explanatory variables (the causes' occurrence frequencies) and parameters (the decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information at the cause, component, and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. It also provides a reliable way to evaluate uncertainty sources and reduce the uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the occurrence frequencies of the corresponding causes for each targeted system.

  17. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto-regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed to compare the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
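The hybrid idea can be sketched as: decompose the load series into IMFs, fit one SVR per IMF on lagged values, and sum the one-step forecasts. PyEMD is assumed for the EMD step (any implementation returning an IMF array would do), and the lag order and SVR settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVR
from PyEMD import EMD  # assumed EMD implementation

def emd_svr_forecast(series, lags=24):
    """One-step-ahead forecast: sum of per-IMF SVR predictions."""
    imfs = EMD().emd(series)                    # IMFs plus residue
    forecast = 0.0
    for imf in imfs:
        X = np.array([imf[i - lags:i] for i in range(lags, len(imf))])
        y = imf[lags:]
        model = SVR(C=10.0, epsilon=0.01).fit(X, y)
        forecast += model.predict(imf[-lags:].reshape(1, -1))[0]
    return forecast

# Toy load series: daily cycle plus noise
rng = np.random.default_rng(3)
t = np.arange(1000)
load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
print(emd_svr_forecast(load))
```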

  18. Analytical singular-value decomposition of three-dimensional, proximity-based SPECT systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Harrison H. [Arizona Univ., Tucson, AZ (United States). College of Optical Sciences; Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging; Holen, Roel van [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging

    2011-07-01

    An operator formalism is introduced for the description of SPECT imaging systems that use solid-angle effects rather than pinholes or collimators, as in recent work by Mitchell and Cherry. The object is treated as a 3D function, without discretization, and the data are 2D functions on the detectors. An analytic singular-value decomposition of the resulting integral operator is performed and used to compute the measurement and null components of the objects. The results of the theory are confirmed with a Landweber algorithm that does not require a system matrix. (orig.)
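The matrix-free Landweber iteration mentioned in the abstract is x_{k+1} = x_k + tau * A^T (b - A x_k), with the forward and adjoint projectors supplied as callables. A hedged sketch, with an explicit matrix standing in for the projector pair:

```python
import numpy as np

def landweber(A, AT, b, x0, tau, n_iter=200, nonneg=True):
    """Landweber iteration with A and A^T given as callables."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x + tau * AT(b - A(x))
        if nonneg:                       # activities are non-negative
            np.maximum(x, 0.0, out=x)
    return x

# Toy usage; tau must satisfy tau < 2 / ||A||^2 for convergence.
rng = np.random.default_rng(4)
M = rng.random((40, 60))
x_true = np.abs(rng.standard_normal(60))
b = M @ x_true
tau = 1.0 / np.linalg.norm(M, 2) ** 2
x_rec = landweber(lambda v: M @ v, lambda v: M.T @ v, b, np.zeros(60), tau)
```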

  19. Progressivity of personal income tax in Croatia: decomposition of tax base and rate effects

    Directory of Open Access Journals (Sweden)

    Ivica Urban

    2006-09-01

Full Text Available This paper presents progressivity breakdowns for the Croatian personal income tax (henceforth PIT) in 1997 and 2004. The decompositions reveal how the elements of the system – tax schedule, allowances, deductions and credits – contribute to the achievement of progressivity over the quantiles of the pre-tax income distribution. Through the use of 'single parameter' Gini indices, the social decision maker's (henceforth SDM) relatively more or less favorable inclination toward taxpayers in the lower tails of the pre-tax income distribution is accounted for. Simulations are undertaken to show how the introduction of a flat-rate system would affect progressivity.

  20. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    Directory of Open Access Journals (Sweden)

    Xiwen Qin

    2017-01-01

Full Text Available Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the proposed process, the same as EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.
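A hedged sketch of the EEMD -> energy-entropy -> Random Forest pipeline; PyEMD is assumed for the EEMD step, and the signal lengths, IMF count, and forest settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from PyEMD import EEMD  # assumed EEMD implementation

def energy_entropy_features(signal, n_imfs=6):
    """Per-IMF energy-entropy terms, padded to a fixed length."""
    imfs = EEMD(trials=20).eemd(signal)          # trial count kept low here
    energies = np.array([np.sum(imf**2) for imf in imfs[:n_imfs]])
    if len(energies) < n_imfs:
        energies = np.pad(energies, (0, n_imfs - len(energies)))
    p = energies / energies.sum()
    return -p * np.log(p + 1e-12)

# Toy two-class data: pure noise vs. noise plus a low-frequency fault tone
rng = np.random.default_rng(5)
healthy = [rng.normal(0, 1, 2048) for _ in range(20)]
faulty = [rng.normal(0, 1, 2048) + np.sin(2*np.pi*0.3*np.arange(2048))
          for _ in range(20)]
X = np.array([energy_entropy_features(s) for s in healthy + faulty])
y = np.array([0]*20 + [1]*20)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```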

  1. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)

    2014-06-15

Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences from the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in the 12 tissues which showed the largest differences between single energy CT (SECT) and DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations from ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would

  2. Analysis of temporal-longitudinal-latitudinal characteristics in the global ionosphere based on tensor rank-1 decomposition

    Science.gov (United States)

    Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi

    2018-03-01

    Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
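Rank-1 CP (PARAFAC) fitting of a three-mode tensor reduces to a higher-order power iteration. A hedged sketch with a random stand-in for the TEC tensor, axes ordered (time, longitude, latitude):

```python
import numpy as np

def rank1_cp(T, n_iter=100):
    """Return (lam, u1, u2, u3) with T ~= lam * u1 (outer) u2 (outer) u3."""
    u1 = np.ones(T.shape[0]); u2 = np.ones(T.shape[1]); u3 = np.ones(T.shape[2])
    for _ in range(n_iter):
        u1 = np.einsum('ijk,j,k->i', T, u2, u3); u1 /= np.linalg.norm(u1)
        u2 = np.einsum('ijk,i,k->j', T, u1, u3); u2 /= np.linalg.norm(u2)
        u3 = np.einsum('ijk,i,j->k', T, u1, u2); u3 /= np.linalg.norm(u3)
    lam = np.einsum('ijk,i,j,k->', T, u1, u2, u3)
    return lam, u1, u2, u3

rng = np.random.default_rng(6)
tec = rng.random((120, 73, 71))        # stand-in: (epochs, longitudes, latitudes)
lam, time_mode, lon_mode, lat_mode = rank1_cp(tec)
```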

  3. Harmonic analysis of traction power supply system based on wavelet decomposition

    Science.gov (United States)

    Dun, Xiaohong

    2018-05-01

With the rapid development of high-speed rail and heavy-haul transport, AC-drive electric locomotives and EMUs operate at large scale across the country, making the electrified railway the main harmonic source in China's power grid. This situation calls for timely monitoring, assessment, and mitigation of the power quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis, and it rests on a rigorous theoretical model. It inherits and develops the localization idea of the Gabor transform while overcoming drawbacks such as a fixed window and the lack of a discrete orthogonal basis, and has thus become a widely studied spectral analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part so as to resolve any detail of the signal under analysis, thereby enabling a comprehensive harmonic analysis of the traction power supply system, while the pyramid algorithm is used to speed up the wavelet decomposition. MATLAB simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
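A multilevel DWT already gives the pyramid-style band split described above. A hedged sketch with PyWavelets on a synthetic traction current carrying 3rd and 5th harmonics; the sampling rate and wavelet choice are assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed for the multilevel (pyramid) decomposition

# 50 Hz fundamental with 3rd (150 Hz) and 5th (250 Hz) harmonics
fs = 6_400
t = np.arange(0, 0.2, 1 / fs)
current = (np.sin(2*np.pi*50*t)
           + 0.20*np.sin(2*np.pi*150*t)
           + 0.12*np.sin(2*np.pi*250*t))

coeffs = pywt.wavedec(current, 'db4', level=5)   # [cA5, cD5, ..., cD1]
for name, c in zip(['cA5'] + [f'cD{k}' for k in range(5, 0, -1)], coeffs):
    print(name, float(np.sum(c**2)))             # energy per frequency band
```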

  4. Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm

    Science.gov (United States)

    Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam

    2017-04-01

The paper aims to present a path-planning algorithm for multiple quadrotors so that they move toward the goal quickly while avoiding obstacles in an obstacle-filled area. There are several problems in path planning, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm and a fuzzy cell decomposition algorithm. Fuzzy logic is one of the artificial intelligence algorithms that can be applied to robot path planning and is able to handle static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a map of robot paths. By using the two algorithms, the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because they are unable to find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm that assigns weight values to the map applied to each quadrotor under decentralized control, so that each quadrotor is able to move to the goal position quickly along the shortest path. The simulations conducted have shown that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.

  5. Image reconstruction of fluorescent molecular tomography based on the tree structured Schur complement decomposition

    Directory of Open Access Journals (Sweden)

    Wang Jiajun

    2010-05-01

Full Text Available Background: The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to improve the ill-posedness of the inverse problem. Methods: The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results: Simulation results demonstrate that the tree-structured Schur complement decomposition obviously outperforms the previous methods, such as the conventional Conjugate-Gradient (CG) and Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly improve the ill-posedness of the inverse problem. Conclusions: The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
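One Schur complement elimination step, the primitive that the tree-structured solver applies recursively, looks as follows. This is a dense toy version, whereas the paper works with large sparse systems and the biconjugate gradient method.

```python
import numpy as np

def schur_solve(A, B, C, D, b1, b2):
    """Solve [[A, B], [C, D]] [x1, x2] = [b1, b2] by eliminating x1."""
    Ainv_B = np.linalg.solve(A, B)
    Ainv_b1 = np.linalg.solve(A, b1)
    S = D - C @ Ainv_B                    # Schur complement of A
    x2 = np.linalg.solve(S, b2 - C @ Ainv_b1)
    x1 = Ainv_b1 - Ainv_B @ x2
    return x1, x2

# Toy usage on a diagonally dominant (well-posed) block system
rng = np.random.default_rng(7)
n1, n2 = 6, 4
M = rng.random((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
b = rng.random(n1 + n2)
x1, x2 = schur_solve(M[:n1, :n1], M[:n1, n1:], M[n1:, :n1], M[n1:, n1:],
                     b[:n1], b[n1:])
assert np.allclose(M @ np.concatenate([x1, x2]), b)
```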

  6. Automated polyp measurement based on colon structure decomposition for CT colonography

    Science.gov (United States)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Peng, Hao; Song, Bowen; Wei, Xinzhou; Liang, Zhengrong

    2014-03-01

Accurate assessment of colorectal polyp size is of great significance for the early diagnosis and management of colorectal cancers. Due to the complexity of colon structure, polyps with diverse geometric characteristics grow from different landform surfaces. In this paper, we present a new colon structure decomposition approach for polyp measurement. We first apply an efficient maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation algorithm to achieve effective electronic cleansing of the colon. The global colon structure is then decomposed into different morphological shapes, e.g. haustral folds or haustral wall. Meanwhile, the polyp location is identified by an automatic computer-aided detection algorithm. By integrating the colon structure decomposition with the computer-aided detection system, a patch volume around each colon polyp is extracted. Thus, polyp size assessment can be achieved by finding abnormal protrusions on a relatively uniform morphological surface in the decomposed colon landform. We evaluated our method on physical phantom and clinical datasets. Experimental results demonstrate the feasibility of our method in consistently quantifying polyp volume and, therefore, facilitating characterization for clinical management.

  7. Nickel Oxide (NiO) nanoparticles prepared by solid-state thermal decomposition of Nickel(II) Schiff base precursor

    Directory of Open Access Journals (Sweden)

    Aliakbar Dehno Khalaji

    2015-06-01

    In this paper, plate-like NiO nanoparticles were prepared by one-pot solid-state thermal decomposition of a nickel(II) Schiff base complex as a new precursor. First, the nickel(II) Schiff base precursor was prepared by solid-state grinding of nickel(II) nitrate hexahydrate, Ni(NO3)2·6H2O, and the Schiff base ligand N,N′-bis(salicylidene)benzene-1,4-diamine for 30 min without using any solvent, catalyst, template or surfactant. It was characterized by Fourier transform infrared spectroscopy (FT-IR) and elemental analysis (CHN). The resultant solid was subsequently annealed in an electrical furnace at 450 °C for 3 h in air. The NiO nanoparticles produced were characterized by X-ray powder diffraction (XRD) over the 2θ range 0-140°, FT-IR spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The XRD and FT-IR results showed that the product is pure and has good crystallinity with a cubic structure, since no characteristic impurity peaks were observed, while the SEM and TEM results showed that the product consists of small, aggregated plate-like particles with a narrow size distribution and an average size between 10 and 40 nm. The results show that solid-state thermal decomposition is a simple, environmentally friendly, safe and suitable method for the preparation of NiO nanoparticles, and it can also be used to synthesize nanoparticles of other metal oxides.

  8. Automatic screening of obstructive sleep apnea from the ECG based on empirical mode decomposition and wavelet analysis

    International Nuclear Information System (INIS)

    Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T

    2010-01-01

    This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep, based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as the heart rate variability (HRV) and the QRS peak area, contain information suitable for a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet were included in this analysis, subdivided into a training set and a testing set. We investigated the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with those obtained through the well-established wavelet analysis (WA). With these decomposition techniques, several features were extracted from the ECG signal and complemented with a series of standard HRV time-domain measures. The best-performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or non-apneic with different best-subset sizes, obtaining an accuracy of up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from standard HRV time-domain measures, EMD and WA together, in order to investigate whether the two decomposition techniques provide complementary features. The obtained accuracy was 89%, similar to that achieved using only WA as the feature extractor; however, some complementary features in EMD and WA are evident.
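
    A schematic version of the wavelet branch of such a screening pipeline (illustrative only: the toy data, the db4 wavelet and the log-variance features are assumptions, not the study's exact choices), assuming the PyWavelets and scikit-learn packages:

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(segment, wavelet="db4", level=3):
    """Log-variance of each wavelet subband of a one-minute signal segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([np.log(np.var(c) + 1e-12) for c in coeffs])

# Toy stand-in data: rows = one-minute segments, label 1 = apneic minute.
rng = np.random.default_rng(1)
segments = rng.normal(size=(200, 60))
segments[:100] += np.sin(np.linspace(0, 12 * np.pi, 60))   # crude "apneic" oscillation
labels = np.r_[np.ones(100), np.zeros(100)]

X = np.array([wavelet_features(s) for s in segments])
clf = LinearDiscriminantAnalysis().fit(X[::2], labels[::2])   # train on even minutes
print("accuracy:", clf.score(X[1::2], labels[1::2]))          # test on odd minutes
```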

  9. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations, yet most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas, using a general model-based decomposition (GMBD) for PolInSAR images, is proposed. This algorithm enables the retrieval not only of the forest parameters, but also of the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from the PolSARProSim software and with ALOS-PALSAR spaceborne PolInSAR datasets over Kalimantan, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  10. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE), and the results are compared and discussed. Then, on the basis of these analytic results, a method for choosing the decomposition level (DL) in wavelet threshold de-noising (WTD) is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white-noise testing, which depends on autocorrelations, the proposed method uses energy distributions to distinguish real signals from noise in noisy series; the chosen DL is therefore reliable, and the WTD results for time series can be improved.
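
    The WEE itself is simple to compute; a minimal sketch (assuming the PyWavelets package, with the paper's selection rule reduced to inspecting how the entropy evolves with the level) is:

```python
import numpy as np
import pywt

def wavelet_energy_entropy(x, wavelet="db4", level=3):
    """WEE = -sum p_j log p_j over the relative energies of the detail subbands."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail levels only
    p = energies / energies.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
noise = rng.normal(size=1024)
signal = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * noise
for dl in range(1, 6):   # compare how WEE grows with the decomposition level
    print(dl, round(wavelet_energy_entropy(noise, level=dl), 3),
          round(wavelet_energy_entropy(signal, level=dl), 3))
```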

  11. Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling

    DEFF Research Database (Denmark)

    Habib, Tufail

    2014-01-01

    Design structure matrix (DSM) modeling in complex system design supports defining the physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural understanding of the system. Since product architecture has broad implications for product life cycle issues, in this paper a mechatronic product is decomposed into subsystems and components, and a DSM model is developed to examine the extent of modularity in the system and to manage the multiple interactions across subsystems and components. For this purpose, the Cambridge Advanced Modeller (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering, partitioning, and structure analysis of the system.

  12. Model Reduction Based on Proper Generalized Decomposition for the Stochastic Steady Incompressible Navier--Stokes Equations

    KAUST Repository

    Tamellini, L.; Le Maître, O.; Nouy, A.

    2014-01-01

    In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.

  13. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    Science.gov (United States)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high-intensity focused ultrasound has produced promising clinical results for the treatment of hepatocarcinoma and other liver tumors. However, skin burns have been reported, due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT) method. The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
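
    The projection step behind this idea can be written compactly. The sketch below assumes a hypothetical measured transfer matrix H from the array elements to control points on the ribs, takes the rib-focusing subspace from the right singular vectors of H, and projects a nominal focusing vector onto its orthogonal complement:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_rib_points = 64, 12
H = (rng.normal(size=(n_rib_points, n_elements))
     + 1j * rng.normal(size=(n_rib_points, n_elements)))   # array-to-ribs transfer

# Right singular vectors of H with significant singular values span the
# emissions that deposit energy on the ribs.
_, s, Vh = np.linalg.svd(H, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))
V_rib = Vh[:k].conj().T                        # (n_elements, k) basis of that subspace

w_focus = np.exp(1j * rng.uniform(0, 2 * np.pi, n_elements))   # nominal focusing law
w_spare = w_focus - V_rib @ (V_rib.conj().T @ w_focus)         # remove rib component

print("energy on ribs before:", round(np.linalg.norm(H @ w_focus) ** 2, 2))
print("energy on ribs after :", round(np.linalg.norm(H @ w_spare) ** 2, 12))
```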

  14. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    Science.gov (United States)

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse-grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of the ultrasonic signals and the application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain sizes, with and without defects. The influence of probe frequency and signal data length on the EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of grain size in the 30-210 μm range investigated in this study. The methodology is successfully employed for the detection of defects in 50 mm thick coarse-grained austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat-bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared with the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves effective for adaptive signal reconstruction with improved signal-to-noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
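
    A stripped-down version of the EEMD step (using the third-party PyEMD package as one possible implementation; the toy echo, noise level and retained IMF range are assumptions, and the paper's minimisation algorithm is not reproduced):

```python
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
echo = np.exp(-((t - 0.6) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)  # toy echo
signal = echo + 0.8 * rng.normal(size=t.size)                          # grain noise

eemd = EEMD(trials=50, noise_width=0.2)   # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(signal, t)

# Keep only the IMFs whose spectra overlap the probe band (IMFs 1-3 here,
# an assumption standing in for a frequency-based selection).
reconstructed = imfs[1:4].sum(axis=0)
snr = 20 * np.log10(np.abs(reconstructed).max() / reconstructed[:1000].std())
print("rough SNR estimate (dB):", round(snr, 1))
```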

  15. An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods

    International Nuclear Information System (INIS)

    Zhang Hongbo; Wu Hongchun; Cao Liangzhi

    2011-01-01

    Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray-tracing system is developed. → Results show good efficiency even in large-scale and strong-scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate AutoMOC, a 2D generalized-geometry characteristics solver. In this technique, a linear algebraic equation system for the angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e., inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction on the geometry treatment, it is suitable for accelerating an arbitrary-geometry MOC solver. However, the speedup is observed to decrease as the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are therefore employed to overcome this problem. The calculation domain is partitioned into several sub-domains; for each of them, a smaller matrix is established and solved by GMRES, and adjacent sub-domains are coupled by 'inner edges', where the trajectory mismatches are treated adequately. Moreover, a matched ray-tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains conveniently on demand. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in large-scale and strong-scattering problems.
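
    The algebraic kernel of such an acceleration, replacing a fixed-point sweep x ← Mx + q by a Krylov solve of (I − M)x = q, can be sketched with SciPy; the sparse operator here is a random contraction standing in for a transport sweep:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n = 2000
M = sp.random(n, n, density=5 / n, random_state=0) * 0.1   # contraction, ||M|| << 1
q = rng.normal(size=n)

# Source iteration, the analogue of repeated characteristics sweeps: x <- M x + q.
x_it = np.zeros(n)
for _ in range(200):
    x_it = M @ x_it + q

# Krylov acceleration: solve (I - M) x = q directly with GMRES.
A = LinearOperator((n, n), matvec=lambda v: v - M @ v)
x_gm, info = gmres(A, q)
print("converged:", info == 0, "| max deviation from sweeps:", np.abs(x_gm - x_it).max())
```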

  16. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    Science.gov (United States)

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed for multichannel surface electromyogram (EMG) decomposition by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method. The KMC method is first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, is then conducted to update the estimated IPT. The performance of the proposed K-means clustering-modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscle recorded while a contraction force was held at 8 N. A "two-source" test was further conducted with the 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.

  17. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    Science.gov (United States)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves both the prediction accuracy and the hit rate of directional prediction. The model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; predicting each IMF individually with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another GWO-optimized SVR. Seven benchmark models, including single artificial intelligence (AI) models, decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in both prediction accuracy and hit rate of directional prediction.
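
    In outline (illustrative only: EEMD from the PyEMD package stands in for CEEMD, default SVR hyperparameters stand in for the GWO-tuned ones, and the final ensemble is a plain summation rather than the paper's second SVR), the decompose-predict-ensemble loop is:

```python
import numpy as np
from PyEMD import EEMD          # pip install EMD-signal
from sklearn.svm import SVR

def lagged(series, n_lags=6):
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
pm25 = 60 + 20 * np.sin(np.arange(400) / 15.0) + 8 * rng.normal(size=400)  # toy series

imfs = EEMD(trials=30).eemd(pm25)
forecast = 0.0
for imf in imfs:                            # one SVR per IMF, one step ahead
    X, y = lagged(imf)
    model = SVR().fit(X[:-1], y[:-1])       # hold out the final step
    forecast += model.predict(X[-1:])[0]    # ensemble by summation
print("one-step forecast:", round(forecast, 1), "| actual:", round(pm25[-1], 1))
```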

  18. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated jointly by maximum a posteriori (MAP) estimation. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, transforming the joint MAP estimation into a minimization problem with a nonquadratic cost function. To solve it, a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.

  19. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables as a sum of powers of linear forms and symmetric tensor decomposition; in non-generic cases, this characterization leads to the resolution of polynomial equations of small degree. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g., alternating least squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.

  20. A non-destructive surface burn detection method for ferrous metals based on acoustic emission and ensemble empirical mode decomposition: from laser simulation to grinding process

    International Nuclear Information System (INIS)

    Yang, Zhensheng; Wu, Haixi; Yu, Zhonghua; Huang, Youfang

    2014-01-01

    Grinding is usually performed in the final finishing of a component, so the surface quality of finished products, e.g., surface roughness, hardness and residual stress, is affected by the grinding procedure. However, the lack of in-process monitoring methods makes it difficult to control the quality of grinding. This paper focuses on monitoring approaches for the surface burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) is proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band from 150 to 400 kHz was found to be related to surface burn formation in the laser irradiation process. This burn-sensitive frequency band was then used to guide EEMD-based feature extraction during the grinding process. Linear classification results evidenced a distinct margin between samples with and without surface burn. This work provides a practical means for grinding burn detection. (paper)

  1. Multifractal features of EUA and CER futures markets by using multifractal detrended fluctuation analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Cao, Guangxi; Xu, Wei

    2016-01-01

    Based on daily price data for carbon emission rights in the futures markets of Certified Emission Reductions (CERs) and European Union Allowances (EUAs), we analyze the multiscale characteristics of these markets using empirical mode decomposition (EMD) and EMD-based multifractal detrended fluctuation analysis (MFDFA). The complexity of the daily returns of the CER and EUA futures markets changes with multiple time scales and exhibits multilayered features. The two markets also exhibit clear multifractal characteristics and long-range correlation. We employ shuffle and surrogate approaches to analyze the origins of the multifractality: the long-range correlations and fat-tail distributions contribute significantly to it. Furthermore, we analyze the influence of high returns on multifractality using a threshold method. The multifractality of the two futures markets is related to the presence of high values of returns in the price series.

  2. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications, such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for the subsequent processing of fringe patterns. However, noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, several advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing larger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual maps of orientation estimation are given for simulated and real fringe patterns, demonstrating that RSVD produces the best estimation results at a relatively low time cost.
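
    The backbone shared by gradient-field methods is easy to state: collect the local gradients in a window and take the dominant direction from an SVD; the fringe orientation is perpendicular to it. A plain, unregularized sketch:

```python
import numpy as np

def fringe_orientation(img, y, x, half=7):
    """Orientation at (y, x) from the SVD of gradients in a (2*half+1)^2 window."""
    gy, gx = np.gradient(img.astype(float))
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    G = np.column_stack([gx[win].ravel(), gy[win].ravel()])
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    gdir = Vt[0]                                 # dominant gradient direction
    return (np.arctan2(gdir[1], gdir[0]) + np.pi / 2) % np.pi  # fringes are normal

# Vertical fringes (intensity varies along x) -> orientation ~ pi/2.
yy, xx = np.mgrid[0:64, 0:64]
fringes = np.cos(2 * np.pi * xx / 8.0)
print(round(fringe_orientation(fringes, 32, 32), 3), round(np.pi / 2, 3))
```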

  3. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    Science.gov (United States)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern-matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. DFM scores are then computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to the metal-2 layer of two 20 nm products with a chip area of 11 mm². All the hotspots were resolved, and the number of DFM spacing violations decreased by 7-15%.

  4. Phase stability and decomposition processes in Ti-Al based intermetallics

    Energy Technology Data Exchange (ETDEWEB)

    Nakai, Kiyomichi [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ono, Toshiaki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohtsubo, Hiroyuki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohmori, Yasuya [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan)

    1995-02-28

    The high-temperature phase equilibria and the phase decomposition of the α and β phases were studied by crystallographic analysis of the solidification microstructures of Ti-48at.%Al and Ti-48at.%Al-2at.%X (X=Mn, Cr, Mo) alloys. The effects on phase stability of Zr and O atoms penetrating from the specimen surface were also examined for Ti-48at.%Al and Ti-50at.%Al alloys. The third elements Cr and Mo shift the β phase region to higher Al concentrations, and the β phase orders to the β₂ phase. The Zr and O atoms stabilize the β and α phases, respectively. In the Zr-stabilized β phase, α₂ laths form with accompanying surface relief, and stacking faults which relax the elastic strain due to lattice deformation are introduced after the formation of α₂ order domains. Shear is thus thought to operate after the phase transition from β to α₂ by short-range diffusion. A similar analysis was conducted for the Ti-Al binary system, and the transformation was interpreted from a qualitatively constructed CCT diagram.

  5. Stability monitoring for BWR based on singular value decomposition method using artificial neural network

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Shimazu, Yoichiro; Michishita, Hiroshi

    2005-01-01

    A new method for evaluating the decay ratio in a boiling water reactor (BWR) using the singular value decomposition (SVD) method has been proposed. In this method, a signal component closely related to BWR stability can be extracted from the independent components of the neutron noise signal decomposed by the SVD method. However, real-time stability monitoring by the SVD method requires an efficient procedure for screening such components. For efficient screening, an artificial neural network (ANN) with three layers was adopted. The trained ANN was applied to decomposed components of local power range monitor (LPRM) signals measured in stability experiments conducted in the Ringhals-1 BWR. In each LPRM signal, multiple candidates were screened from the decomposed components, and decay ratios could be estimated by introducing appropriate criteria for selecting the most suitable component among the candidates. The estimated decay ratios are almost identical to those evaluated by visual screening in a previous study. The selected components commonly have the largest singular value, the largest decay ratio and the smallest squared fitting error among the candidates. By virtue of the excellent screening performance of the trained ANN, real-time stability monitoring by the SVD method becomes practical. (author)

  6. Intuitive Density Functional Theory-Based Energy Decomposition Analysis for Protein-Ligand Interactions.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2017-04-11

    First-principles quantum mechanical calculations with methods such as density functional theory (DFT) allow the accurate calculation of interaction energies between molecules. These interaction energies can be dissected into chemically relevant components, such as electrostatics, polarization, and charge transfer, using energy decomposition analysis (EDA) approaches. Typically, EDA has been used to study interactions between small molecules; however, it has great potential for application to large biomolecular assemblies, such as protein-protein and protein-ligand interactions. We present an application of EDA calculations to the study of ligands that bind to the thrombin protein, using the ONETEP program for linear-scaling DFT calculations. Our approach goes beyond simply providing the components of the interaction energy: we are also able to provide visual representations of the changes in density that occur as a result of polarization and charge transfer, thus pinpointing the functional groups between the ligand and protein that participate in each kind of interaction. We also demonstrate that this approach can focus on studying parts (fragments) of ligands. The method is relatively insensitive to the protocol used to prepare the structures, and the results obtained are therefore robust. This is an application to a real protein drug target of a new capability whereby accurate DFT calculations produce both energetic and visual descriptors of interactions. These descriptors can be used to provide insights for tailoring interactions, as needed, for example, in drug design.

  7. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  8. A medium term bulk production cost model based on decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, A.; Munoz, L. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica; Martinez-Corcoles, F.; Martin-Corrochano, V. [IBERDROLA, Madrid (Spain)

    1995-11-01

    This model provides the minimum variable cost subject to operating constraints (generation, transmission and fuel constraints). Generation constraints include the power reserve margin with respect to the system peak load, Kirchhoff's first law at each node, hydro energy scheduling, maintenance scheduling, and generation limits. Transmission constraints cover Kirchhoff's second law and transmission limits. The generation and transmission economic dispatch is approximated by the linearized (also called DC) load flow, and network losses are included as a nonlinear approximation. Fuel constraints include minimum consumption quotas and fuel scheduling for domestic coal thermal plants. This production costing problem is formulated as a large-scale nonlinear optimization problem solved by the generalized Benders decomposition method. The master problem determines the inter-period decisions, i.e., maintenance, fuel and hydro scheduling, and each subproblem solves the intra-period decisions, i.e., the generation and transmission economic dispatch for one period. The model has been implemented in GAMS, a mathematical programming language. An application to the large-scale Spanish electric power system is presented. 11 refs
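
    A toy rendering of the decomposition logic (a generic single-subproblem Benders loop with complete recourse and optimality cuts only, far from the full multi-period model; the numeric instance is invented) using SciPy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# min  c.y + d.x   s.t.  A x >= b - B y,  x, y >= 0   (y: master decisions)
c, d = np.array([2.0]), np.array([1.0, 1.0])
A, B, b = np.eye(2), np.array([[1.0], [2.0]]), np.array([3.0, 4.0])

cuts = []                                     # each cut: theta >= u.b - (u.B) y
ub, lb, y = np.inf, -np.inf, np.zeros(1)
while ub - lb > 1e-6:
    # Subproblem dual: max u.(b - B y)  s.t.  A^T u <= d,  u >= 0.
    dual = linprog(-(b - B @ y), A_ub=A.T, b_ub=d, bounds=(0, None))
    u = dual.x
    ub = min(ub, c @ y - dual.fun)            # feasible cost: c.y + subproblem cost
    cuts.append((u @ B, u @ b))
    # Master over (y, theta): min c.y + theta  s.t.  theta + (u.B) y >= u.b.
    A_m = np.array([np.append(-uB, -1.0) for uB, _ in cuts])
    b_m = np.array([-rhs for _, rhs in cuts])
    master = linprog(np.append(c, 1.0), A_ub=A_m, b_ub=b_m, bounds=(0, None))
    y, lb = master.x[:-1], master.fun

print("optimal y:", y, "cost:", round(lb, 4))   # converges to y = 2, cost = 5
```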

  9. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates strongly, it is quite difficult to describe its characteristics or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatch planning during the operation of wind farms, is currently regarded as one of the most difficult problems to solve. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations comprising twelve samples, and our simulations indicate that the proposed methods perform much better than traditional ones on short-term wind speed forecasting problems.

  10. Phantom-less bone mineral density (BMD) measurement using dual energy computed tomography-based 3-material decomposition

    Science.gov (United States)

    Hofmann, Philipp; Sedlmair, Martin; Krauss, Bernhard; Wichmann, Julian L.; Bauer, Ralf W.; Flohr, Thomas G.; Mahnken, Andreas H.

    2016-03-01

    Osteoporosis is a degenerative bone disease usually diagnosed at the manifestation of fragility fractures, which severely endanger the health of especially the elderly. To ensure timely therapeutic countermeasures, non-invasive and widely applicable diagnostic methods are required. Currently, the primary quantifiable indicator of bone stability, bone mineral density (BMD), is obtained either by DEXA (dual-energy X-ray absorptiometry) or by qCT (quantitative CT); both have respective advantages and disadvantages, with DEXA considered the gold standard. For timely diagnosis of osteoporosis, another CT-based method is presented here. A dual-energy CT reconstruction workflow is developed to evaluate BMD from lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual-energy 3-material decomposition algorithm is used to differentiate bone from soft-tissue and fat attenuation. The algorithm uses the material attenuation coefficients at the different beam energy levels, and the bone fraction of the three tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without adding a calibration phantom or using special scan protocols or hardware. Accuracy and precision depend on image noise and are comparable to qCT. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow reveals bone degradation effects normally not visible on standard CT images, which would induce errors in normal qCT results.
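
    Per voxel, the 3-material step reduces to a small linear system: two attenuation measurements plus volume conservation determine three fractions. A sketch with invented attenuation coefficients standing in for calibrated values:

```python
import numpy as np

# Rows: low-kV attenuation, high-kV attenuation, volume conservation.
# Columns: soft tissue, fat, bone mineral. Coefficients are made-up stand-ins
# for calibrated values, in arbitrary HU-like units.
M = np.array([[60.0, -110.0, 700.0],
              [55.0,  -90.0, 420.0],
              [ 1.0,    1.0,   1.0]])

def material_fractions(mu_low, mu_high):
    """Solve M @ [f_soft, f_fat, f_bone] = [mu_low, mu_high, 1]."""
    return np.linalg.solve(M, np.array([mu_low, mu_high, 1.0]))

# A voxel that is 70% soft tissue, 20% fat, 10% bone:
f_true = np.array([0.7, 0.2, 0.1])
mu_low, mu_high = M[:2] @ f_true
print(material_fractions(mu_low, mu_high))   # recovers [0.7, 0.2, 0.1]
```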

  11. Numerical simulation of ammonium dinitramide (ADN)-based non-toxic aerospace propellant decomposition and combustion in a monopropellant thruster

    International Nuclear Information System (INIS)

    Zhang, Tao; Li, Guoxiu; Yu, Yusong; Sun, Zuoyu; Wang, Meng; Chen, Jun

    2014-01-01

    Highlights: • The decomposition and combustion process of an ADN-based thruster is studied. • The distribution of droplets is obtained for spray impingement on a wire mesh. • A two-temperature model is adopted to describe the heat transfer in the porous medium. • The influences of different mass fluxes and porosities are studied. - Abstract: Ammonium dinitramide (ADN) monopropellant is currently the most promising of all 'green propellants'. In this paper, the decomposition and combustion of liquid ADN-based ternary mixtures for propulsion are numerically studied. The R–R distribution model is used to set the initial boundary conditions for the droplet distribution resulting from spray impingement on a wire mesh, based on PDA experiments. To simulate the heat-transfer characteristics between the gas and solid phases, a two-temperature porous medium model of the catalytic bed is used. An 11-species, 7-reaction chemistry model is used to study the catalytic and combustion processes. The final distributions of temperature, pressure, and species concentrations in the ADN thruster are obtained. The simulation results agree well with previous experimental data, and the demonstration of the ADN thruster confirms that good steady-state operation is achieved. The effects of spray inlet mass flux and porosity on monopropellant thruster performance are analyzed. The numerical results further show that a larger inlet mass flux results in better thruster performance and that a catalytic bed porosity of 0.5 exhibits the best thruster performance. These findings can serve as a key reference for designing and testing non-toxic aerospace monopropellant thrusters.

  12. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    Science.gov (United States)

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

    This paper presents a project on the development of a cursor control that emulates the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with an adequate detection procedure based on the spectral-like technique called empirical mode decomposition (EMD). EMD is proposed as a simple and quick, yet effective, computational tool aimed at artifact reduction from head movements, as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
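
    The Kalman stage is standard; a constant-velocity sketch for one cursor coordinate (the noise levels and update rate are guesses, not the project's tuning):

```python
import numpy as np

dt = 0.02                                   # ~50 Hz gyroscope updates (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
H = np.array([[1.0, 0.0]])                  # we observe position only
Q = 1e-3 * np.eye(2)                        # process noise (assumed)
R = np.array([[0.5]])                       # measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
truth = np.cumsum(np.full(200, 0.3))        # cursor drifting right
for z in truth + rng.normal(0, 0.7, 200):   # jittery gyro-integrated readings
    x, P = F @ x, F @ P @ F.T + Q           # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)     # update with the new measurement
    P = (np.eye(2) - K @ H) @ P

print("final estimate:", round(x[0], 1), "truth:", round(truth[-1], 1))
```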

  13. Gyroscope-Driven Mouse Pointer with an EMOTIV® EEG Headset and Data Analysis Based on Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Carlos Reyes-Garcia

    2013-08-01

    This paper presents a project on the development of a cursor control that emulates the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with an adequate detection procedure based on the spectral-like technique called empirical mode decomposition (EMD). EMD is proposed as a simple and quick, yet effective, computational tool aimed at artifact reduction from head movements, as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.

  14. Quantifying immediate price impact of trades based on the k-shell decomposition of stock trading networks

    Science.gov (United States)

    Xie, Wen-Jie; Li, Ming-Xia; Xu, Hai-Chuan; Chen, Wei; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-10-01

    Traders in a stock market exchange stock shares and form a stock trading network. Trades at different positions of the stock trading network may contain different information. We construct stock trading networks based on limit order book data and classify traders into k classes using the k-shell decomposition method. We investigate the influence of trading behaviors on the price impact by comparing a closed national market (A-shares) with an international market (B-shares), individuals and institutions, partially filled and filled trades, buyer-initiated and seller-initiated trades, and trades at different positions of a trading network. Institutional traders professionally use trading strategies to reduce the price impact, and individuals at the same positions in the trading network have a higher price impact than institutions. We also find that trades in the core have higher price impacts than those in the peripheral shell.
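
    For the network step, the k-shell index of every trader follows directly from the trading graph; a sketch with the NetworkX package on a toy graph:

```python
import networkx as nx

# Toy trading network: nodes are traders, edges mean they traded with each other.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"),      # tightly connected core
              ("C", "D"), ("D", "E"), ("E", "F")])     # a chain to the periphery

shells = nx.core_number(G)          # k-shell index of every trader
core = [v for v, k in shells.items() if k == max(shells.values())]
periphery = [v for v, k in shells.items() if k == min(shells.values())]
print("core traders:", core, "| peripheral traders:", periphery)
# Price impact would then be compared between trades initiated by core
# and by peripheral members, as in the study.
```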

  15. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are unable to estimate the baseline in heavily overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The investigation was conducted on both simulated data and in vivo ¹H-MRS signals of the human brain. The results confirm the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.

  16. Resident Load Influence Analysis Method for Price Based on Non-intrusive Load Monitoring and Decomposition Data

    Science.gov (United States)

    Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang

    2018-01-01

    In non-intrusive load monitoring, load decomposition can reflect the running state of each load, which helps users reduce unnecessary energy costs. In the context of time-of-use (TOU) pricing, a demand-side management measure, a residential load influence analysis method for TOU pricing based on non-intrusive load monitoring and decomposition data is proposed in this paper. Relying on the classification of residential loads from their current signals, the appliance types and the self-elasticity and cross-elasticity over different time series can be obtained. Tests on actual household load data show that, under TOU pricing, the operation of some appliances is shifted to off-peak hours with a certain regularity: electricity use during peak-price periods is reduced, while use during low-price periods increases.

  17. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.

    2014-05-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model is employed, taking into account both the water evaporation phenomenon and tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method to forecast the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation, using realistic values for the various parameters. Results: The present method offers a very fast numerical solution of bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for both linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible on a personal computer, thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical imaging modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
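
    The exact-DMD algorithm underlying such forecasting fits in a few lines: split the snapshots, project onto the leading SVD modes, and propagate with the eigenvalues of the reduced operator. A generic sketch on toy spatio-temporal data (not a bioheat field):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 100)
space = np.linspace(-5, 5, 60)
snaps = (np.outer(1 / np.cosh(space), np.exp(0.1j * t) * np.exp(-0.05 * t)).real
         + np.outer(np.tanh(space), np.cos(2 * t) * np.exp(-0.02 * t))
         + 0.01 * rng.normal(size=(60, 100)))            # toy spatio-temporal field

X, Y = snaps[:, :-1], snaps[:, 1:]                       # snapshot pairs, Y ~ A X
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 6                                                    # truncation rank (assumed)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = Ur.conj().T @ Y @ Vr / sr                       # reduced linear operator
lam, W = np.linalg.eig(Atilde)
Phi = Y @ Vr / sr @ W                                    # DMD modes

b = np.linalg.lstsq(Phi, snaps[:, -2], rcond=None)[0]    # amplitudes at step N-1
pred = (Phi * lam) @ b                                   # advance one step
err = np.linalg.norm(pred.real - snaps[:, -1]) / np.linalg.norm(snaps[:, -1])
print("relative one-step forecast error:", round(float(err), 4))
```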

  18. Benthic algae stimulate leaf litter decomposition in detritus-based headwater streams: a case of aquatic priming effect?

    Science.gov (United States)

    Danger, Michael; Cornut, Julien; Chauvet, Eric; Chavez, Paola; Elger, Arnaud; Lecerf, Antoine

    2013-07-01

    In detritus-based ecosystems, autochthonous primary production contributes very little to the detritus pool. Yet primary producers may still influence the functioning of these ecosystems through complex interactions with decomposers and detritivores. Recent studies have suggested that, in aquatic systems, small amounts of labile carbon (C) (e.g., producer exudates) can increase the mineralization of more recalcitrant organic-matter pools (e.g., leaf litter). This process, called the priming effect, should be exacerbated under low-nutrient conditions and may alter the nature of interactions among microbial groups, from competition under low-nutrient conditions to indirect mutualism under high-nutrient conditions. Theoretical models further predict that primary producers may be competitively excluded when allochthonous C sources enter an ecosystem. In this study, the effects of a benthic diatom on aquatic hyphomycetes, bacteria, and leaf litter decomposition were investigated under two nutrient levels in a factorial microcosm experiment simulating detritus-based headwater stream ecosystems. Contrary to theoretical expectations, diatoms and decomposers were able to coexist under both nutrient conditions. Under low-nutrient conditions, diatoms increased the leaf litter decomposition rate by 20% compared with treatments from which they were absent; no effect was observed under high-nutrient conditions. The increase in leaf litter mineralization rate induced a positive feedback on diatom densities. We attribute these results to the priming effect of labile C exudates from primary producers. The presence of diatoms in combination with fungal decomposers also promoted decomposer diversity and, under low-nutrient conditions, led to a significant decrease in the leaf litter C:P ratio that could improve secondary production. Results from our microcosm experiment suggest new mechanisms by which primary producers may influence organic matter dynamics, even in ecosystems where autochthonous primary production is minimal.

  19. Structural investigation of oxovanadium(IV) Schiff base complexes: X-ray crystallography, electrochemistry and kinetic of thermal decomposition.

    Science.gov (United States)

    Asadi, Mozaffar; Asadi, Zahra; Savaripoor, Nooshin; Dusek, Michal; Eigner, Vaclav; Shorkaei, Mohammad Ranjkesh; Sedaghat, Moslem

    2015-02-05

    A series of new VO(IV) complexes of tetradentate N2O2 Schiff base ligands (L(1)-L(4)) were synthesized and characterized by FT-IR, UV-vis and elemental analysis. The structure of the complex VOL(1)·DMF was also investigated by X-ray crystallography, which revealed a vanadyl center with distorted octahedral coordination in which the 2-aza and 2-oxo coordinating sites of the ligand are perpendicular to the "-yl" oxygen. The electrochemical properties of the vanadyl complexes were investigated by cyclic voltammetry. A good correlation was observed between the oxidation potentials and the electron-withdrawing character of the substituents on the Schiff base ligands, showing the following trend: 5-MeO > 5-H > 5-Br > 5-Cl. Furthermore, the kinetic parameters of thermal decomposition were calculated using the Coats-Redfern equation. According to the Coats-Redfern plots, the kinetics of thermal decomposition of the studied complexes is first-order in all stages, the free energy of activation for each stage is larger than that of the previous one, and the complexes have good thermal stability. The preparation of VOL(1)·DMF also yielded another compound, a vanadium oxide [VO]X with a different crystal habit (platelets instead of prisms) and without the L(1) ligand, consisting of a V10O28 cage with diaminium and dimethylammonium counter-ions. Because its crystal structure was also new, we report it along with the targeted complex. Copyright © 2014 Elsevier B.V. All rights reserved.
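
    For the first-order case, the Coats-Redfern analysis is a straight-line fit: ln[-ln(1-α)/T²] against 1/T has slope -E/R. A sketch on synthetic TGA points (all numbers invented):

```python
import numpy as np

R = 8.314       # J/(mol K)
E_true = 120e3  # J/mol, activation energy used to synthesize the data

# Made-up TGA points for one decomposition stage (temperature in K).
T = np.linspace(500.0, 560.0, 13)
g = T**2 * np.exp(13.5 - E_true / (R * T))   # -ln(1 - alpha), synthetic
alpha = 1.0 - np.exp(-g)                     # conversion, stays within (0, 1)

# First-order Coats-Redfern: ln(-ln(1 - alpha) / T^2) = const - E / (R T).
y = np.log(-np.log(1.0 - alpha) / T**2)
slope, _ = np.polyfit(1.0 / T, y, 1)
print("fitted activation energy (kJ/mol):", round(-slope * R / 1e3, 1))  # ~120
```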

  20. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for unsupervised learning. It covers topics from classical factor-analysis-based decomposition and its variants, such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding. A relation between decomposition methods and clustering problems is derived, both in terms of classical point clustering and in terms of community detection in complex networks. A guiding principle throughout the thesis is the principle of parsimony: the goal of unsupervised learning is posed as striving for simplicity in the decompositions, and it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given, ranging from multimedia analysis of image and sound data to the analysis of biomedical data such as electroencephalography.
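
    Among the decompositions covered, NMF is the quickest to sketch: the classical Lee-Seung multiplicative updates for the Frobenius objective take a few lines (random toy data, no sparsity or structure constraints):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((40, 5)) @ rng.random((5, 30))   # nonnegative, approximately rank 5
k, eps = 5, 1e-9

W, H = rng.random((40, k)), rng.random((k, 30))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # Lee-Seung multiplicative updates,
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # monotone in the Frobenius error
print("relative error:", round(np.linalg.norm(V - W @ H) / np.linalg.norm(V), 4))
```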

  1. Amorphization of Fe-based alloy via wet mechanical alloying assisted by PCA decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Neamţu, B.V., E-mail: Bogdan.Neamtu@stm.utcluj.ro [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Chicinaş, H.F.; Marinca, T.F. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Isnard, O. [Université Grenoble Alpes, Institut NEEL, F-38042, Grenoble (France); CNRS, Institut NEEL, 25 rue des martyrs, BP166, F-38042, Grenoble (France); Pană, O. [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath Street, 400293, Cluj-Napoca (Romania); Chicinaş, I. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania)

    2016-11-01

    …used as microalloying elements which could provide the required extra amount of metalloids. - Highlights: • Amorphization of Fe₇₅Si₂₀B₅ alloy via wet mechanical alloying is assisted by PCA decomposition. • Powder amorphization was not achieved even after 140 hours of dry MA. • Wet MA using different PCAs leads to powder amorphization at different MA durations. • Regardless of PCA type, contamination with 2.3 wt% C is needed for amorphization.

  2. Calculation and decomposition of indirect carbon emissions from residential consumption in China based on the input–output model

    International Nuclear Information System (INIS)

    Zhu Qin; Peng Xizhe; Wu Kaiya

    2012-01-01

    Based on the input–output model and the comparable-price input–output tables, this paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005 and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise in the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline in the carbon emission intensity of industrial sectors had a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors had an overall positive effect, except in the initial years. Population growth raised the indirect emissions to a certain extent; however, population size is no longer the main driver of emission growth. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China of controlling and slowing the growth of emissions while optimizing the residential consumption structure. The results imply that China should pursue economic restructuring and efficiency improvements, rather than a lower consumption scale, to achieve its targets of energy conservation and emission reduction. - Highlights: ► We build an input–output model of indirect carbon emissions from residential consumption. ► We calculate the indirect emissions using the comparable-price input–output tables. ► We examine the impacts on the indirect emissions using the structural decomposition method. ► The change in the consumption structure showed a weak positive effect on the emissions. ► China's population size is no longer the main reason for the growth of the emissions.
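
    The accounting at the heart of such a calculation is the Leontief inverse: indirect emissions are the sector emission intensities applied to the total output required to deliver residential final demand. A three-sector toy sketch (all numbers invented):

```python
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # technical coefficients (inputs per
              [0.15, 0.10, 0.10],    # unit of output) for 3 toy sectors
              [0.05, 0.10, 0.08]])
f = np.array([2.0, 0.8, 0.3])        # direct CO2 intensity per unit output
y_res = np.array([10.0, 25.0, 40.0]) # residential final demand by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
indirect = f @ L @ y_res             # emissions embodied in residential demand
print("indirect residential emissions:", round(indirect, 1))
```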

  3. Identification of Diethyl 2,5-Dioxahexane Dicarboxylate and Polyethylene Carbonate as Decomposition Products of Ethylene Carbonate Based Electrolytes by Fourier Transform Infrared Spectroscopy

    KAUST Repository

    Shi, Feifei; Zhao, Hui; Liu, Gao; Ross, Philip N.; Somorjai, Gabor A.; Komvopoulos, Kyriakos

    2014-01-01

    The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes. © 2014 American Chemical Society.

  5. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed…

  6. Ultra-High-Speed Travelling Wave Protection of Transmission Line Using Polarity Comparison Principle Based on Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    The traditional polarity-comparison-based travelling wave protection, which uses only the initial wave information, is affected by the initial fault angle, the bus structure, and external faults, and it ignores the relationship between the magnitude and polarity of the travelling wave. The resulting failures to trip and maloperations have limited the further application of this protection principle. This paper therefore presents an ultra-high-speed travelling wave protection using an integral-based polarity comparison principle. After empirical mode decomposition of the original travelling wave, the first-order intrinsic mode function is used as the protection object. Based on the relationship between the magnitude and polarity of the travelling wave, the paper demonstrates the feasibility of using the travelling wave magnitude, which contains the polarity information, as the direction criterion, and it integrates the direction criterion over a period after the fault to avoid wave-head detection failure. Through PSCAD simulation of a typical 500 kV transmission system, the reliability and sensitivity of the travelling wave protection were verified under the influence of different factors.
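
    As a rough illustration of the integral-based criterion, the sketch below decomposes a recorded travelling wave with the PyEMD package (assumed installed), keeps the first IMF, and integrates it over a short post-fault window; the window length and the internal-fault convention are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from PyEMD import EMD  # assumes the PyEMD package is available

    def direction_criterion(wave, fs, window_ms=1.0):
        """Signed integral of the first IMF over a short post-fault window;
        the sign carries the polarity information."""
        imfs = EMD().emd(wave)           # empirical mode decomposition
        imf1 = imfs[0]                   # first-order IMF as protection object
        n = int(window_ms * 1e-3 * fs)   # samples in the integration window
        return imf1[:n].sum()            # discrete integral (up to a factor dt)

    # Illustrative usage: compare the criteria computed at the two line ends;
    # whether matching or opposite signs indicate an internal fault depends on
    # the measurement polarity convention.
    # internal = direction_criterion(wave_m, fs) * direction_criterion(wave_n, fs) > 0
    ```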

  7. Application of spectral decomposition of LIDAR-based headwind profiles in windshear detection at the Hong Kong International Airport

    Directory of Open Access Journals (Sweden)

    Tsz-Chun Wu

    2018-01-01

    In aviation, a rapidly fluctuating headwind/tailwind may produce strong horizontal windshear, posing potential safety hazards to aircraft. So far, windshear alerts have been issued by considering directly the headwind differences measured along the aircraft flight path (e.g., based on Doppler velocities from remote sensing). In this paper, we propose and demonstrate a new methodology for windshear alerting based on spectral decomposition. Through Fourier transformation of the LIDAR-based headwind profiles recorded in 2012 and 2014 at arrival corridors 07LA and 25RA of the Hong Kong International Airport (HKIA), we study the occurrence of windshear in the spectral domain. Using a threshold-based approach, we investigate the performance of single- and multiple-channel detection algorithms and validate the results against pilot reports. With the receiver operating characteristic (ROC) diagram, we demonstrate the feasibility of this approach by showing a comparable performance of the triple-channel detection algorithm and a consistent hit-rate gain (for 07LA in particular) of 4.5 to 8 % for quadruple-channel detection against GLYGA, the currently operational algorithm at HKIA. We also observe that some length scales are particularly sensitive to windshear events, which may be closely related to the local geography of HKIA. This study opens a new door to windshear detection in the spectral domain for the aviation community.
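
    In outline, each detection "channel" is a band of length scales of the Fourier-transformed headwind profile, and an alert is raised when the in-band spectral amplitude exceeds a threshold. A sketch; the band edges and thresholds below are placeholders, not the calibrated values of the paper:

    ```python
    import numpy as np

    def spectral_channels(headwind, dx, bands):
        """Sum the Fourier amplitude of a headwind profile (sampled every dx
        metres) within each band of length scales (in metres)."""
        amp = np.abs(np.fft.rfft(headwind - headwind.mean()))
        freq = np.fft.rfftfreq(len(headwind), d=dx)      # cycles per metre
        return np.array([amp[(freq > 1.0 / hi) & (freq <= 1.0 / lo)].sum()
                         for lo, hi in bands])

    bands = [(400, 800), (800, 1600), (1600, 3200), (3200, 6400)]  # assumed
    thresholds = np.array([5.0, 6.0, 7.0, 8.0])                    # assumed
    # Multi-channel alerting: alarm if any channel exceeds its threshold.
    # alert = np.any(spectral_channels(profile, 100.0, bands) > thresholds)
    ```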

  8. Experimental investigation of the catalytic decomposition and combustion characteristics of a non-toxic ammonium dinitramide (ADN)-based monopropellant thruster

    Science.gov (United States)

    Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong

    2016-12-01

    Low-toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise for applications such as satellite attitude control. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady-state operation were observed. The results indicate that the propellant and catalyst employed in this work, as well as the design and manufacture of the thruster, met the performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and the performance of the thruster improved at higher feed pressures and elevated preheating temperatures. A preheating temperature as low as 140 °C was found to activate the catalytic decomposition and combustion processes effectively. The data obtained in this study should be beneficial to future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.

  9. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human-machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since a poor implementation can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR; for example, they contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements grows because the individual elements become less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group elements that are similar in attributes such as shape, color, or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting a computer-based display.

  10. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    International Nuclear Information System (INIS)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    The importance of the design of human-machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since a poor implementation can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR; for example, they contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements grows because the individual elements become less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group elements that are similar in attributes such as shape, color, or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting a computer-based display.

  11. Measurement and decomposition of energy efficiency of Northeast China-based on super efficiency DEA model and Malmquist index.

    Science.gov (United States)

    Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo

    2017-08-01

    Environmental problems have become a pressing international issue, and experts and scholars are paying increasing attention to energy efficiency. Unlike most studies, which analyze changes in total-factor energy efficiency (TFEE) across provinces or regions, this study calculates TFEE as the ratio of target energy input to actual energy input using data for prefecture-level cities, which is more accurate. Many studies treat total factor productivity (TFP) as TFEE in provincial-level analyses; this paper instead computes TFEE more reliably by means of super-efficiency DEA, observes the changes in TFEE, analyzes its relation to TFP, and shows that TFP is not equal to TFEE. Additionally, the internal drivers of TFEE are obtained via Malmquist index decomposition, and its external drivers are then analyzed using Tobit models. The results demonstrate that Heilongjiang has the highest TFEE, followed by Jilin, while Liaoning has the lowest. Finally, some policy suggestions are proposed based on the identified drivers of energy efficiency and the study results.
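
    The building block of such studies is a DEA efficiency score per decision-making unit (DMU). A minimal input-oriented CCR model using scipy's linprog is sketched below on toy data (not the paper's); the super-efficiency variant used in the paper additionally removes the evaluated DMU from the reference set so that efficient units can score above one.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0.
        X: inputs (m x n DMUs), Y: outputs (s x n DMUs)."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(n + 1)
        c[0] = 1.0                                   # minimise theta
        A1 = np.hstack([-X[:, [j0]], X])             # X@lam <= theta * x0
        A2 = np.hstack([np.zeros((s, 1)), -Y])       # Y@lam >= y0
        A_ub = np.vstack([A1, A2])
        b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
        bounds = [(None, None)] + [(0, None)] * n    # lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0]                              # efficiency score in (0, 1]

    # Toy data: 2 inputs, 1 output, 4 city-level DMUs.
    X = np.array([[4.0, 7.0, 8.0, 4.0],
                  [3.0, 3.0, 1.0, 2.0]])
    Y = np.ones((1, 4))
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
    ```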

  12. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    In order to guarantee the stable operation of shearers and promote the construction of automatic coal-mining working faces, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer, and the cutting sound is collected as the recognition criterion to overcome the disadvantages of traditional detectors: large size, contact measurement, and low identification rates. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on the stored data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
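
    The feature-and-classifier stage can be sketched compactly; below, plain EEMD (via the PyEMD package, assumed installed) stands in for the paper's IEEMD, and the PNN is a bare Parzen-window classifier. The IMF count and smoothing width are placeholders.

    ```python
    import numpy as np
    from PyEMD import EEMD  # assumes the PyEMD package is available

    def imf_features(sound, keep=4):
        """Energy and standard deviation of the first `keep` IMFs of a
        cutting-sound frame."""
        imfs = EEMD().eemd(sound)[:keep]
        return np.concatenate([[np.sum(i ** 2), np.std(i)] for i in imfs])

    class PNN:
        """Minimal probabilistic neural network (Parzen-window classifier)."""
        def __init__(self, sigma=0.1):
            self.sigma = sigma
        def fit(self, X, y):
            self.X, self.y, self.classes = X, y, np.unique(y)
            return self
        def predict(self, X):
            preds = []
            for x in X:
                k = np.exp(-np.sum((self.X - x) ** 2, axis=1)
                           / (2 * self.sigma ** 2))
                preds.append(self.classes[np.argmax(
                    [k[self.y == c].mean() for c in self.classes])])
            return np.array(preds)
    ```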

  13. Dynamic Power Dispatch Considering Electric Vehicles and Wind Power Using Decomposition Based Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Boyang Qu

    2017-12-01

    The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed that considers EVs and the uncertainties of wind power. The total fuel cost and pollutant emission are the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are the decision variables. The stochastic wind power is modeled by a Weibull probability distribution function. Subject to meeting the system energy balance and users' travel demand, the charging and discharging behavior of the EVs is managed dynamically. Moreover, we propose a two-step dynamic constraint-handling strategy for the decision variables based on a penalty function and, on this basis, improve the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D). The proposed model and approach are verified on a 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.
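
    At the heart of MOEA/D, each subproblem scalarises the two objectives (fuel cost, emission) against one weight vector, commonly with the Tchebycheff approach; a candidate replaces a neighbour's solution when it lowers the neighbour's scalarised value. A minimal sketch of that decomposition step (not the authors' modified algorithm):

    ```python
    import numpy as np

    def tchebycheff(f, weight, z_star):
        """Tchebycheff scalarisation of an objective vector f for one
        weight vector, relative to the ideal point z*."""
        return np.max(weight * np.abs(f - z_star))

    n_sub = 11                                # number of scalar subproblems
    w = np.linspace(0.0, 1.0, n_sub)
    weights = np.column_stack([w, 1.0 - w])   # uniformly spread weight vectors
    z_star = np.array([0.0, 0.0])             # ideal point, updated during the run

    # Replacement rule inside the evolutionary loop (illustrative):
    # if tchebycheff(f_new, weights[k], z_star) < tchebycheff(f_k, weights[k], z_star):
    #     population[k] = candidate
    ```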

  14. Web-Based and Mobile Stress Management Intervention for Employees

    DEFF Research Database (Denmark)

    Heber, E.; Lehr, D.; Ebert, D. D.

    2016-01-01

    Background: Work-related stress is highly prevalent among employees and is associated with adverse mental health consequences. Web-based interventions offer the opportunity to deliver effective solutions on a large scale; however, the evidence is limited and the results conflicting. Objective: This randomized controlled trial evaluated the efficacy of guided Web- and mobile-based stress management training for employees. Methods: A total of 264 employees with elevated symptoms of stress (Perceived Stress Scale-10, PSS-10 >= 22) were recruited from the general working population and randomly assigned to an Internet-based stress management intervention (iSMI) or a waitlist control group. The intervention (GET.ON Stress) was based on Lazarus's transactional model of stress, consisted of seven sessions, and applied both well-established problem-solving and more recently developed emotion regulation strategies…

  15. CONFAC Decomposition Approach to Blind Identification of Underdetermined Mixtures Based on Generating Function Derivatives

    NARCIS (Netherlands)

    de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre

    This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the…

  16. Kinetics of thermal decomposition and kinetics of substitution reaction of nano uranyl Schiff base complexes

    Czech Academy of Sciences Publication Activity Database

    Asadi, Z.; Zeinali, A.; Dušek, Michal; Eigner, Václav

    2014-01-01

    Vol. 46, No. 12 (2014), pp. 718-729. ISSN 0538-8066. R&D Projects: GA ČR(CZ) GAP204/11/0809. Institutional support: RVO:68378271. Keywords: uranyl * Schiff base * kinetics * anticancer activity. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.517, year: 2014

  17. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range…

  18. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    Science.gov (United States)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  19. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  20. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    A blind adaptive color image watermarking scheme based on principal component analysis (PCA), singular value decomposition (SVD), and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, while the human visual system model and a fuzzy inference system improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that more watermark information is embedded in areas where noise is less perceptible. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction; for capacity, both the singular values and the singular vectors are involved in the watermark embedding process. As a result, the four contradictory requirements of imperceptibility, robustness, security, and capacity are met, as the results suggest. Both subjective and objective methods are used to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of imperceptibility, the peak signal-to-noise ratio, structural similarity index, visual information fidelity, and normalized color difference are used; for objective analysis of robustness, normalized correlation, bit error rate, normalized Hamming distance, and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance, as the results suggest.
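
    The SVD half of such schemes is commonly a Liu-Tan-style embedding in the singular values of the (here, PCA-decorrelated) host channel. A grayscale sketch with a fixed scaling factor in place of the paper's fuzzy-adapted one; the watermark block is assumed square and of the same size as the host block:

    ```python
    import numpy as np

    def svd_embed(host, watermark, alpha=0.05):
        """Embed a watermark into the singular values of a square host block;
        returns the marked block and the side information needed later."""
        U, S, Vt = np.linalg.svd(host, full_matrices=False)
        Uw, Sw, Vwt = np.linalg.svd(np.diag(S) + alpha * watermark,
                                    full_matrices=False)
        marked = U @ np.diag(Sw) @ Vt
        return marked, (Uw, Vwt, S)

    def svd_extract(marked, key, alpha=0.05):
        Uw, Vwt, S = key
        _, Sw, _ = np.linalg.svd(marked, full_matrices=False)
        D = Uw @ np.diag(Sw) @ Vwt       # rebuild the perturbed diagonal matrix
        return (D - np.diag(S)) / alpha  # recovered watermark
    ```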

  1. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium- and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique, providing deeper insight into the data characteristics. All IMF components and the residue are then predicted through appropriate ANN models, and the forecasts of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model and four performance evaluation measures (RMSE, MAPE, R, and NSEC). The results indicate that EEMD can effectively enhance forecasting accuracy and that the proposed EEMD-ANN model attains a significant improvement over the ANN approach in medium- and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
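
    The decompose-predict-aggregate loop is simple to sketch; here PyEMD's EEMD and a small scikit-learn MLP stand in for the paper's decomposition settings and ANN architectures, and the lag order is a placeholder:

    ```python
    import numpy as np
    from PyEMD import EEMD                       # assumes PyEMD is installed
    from sklearn.neural_network import MLPRegressor

    def eemd_ann_forecast(runoff, lags=3):
        """One-step-ahead forecast: decompose the series with EEMD, fit one
        small ANN per component on lagged values, and sum the forecasts."""
        comps = EEMD().eemd(np.asarray(runoff, dtype=float))
        forecast = 0.0
        for c in comps:
            X = np.array([c[i:i + lags] for i in range(len(c) - lags)])
            y = c[lags:]
            ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                               random_state=0).fit(X, y)
            forecast += ann.predict(c[-lags:].reshape(1, -1))[0]
        return forecast
    ```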

  2. Seismic spectral decomposition and analysis based on Wigner–Ville distribution for sandstone reservoir characterization in West Sichuan depression

    International Nuclear Information System (INIS)

    Wu, Xiaoyang; Liu, Tianyou

    2010-01-01

    Reflections from a hydrocarbon-saturated zone are generally expected to tend toward low frequencies. Previous work has shown the application of seismic spectral decomposition to low-frequency shadow detection. In this paper, we further analyse the characteristics of spectral amplitude in fractured sandstone reservoirs with different fluid saturations using a Wigner–Ville distribution (WVD)-based method. We describe the geometric structure of the cross-terms that arise from the bilinear nature of the WVD and eliminate them using the smoothed pseudo-WVD (SPWVD) with time- and frequency-independent Gaussian kernels as smoothing windows. The SPWVD is finally applied to seismic data from the West Sichuan depression. We focus our study on the comparison of SPWVD spectral amplitudes resulting from different fluid contents. It shows that prolific gas reservoirs feature a higher peak spectral amplitude at a higher peak frequency, which attenuates faster than in low-quality gas reservoirs and dry or wet reservoirs. This can be regarded as a spectral attenuation signature for future exploration in the study area
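
    For reference, the plain discrete WVD underlying the paper's SPWVD can be written in a few lines; the SPWVD additionally applies independent Gaussian smoothing windows in time and frequency to suppress the cross-terms. A sketch (no smoothing):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def wigner_ville(trace):
        """Discrete Wigner-Ville distribution of a real seismic trace;
        rows index frequency, columns index time."""
        z = hilbert(trace)               # analytic signal
        N = len(z)
        W = np.empty((N, N))
        for n in range(N):
            taumax = min(n, N - 1 - n)
            tau = np.arange(-taumax, taumax + 1)
            r = np.zeros(N, dtype=complex)
            r[tau % N] = z[n + tau] * np.conj(z[n - tau])  # lag kernel
            W[:, n] = np.fft.fft(r).real                   # FFT over the lag
        return W
    ```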

  3. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.

  4. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.

  5. Assessment of autonomic nervous system by using empirical mode decomposition-based reflection wave analysis during non-stationary conditions

    International Nuclear Information System (INIS)

    Chang, C C; Hsiao, T C; Kao, S C; Hsu, H Y

    2014-01-01

    Arterial blood pressure (ABP) is an important indicator of cardiovascular circulation and reflects various intrinsic regulatory activities. It has been found that the intrinsic characteristics of blood vessels can be assessed quantitatively by ABP analysis (called reflection wave analysis (RWA)), but conventional RWA is insufficient for assessment during non-stationary conditions, such as the Valsalva maneuver. Recently, a novel adaptive method called empirical mode decomposition (EMD) was proposed for non-stationary data analysis. This study proposes an RWA algorithm based on EMD (EMD-RWA). A total of 51 subjects participated in this study, including 39 healthy subjects and 12 patients with autonomic nervous system (ANS) dysfunction. The results showed that EMD-RWA provided a reliable estimation of reflection time at baseline and during head-up tilt (HUT). Moreover, the estimated reflection time is able to assess ANS function non-invasively, both in normal, healthy subjects and in patients with ANS dysfunction. EMD-RWA provides a new approach to reflection time estimation in non-stationary conditions, and also helps with non-invasive ANS assessment. (paper)

  6. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantage of the sparse NTF-based feature extraction approach lies in its capability to yield components that are common across the space, time, and frequency domains yet discriminative across different conditions, without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have some discriminability, the gamma-band feature carries the most discriminative information for bistable perception, and that imposing sparseness constraints on the nonnegative tensor factorization improves the extraction of this feature. PMID:18528515
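
    A rough sketch of the pipeline on synthetic data, using tensorly's plain nonnegative CP (PARAFAC) decomposition as a stand-in for the paper's sparse NTF (the API shown matches recent tensorly releases) and a linear SVM on the trial-mode loadings:

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((60, 8, 12, 20))     # trials x channels x freqs x times
    y = rng.integers(0, 2, 60)          # reported percept per trial (synthetic)

    weights, factors = non_negative_parafac(tl.tensor(X), rank=4, init="random")
    features = tl.to_numpy(factors[0])  # trial-mode loadings as features

    clf = SVC(kernel="linear").fit(features, y)
    print(clf.score(features, y))       # training accuracy on the toy data
    ```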

  7. Fast matrix factorization algorithm for DOSY based on the eigenvalue decomposition and the difference approximation focusing on the size of observed matrix

    International Nuclear Information System (INIS)

    Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki

    2017-01-01

    This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. A well-known approach to this problem is the direct exponential curve resolution algorithm (DECRA), which is based on singular value decomposition; its advantage is that no initial value is required. However, DECRA requires a long computation time that grows with the size of the observed matrix because of the singular value decomposition, which is a serious problem in practical use. This paper therefore proposes a new analysis algorithm for DOSY that achieves a short computation time. To solve the matrix factorization for DOSY without singular value decomposition, the proposed algorithm exploits the size of the observed matrix: because of the limited measuring time, the observed matrix in DOSY is a rectangular matrix with more columns than rows, so the algorithm transforms it into a small observed matrix. The eigenvalue decomposition and the difference approximation are then applied to the small observed matrix, and the matrix factorization problem for DOSY is solved. A simulation and a data analysis show that the proposed algorithm achieves a shorter computation time than DECRA while giving analysis results similar to those of DECRA. (author)

  8. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
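
    For orientation, the deterministic Cahn-Hilliard dynamics underlying the CHC model can be reproduced with a few lines of semi-implicit spectral time stepping; this 2D single-process sketch (illustrative parameters, no noise term) is unrelated to the authors' fully implicit parallel Newton-Krylov-Schwarz solver:

    ```python
    import numpy as np

    N, dt, eps2, steps = 128, 0.1, 1e-3, 2000    # illustrative parameters
    k = 2 * np.pi * np.fft.fftfreq(N)
    k2 = k[:, None] ** 2 + k[None, :] ** 2       # |k|^2 on the 2D grid

    rng = np.random.default_rng(0)
    c = 0.01 * (rng.random((N, N)) - 0.5)        # near-critical quench
    for _ in range(steps):
        mu_hat = np.fft.fft2(c ** 3 - c)         # nonlinear chemical potential
        c_hat = (np.fft.fft2(c) - dt * k2 * mu_hat) / (1 + dt * eps2 * k2 ** 2)
        c = np.fft.ifft2(c_hat).real             # coarsening pattern emerges
    ```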

  9. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    International Nuclear Information System (INIS)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors

  10. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    Science.gov (United States)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  11. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity, and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique with an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is used to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose them into a number of variational modes (VMs). The ELM model optimized by the DE algorithm is then applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecasts of all its VMs, and the forecast AQI series is obtained by aggregating the forecasts of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected in Beijing and Shanghai, China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
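
    The ELM at the core of the hybrid is itself tiny: a random hidden layer whose output weights are obtained analytically by least squares. A minimal sketch (the paper additionally tunes this model with differential evolution):

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine regressor."""
        def __init__(self, n_hidden=30, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)
        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)        # random hidden layer
            self.beta = np.linalg.pinv(H) @ y       # analytic output weights
            return self
        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta
    ```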

  12. Iron-based Nanocomposite Synthesised by Microwave Plasma Decomposition of Iron Pentacarbonyl

    Czech Academy of Sciences Publication Activity Database

    David, Bohumil; Pizúrová, Naděžda; Schneeweiss, Oldřich; Hoder, T.; Kudrle, V.; Janča, J.

    2007-01-01

    Vol. 263 (2007), pp. 147-152. ISSN 1012-0386. [Diffusion and Thermodynamics of Materials /IX/. Brno, 13.09.2006-15.09.2006]. R&D Projects: GA ČR GA202/04/0221. Institutional research plan: CEZ:AV0Z20410507. Keywords: iron-based nanopowder * synthesis * microwave plasma method. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 0.483, year: 2005. http://www.scientific.net/3-908451-35-3/3.html

  13. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    Science.gov (United States)

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step toward reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on the input data, which avoids manual parameter tuning. The chip has been tested in vivo for functional verification and on the bench for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.

  14. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques: stochastic linear methods such as AR and ARIMA, as well as nonlinear tools based on statistical learning theory, have been used extensively. The difficulty common to all methods is determining the information and predictors that are sufficient and necessary for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series that combines the wavelet transform with a nonlinear time series model, drawing on the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated so as to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared with traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that a wavelet-based time series model can simulate and forecast hydrologic variables reasonably well, ultimately serving the purpose of integrated water resources planning and management.
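
    A compact version of the hybrid idea, with PyWavelets supplying the decomposition and a small scikit-learn MLP as the nonlinear model; the wavelet, level, and lag order below are placeholders rather than the paper's choices:

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    def wavelet_forecast(series, wavelet="db4", level=2, lags=4):
        """Decompose a series into mono-component sub-series, forecast each
        with a nonlinear model, and sum the one-step-ahead forecasts."""
        series = np.asarray(series, dtype=float)
        coeffs = pywt.wavedec(series, wavelet, level=level)
        forecast = 0.0
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            p = pywt.waverec(keep, wavelet)[:len(series)]  # one sub-series
            X = np.array([p[t:t + lags] for t in range(len(p) - lags)])
            model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0).fit(X, p[lags:])
            forecast += model.predict(p[-lags:].reshape(1, -1))[0]
        return forecast
    ```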

  15. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks over the progressively deteriorating course of AD would represent a significant advance in discovering its pathogenesis. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information about TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and their regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. On that basis, the dynamic gene regulatory networks of the successive stages of AD were reconstructed. To select significant genes that are differentially expressed in the different stages of AD, independent component analysis (ICA) was used; it outperforms traditional clustering methods in that one gene can be assigned to several meaningful biological processes. The molecular biological analysis showed that changes in TF activities and in the interactions of signaling proteins in mitosis, the cell cycle, immune response, and inflammation play an important role in the deterioration of AD.

  16. Shared Reed-Muller Decision Diagram Based Thermal-Aware AND-XOR Decomposition of Logic Circuits

    Directory of Open Access Journals (Sweden)

    Apangshu Das

    2016-01-01

    The increased number of complex functional units exerts a high power-density within a very-large-scale integration (VLSI) chip, which results in overheating. Power-density translates directly into temperature, which reduces the yield of the circuit; an adverse effect of power-density reduction, however, is an increase in area, so there is a trade-off between area and power-density. In this paper, we introduce a Shared Reed-Muller Decision Diagram (SRMDD) based on fixed-polarity AND-XOR decomposition to represent multi-output Boolean functions. By recursively applying transformations and reductions, we obtain a compact SRMDD. A heuristic based on a Genetic Algorithm (GA) increases the sharing of product terms through a judicious choice of the polarity of the input variables in the SRMDD expansion, and a suitable area/power-density trade-off is enumerated. This is the first effort to incorporate power-density as a measure of temperature estimation in the AND-XOR expansion process. The results of logic synthesis are combined with physical design in the CADENCE digital synthesis tool to obtain the floor-plan silicon area and power profile. The proposed thermal-aware synthesis has been validated by obtaining the absolute temperature of the synthesized circuits using the HotSpot tool. We have experimented with 29 benchmark circuits. The minimized AND-XOR circuit realization shows average savings of up to 15.23% in silicon area and up to 17.02% in temperature over sum-of-products (SOP) based logic minimization.
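
    The AND-XOR expansion for one fixed polarity is a simple XOR butterfly over the truth vector; the GA of the paper searches over the 2^n polarities for the best sharing. A sketch of the positive-polarity case:

    ```python
    import numpy as np

    def reed_muller_coeffs(truth_table):
        """Positive-polarity Reed-Muller spectrum of a Boolean function
        given as a truth vector of length 2**n (in-place XOR butterfly)."""
        f = np.array(truth_table, dtype=np.uint8) & 1
        n = f.size.bit_length() - 1
        step = 1
        for _ in range(n):
            for i in range(0, f.size, 2 * step):
                f[i + step:i + 2 * step] ^= f[i:i + step]
            step *= 2
        return f  # coefficient j = 1 selects the product of variables in bin(j)

    # 2-input XOR: spectrum [0, 1, 1, 0], i.e. the expansion x1 XOR x2.
    print(reed_muller_coeffs([0, 1, 1, 0]))
    ```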

  17. Gold Redox Catalysis through Base-Initiated Diazonium Decomposition toward Alkene, Alkyne, and Allene Activation.

    Science.gov (United States)

    Dong, Boliang; Peng, Haihui; Motika, Stephen E; Shi, Xiaodong

    2017-08-16

    The discovery of photoassisted diazonium activation toward gold(I) oxidation greatly extended the scope of gold redox catalysis by avoiding the use of a strong oxidant. Practical issues that limit the application of this chemistry are its relatively low efficiency (long reaction times and low conversion) and the strict reaction condition control that is necessary (degassing and an inert reaction environment). Herein, an alternative photo-free protocol has been developed through Lewis base induced diazonium activation. With this method, an unreactive Au(I) catalyst was used in combination with Na2CO3 and diazonium salts to produce a Au(III) intermediate. The efficient activation of various substrates, including alkynes, alkenes, and allenes, was achieved, followed by rapid Au(III) reductive elimination, which yielded the C-C coupling products in good to excellent yields. Relative to the previously reported photoactivation method, our approach offers greater efficiency and versatility through faster reaction rates and a broader reaction scope. Challenging substrates such as electron-rich/neutral allenes, which could not be activated under the photoinitiation conditions (<5 % yield), could be activated to yield the desired coupling products in good to excellent yield. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A new physics-based method for detecting weak nuclear signals via spectral decomposition

    International Nuclear Information System (INIS)

    Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei

    2012-01-01

    We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods, which rely primarily on peak-finding algorithms. The new approach considers each signature in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclide. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e., most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations covering a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with an SNR as low as -15 dB.
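
    The subspectra idea reduces to regressing the observed counts on a library matrix whose columns are the single-line subspectra and reading presence off the nonzero coefficients. As a crude stand-in for the paper's sparsity-penalised Poisson fit, nonnegative least squares already drives absent nuclides toward zero; everything below is synthetic:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    channels, n_lines = 256, 6
    L = rng.random((channels, n_lines))     # synthetic library of subspectra
    truth = np.array([0.0, 3.0, 0.0, 0.0, 1.5, 0.0])
    counts = rng.poisson(L @ truth + 0.5)   # weak signal over background

    coef, _ = nnls(L, counts.astype(float))
    present = np.where(coef > 0.1 * coef.max())[0]   # crude presence call
    print(present)                          # ideally recovers lines 1 and 4
    ```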

  19. Mindfulness-based stress reduction as a stress management intervention for healthy individuals: a systematic review.

    Science.gov (United States)

    Sharma, Manoj; Rush, Sarah E

    2014-10-01

    Stress is a global public health problem with several negative health consequences, including anxiety, depression, cardiovascular disease, and suicide. Mindfulness-based stress reduction offers an effective way of reducing stress by combining mindfulness meditation and yoga in an 8-week training program. The purpose of this study was to review studies published from January 2009 to January 2014 and examine whether mindfulness-based stress reduction is a potentially viable method for managing stress. A systematic search of the Medline, CINAHL, and Alt HealthWatch databases was conducted for all types of quantitative articles involving mindfulness-based stress reduction. A total of 17 articles met the inclusion criteria. Of the 17 studies, 16 demonstrated positive changes in psychological or physiological outcomes related to anxiety and/or stress. Despite limitations (not all studies used a randomized controlled design, some had small sample sizes, and the outcomes differed), mindfulness-based stress reduction appears to be a promising modality for stress management. © The Author(s) 2014.

  20. Evaluation of stress gradient by x-ray stress measurement based on change in angle phi

    International Nuclear Information System (INIS)

    Sasaki, Toshihiko; Kuramoto, Makoto; Yoshioka, Yasuo.

    1985-01-01

    A new principle of X-ray stress evaluation for a sample with a steep stress gradient has been proposed. The feature of this method is that the stress is determined by the so-called phi-method, based on the change of the phi-angle, so that the penetration depth of the X-rays is unaffected. The procedure is as follows: first, the average stress within the penetration depth of the X-rays is determined by changing only the phi-angle under a fixed psi-angle; then the distribution of the average stress versus the penetration depth of the X-rays is obtained by repeating the same procedure at different psi-angles. The following conclusions were drawn from residual stress measurements on a carbon steel of type S55C polished with emery paper. The method is practical enough to use for a plane stress problem, and the assumption of a linear stress gradient adopted in the authors' previous investigations is valid. In the case of a triaxial stress analysis, the method is effective for the solution of the three shearing stresses; however, the three normal stresses cannot be solved completely except at particular psi-angles. (author)

  1. Enstrophy-based proper orthogonal decomposition of flow past rotating cylinder at super-critical rotating rate

    Science.gov (United States)

    Sengupta, Tapan K.; Gullapalli, Atchyut

    2016-11-01

    A spinning cylinder rotating about its axis experiences a transverse force/lift; this basic aerodynamic phenomenon is known in textbooks as the Robins-Magnus effect. Prandtl studied this flow with an inviscid irrotational model and postulated an upper limit on the lift experienced by the cylinder at a critical rotation rate. This non-dimensional rate is the ratio of the surface speed due to rotation to the oncoming free-stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence has shown violations of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution of Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow field reconstruction from a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60) and high rotation rate, due to an instability originating in the vicinity of the cylinder, using the computed solution of the Navier-Stokes equation (NSE) from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, which was established earlier in Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].
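
    Snapshot POD itself is a thin SVD of mean-subtracted snapshots; feeding it vorticity fields makes the captured quantity enstrophy rather than kinetic energy, as in the paper. A minimal sketch:

    ```python
    import numpy as np

    def snapshot_pod(snapshots, n_modes=4):
        """POD via the thin SVD; `snapshots` holds one flattened field per
        column (vorticity fields for an enstrophy-based decomposition)."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        modes = U[:, :n_modes]                     # spatial POD modes
        amps = s[:n_modes, None] * Vt[:n_modes]    # temporal coefficients
        fraction = s ** 2 / np.sum(s ** 2)         # content captured per mode
        return modes, amps, fraction

    # Reduced-order reconstruction from the leading modes:
    # field_t ~= mean[:, 0] + modes @ amps[:, t]
    ```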

  2. Sources of energy productivity change in China during 1997–2012: A decomposition analysis based on the Luenberger productivity indicator

    International Nuclear Information System (INIS)

    Wang, Ke; Wei, Yi-Ming

    2016-01-01

    Given that different energy inputs play different roles in production, and that energy policy decision-making requires an evaluation of productivity change in individual energy inputs to provide insight into the scope for improving the utilization of each specific energy input, this study develops, based on the Luenberger productivity indicator and data envelopment analysis models, an aggregated specific energy productivity indicator that combines the individual energy input productivity indicators accounting for the contribution of each specific energy input toward energy productivity change. These indicators can be further decomposed into four factors: pure efficiency change, scale efficiency change, pure technology change, and scale of technology change. The decompositions make it possible to determine which specific energy input drives energy productivity change and which of the four factors is the primary contributor. An empirical analysis of China's energy productivity change over the period 1997–2012 indicates that (i) China's energy productivity growth may be overestimated if the energy consumption structure is omitted; (ii) with respect to the contribution of specific energy inputs toward energy productivity growth, oil and electricity show positive contributions, but coal and natural gas show negative contributions; (iii) energy-specific productivity changes are mainly caused by technical changes rather than efficiency changes; and (iv) the Porter Hypothesis is partially supported in China, in that carbon emission control regulations may lead to energy productivity growth. - Highlights: • An energy-input-specific Luenberger productivity indicator is proposed. • It enables examination of the contribution of specific energy input productivity change. • It can be decomposed to identify pure and scale efficiency changes, as well as pure and scale technical changes. • China's energy productivity growth may…

  3. Development of a stress sensor based on the piezoelectric lead zirconate titanate for impact stress measurement

    Science.gov (United States)

    Liu, Yiming; Xu, Bin; Li, Lifei; Li, Bing

    2012-04-01

    The measurement of stress in concrete structures under impact loading and other strong dynamic loadings is crucial for health monitoring and damage detection. Due to its main advantages, including availability, extremely high rigidity, high natural frequency, wide measuring range, high stability, high reproducibility, high linearity and wide operating temperature range, piezoelectric (lead zirconate titanate, PZT) ceramic material has been widely used as a smart material for both sensing and actuation in the monitoring and control of engineering structures. In this paper, a stress sensor based on piezoelectric ceramics for impact stress measurement in concrete structures is developed. Because PZT is fragile, special handling and treatment are needed to protect it and allow it to survive and work properly in concrete for structural health monitoring. The commercially available PZT patch with lead wires is first coated with an insulation layer to prevent water and moisture damage, and is then packaged between two small precast concrete cylinder blocks of sufficient strength to form a smart aggregate (SA). The employed PZT patch has dimensions of 10 mm × 10 mm × 0.3 mm. To calibrate the PZT-based stress sensor for impact stress measurement, a dropping hammer was designed, and a calibration test of the sensitivity of the proposed transducer was carried out with an industrial charge amplifier. The voltage output of the stress sensor and the impact force under different free-fall heights and impact masses were recorded with a high-sampling-rate data acquisition system. Based on the test measurements, the sensitivity of the PZT-based stress sensor was determined. Results show that the output of the PZT-based stress sensor is proportional to the stress level and that the repeatability of the measurement is very good. The self-made piezoelectric stress sensor can be easily embedded in concrete and provide

  4. Functional decomposition with an efficient input support selection for sub-functions based on information relationship measures

    NARCIS (Netherlands)

    Rawski, M.; Jozwiak, L.; Luba, T.

    2001-01-01

    The functional decomposition of binary and multi-valued discrete functions and relations has been gaining more and more recognition. It has important applications in many fields of modern digital system engineering, such as combinational and sequential logic synthesis for VLSI systems, pattern

  5. COMPOSITE POLYMERIC ADDITIVES DESIGNATED FOR CONCRETE MIXES BASED ON POLYACRYLATES, PRODUCTS OF THERMAL DECOMPOSITION OF POLYAMIDE-6 AND LOW-MOLECULAR POLYETHYLENE

    Directory of Open Access Journals (Sweden)

    Polyakov Vyacheslav Sergeevich

    2012-07-01

    The optimal composite additive, which increases the stiffening time of the cement grout and improves the water resistance and compressive strength of the concrete, is a composition of polyacrylates and polymethacrylates, products of thermal decomposition of polyamide-6, and low-molecular polyethylene in a weight ratio of 1:1:0.5.

  6. Comparison Between Wind Power Prediction Models Based on Wavelet Decomposition with Least-Squares Support Vector Machine (LS-SVM) and Artificial Neural Network (ANN)

    Directory of Open Access Journals (Sweden)

    Maria Grazia De Giorgi

    2014-08-01

    A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performance of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) was evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. It is shown that hybrid methods based on LS-SVM with WD mostly outperform the other methods. A decomposition of the commonly known root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and for comparing the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact that each input had on the network training process for the ANN. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.

  7. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created from the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.

  8. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives and of the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry, rather than an empirical approach.

  9. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the solving efficiency for the sub-problems and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, whereby the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. According to the principle that "the bottleneck leads the performance of the whole manufacturing system" in the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of constructing the sub-problems, some operations in the previously scheduled sub-problem are moved into the subsequent sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value improves the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed; there is no machine breakdown, and no preemption of operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  10. Reliability analysis of offshore structures using OMA based fatigue stresses

    DEFF Research Database (Denmark)

    Silva Nabuco, Bruna; Aissani, Amina; Glindtvad Tarpø, Marius

    2017-01-01

    The focus is on the uncertainty observed in the different stresses used to predict the damage. This uncertainty can be reduced by Modal Based Fatigue Monitoring, a technique based on continuously measuring the accelerations at a few points of the structure with accelerometers… points of the structure, the stress history can be calculated at any arbitrary point of the structure. The accuracy of the estimated actual stress is analyzed by experimental tests on a scale model, where the obtained stresses are compared to strain gauge measurements. After evaluating the fatigue… stresses directly from the operational response of the structure, a reliability analysis is performed in order to estimate the reliability of using Modal Based Fatigue Monitoring for long-term fatigue studies.

  11. Residual Stress Analysis Based on Acoustic and Optical Methods

    Directory of Open Access Journals (Sweden)

    Sanichiro Yoshida

    2016-02-01

    Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely, but it is a single-point measurement. Optical interferometry is able to measure deformation, yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating them with the absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar plate specimen. A steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is joined by brazing to a cemented carbide plate of the same dimensions along the 18.5 mm side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress. Optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is correlated to compressive and tensile residual stresses qualitatively. The acoustic and optical results show reasonable agreement in the compressive and tensile residual stresses, indicating the feasibility of the idea.
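
    The acoustoelastic step in this record rests on an approximately linear dependence of wave speed on stress, v = v0(1 + K*sigma). A toy Python calculation under that assumption follows; v0, K and the measured speed are illustrative values, not data from the paper.

        # Illustrative constants; K and v0 come from material calibration.
        v0 = 5900.0        # unstressed wave speed (m/s)
        K = -1.2e-11       # acoustoelastic constant (1/Pa), material-specific
        v_meas = 5899.3    # measured speed at the reference point (m/s)
        sigma = (v_meas - v0) / (K * v0)   # residual stress estimate (Pa)
        print(sigma / 1e6, "MPa")          # about 9.9 MPa for these numbers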

  12. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals in gas-liquid two-phase flow, and at the slow convergence and liability to local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Square Support Vector Machine (LS-SVM) is presented. First, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector input to the LS-SVM classifier, and flow regimes are identified from the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
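
    A compact Python sketch of the feature pipeline this record describes, assuming the IMF matrices have already been produced by an EMD step (not shown); scikit-learn's SVC is used here as a stand-in for the LS-SVM classifier.

        import numpy as np
        from sklearn.svm import SVC   # stand-in for an LS-SVM classifier

        def svd_features(imfs, n_keep=6):
            """Singular values of the IMF matrix (n_imfs x n_samples)
            serve as the flow-regime characteristic vector."""
            return np.linalg.svd(imfs, compute_uv=False)[:n_keep]

        def train_classifier(imf_matrices, labels):
            X = np.array([svd_features(m) for m in imf_matrices])
            return SVC(kernel='rbf').fit(X, labels)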

  13. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    Science.gov (United States)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  14. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    International Nuclear Information System (INIS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-01-01

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
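
    A minimal rank-R canonical (CP) decomposition of a 3-way array by alternating least squares, in Python with NumPy: a bare-bones sketch of the ALS idea named in the two records above, using plain pseudoinverses rather than the quadrature-oriented machinery of CT-XVH2.

        import numpy as np

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def khatri_rao(A, B):
            # column-wise Kronecker product
            return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

        def cp_als(T, rank, n_iter=50, seed=0):
            """Return factors A, B, C with T[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r]."""
            rng = np.random.default_rng(seed)
            A, B, C = (rng.standard_normal((d, rank)) for d in T.shape)
            for _ in range(n_iter):
                A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
                B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
                C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
            return A, B, C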

  15. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input to a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, and the bearing running state was thereby identified. Both test and actual cases were analyzed.
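
    A sketch of the entropy-then-fusion stage in Python; the product functions are assumed to come from an LMD step that is not shown, and scikit-learn's stock KernelPCA stands in for the weight-factor-improved variant in the record.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        def shannon_entropy(x):
            """Entropy of the normalized energy distribution of one
            product function."""
            p = x**2 / np.sum(x**2)
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def fuse_features(pf_sets, n_components=3):
            """pf_sets: one (n_pfs x n_samples) array per vibration record."""
            H = np.array([[shannon_entropy(pf) for pf in pfs] for pfs in pf_sets])
            return KernelPCA(n_components=n_components, kernel='rbf').fit_transform(H)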

  16. Emergy-Based Regional Socio-Economic Metabolism Analysis: An Application of Data Envelopment Analysis and Decomposition Analysis

    OpenAIRE

    Zilong Zhang; Xingpeng Chen; Peter Heck

    2014-01-01

    Integrated analysis on socio-economic metabolism could provide a basis for understanding and optimizing regional sustainability. The paper conducted socio-economic metabolism analysis by means of the emergy accounting method coupled with data envelopment analysis and decomposition analysis techniques to assess the sustainability of Qingyang city and its eight sub-region system, as well as to identify the major driving factors of performance change during 2000–2007, to serve as the basis for f...

  17. Protection from wintertime rainfall reduces nutrient losses and greenhouse gas emissions during the decomposition of poultry and horse manure-based amendments.

    Science.gov (United States)

    Maltais-Landry, Gabriel; Neufeld, Katarina; Poon, David; Grant, Nicholas; Nesic, Zoran; Smukler, Sean

    2018-04-01

    Manure-based soil amendments (herein "amendments") are important fertility sources, but differences among amendment types and management can significantly affect their nutrient value and environmental impacts. A 6-month in situ decomposition experiment was conducted to determine how protection from wintertime rainfall affected nutrient losses and greenhouse gas (GHG) emissions in poultry (broiler chicken and turkey) and horse amendments. Changes in total nutrient concentration were measured every 3 months, changes in ammonium (NH₄⁺) and nitrate (NO₃⁻) concentrations every month, and GHG emissions of carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O) every 7-14 days. Poultry amendments maintained higher nutrient concentrations (except for K), higher emissions of CO₂ and N₂O, and lower CH₄ emissions than horse amendments. Exposing amendments to rainfall increased total N and NH₄⁺ losses in poultry amendments, P losses in turkey and horse amendments, and K losses and cumulative N₂O emissions for all amendments. However, it did not affect CO₂ or CH₄ emissions. Overall, rainfall exposure would decrease total N inputs by 37% (horse), 59% (broiler chicken), or 74% (turkey) for a given application rate (wet weight basis) after 6 months of decomposition, with similar losses for NH₄⁺ (69-96%), P (41-73%), and K (91-97%). This study confirms the benefits of facilities protected from rainfall to reduce nutrient losses and GHG emissions during amendment decomposition. The impact of rainfall protection on nutrient losses and GHG emissions was monitored during the decomposition of broiler chicken, turkey, and horse manure-based soil amendments. Amendments exposed to rainfall had large ammonium and potassium losses, resulting in a 37-74% decrease in N inputs when compared with amendments protected from rainfall. Nitrous oxide emissions were also higher with rainfall exposure, although it had no effect on carbon dioxide and methane emissions.

  18. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis, the exploitation and utilization of new clean energy is gaining more and more attention. As an important category of renewable energy, wind power has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve prediction performance for short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries respectively. On this basis, the combined model is developed based on an optimal virtual prediction scheme, whose weight matrix is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid

  19. FIB-based measurement of local residual stresses on microsystems

    Science.gov (United States)

    Vogel, Dietmar; Sabate, Neus; Gollhardt, Astrid; Keller, Juergen; Auersperg, Juergen; Michel, Bernd

    2006-03-01

    The paper comprises research results obtained for stress determination on micro- and nanotechnology components. It addresses the concern of controlling stresses introduced into sensors, MEMS and electronic devices during different micromachining processes. The method is based on deformation measurement options made available inside focused ion beam equipment. When material is removed locally by ion beam milling, existing residual stresses lead to deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. In the paper, two main milling features have been analyzed: through-hole and through-slit milling. Analytical solutions for the stress release fields of in-plane stresses have been derived and compared to the respective experimental findings. Their good agreement allows a method to be established for the determination of residual stress values, which is demonstrated for thin membranes manufactured by silicon micro technology. Some emphasis is placed on the elimination of the main error sources for stress determination, such as rigid body displacements and rotations due to drifts of experimental conditions under FIB imaging. To illustrate potential application areas of the method, residual stress suppression by ion implantation is evaluated and reported here.

  20. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double-bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types from the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with the Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal has a high potential for distinguishing different vegetation types. Scattering components from the SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  1. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  2. Adherence to internet-based mobile-supported stress management

    DEFF Research Database (Denmark)

    Zarski, A C; Lehr, D.; Berking, M.

    2016-01-01

    The aim of this study was to investigate the influence of different guidance formats (content-focused guidance, adherence-focused guidance, and administrative guidance) on adherence and to identify predictors of nonadherence in an Internet-based mobile-supported stress management intervention (i.e., GET.ON Stress) for employees. Methods: The data from the groups who received the intervention were pooled from three randomized controlled trials (RCTs) that evaluated the efficacy of the same Internet-based mobile-supported stress management intervention (N=395). The RCTs only differed in terms of the guidance format… of the predictors significantly predicted nonadherence. Conclusions: Guidance has been shown to be an influential factor in promoting adherence to an Internet-based mobile-supported stress management intervention. Adherence-focused guidance, which included email reminders and feedback on demand, was equivalent…

  3. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  4. Formation of carbides and their effects on stress rupture of a Ni-base single crystal superalloy

    International Nuclear Information System (INIS)

    Liu, L.R.; Jin, T.; Zhao, N.R.; Sun, X.F.; Guan, H.R.; Hu, Z.Q.

    2003-01-01

    Creep tests of a nickel-base single crystal superalloy with a minor C addition, and of a carbon-free variant, were carried out at different temperatures and stresses. Correlations between microstructural change and testing temperature and stress were established through scanning electron microscopy (SEM) and transmission electron microscopy (TEM), detailing the rafted microstructure and carbide precipitation. The results showed that the minor carbon addition prolonged the second stage of the creep strain curves and improved creep properties. Some carbide precipitated during the creep tests in the modified alloy: M₂₃C₆ carbide precipitated at lower temperatures (871-982 °C), while M₆C carbide precipitated at higher temperatures (>1000 °C), all of which is considered beneficial to creep properties. A small amount of MC carbide formed during solidification, and together with its decomposition product M₆C it was detrimental to mechanical properties; these carbides, together with micropores, provided sites for crack initiation and led to the final fracture.

  5. Comparison of the thermal decomposition processes of several aminoalcohol-based ZnO inks with one containing ethanolamine

    Energy Technology Data Exchange (ETDEWEB)

    Gómez-Núñez, Alberto [University of Barcelona, Department of Electronics, Martí i Franquès 1, E08028-Barcelona (Spain); Roura, Pere [University of Girona, Department of Physics, Campus Montilivi, Edif. PII, E17071-Girona, Catalonia (Spain); López, Concepción [University of Barcelona, Department of Inorganic Chemistry, Martí i Franquès 1, E08028-Barcelona (Spain); Vilà, Anna, E-mail: avila@el.ub.edu [University of Barcelona, Department of Electronics, Martí i Franquès 1, E08028-Barcelona (Spain)

    2016-09-15

    Highlights: • Four alternatives to ethanolamine as the stabilizer for the chemical synthesis of ZnO from zinc acetate dihydrate are proposed: aminopropanol, aminomethyl butanol, aminophenol and aminobenzyl alcohol. • The thermal decomposition processes are described; nitrogen-containing cyclic compounds result. • Molecular flexibility helps decomposition; in particular, aliphatic aminoalcohols (quite flexible) decompose the precursor at lower temperatures than aromatic ones (more rigid). • Aminopropanol, aminomethyl butanol and aminobenzyl alcohol crystallize ZnO at a lower temperature than ethanolamine. • Nitrogen-containing cyclic species have been identified and evolve in all cases (ethanolamine included) at temperatures up to 600 °C. - Abstract: Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as the precursor salt and one of the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) or aminobenzyl alcohol (AB) as the stabilizing agent. Their thermal decomposition has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA): it gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms are not completely removed from the ZnO film at the highest temperature of our experiments (600 °C).

  6. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    Science.gov (United States)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal symptoms (e.g. heart disease, lung cancer, asthma, etc.), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The nature of NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
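
    A simplified Python sketch of the TDIC idea: a Pearson correlation computed inside a sliding window over two aligned IMFs. The real TDIC adapts the window length to the instantaneous period of the IMF pair; a fixed window is used here for brevity.

        import numpy as np

        def tdic(imf_a, imf_b, win):
            """Windowed correlation between two aligned IMFs."""
            half = win // 2
            out = np.full(len(imf_a), np.nan)
            for t in range(half, len(imf_a) - half):
                a = imf_a[t - half:t + half + 1]
                b = imf_b[t - half:t + half + 1]
                out[t] = np.corrcoef(a, b)[0, 1]
            return out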

  7. Effects of catalyst-bed’s structure parameters on decomposition and combustion characteristics of an ammonium dinitramide (ADN)-based thruster

    International Nuclear Information System (INIS)

    Yu, Yu-Song; Li, Guo-Xiu; Zhang, Tao; Chen, Jun; Wang, Meng

    2015-01-01

    Highlights: • The decomposition and combustion process is investigated by a numerical method. • Heat transfer in the catalyst bed is modeled using a non-isothermal and radiation model. • The wall heat transfer can impact the distribution of temperature and species. • The values of catalyst bed length, diameter and wall thickness are optimized. - Abstract: The present investigation numerically studies the evolution of decomposition and combustion within an ADN-based thruster, and the effects of three structure parameters of the catalyst bed (length, diameter, and wall thickness) on the general performance of the thruster are systematically investigated. Based upon the calculated results, the temperature distribution follows a Gaussian profile at the exits of the catalyst bed and the combustion chamber, and the temperature is clearly affected by each of the three structure parameters. As each parameter increases, the temperature first increases and then decreases, and there exists an optimal design value at which the temperature is highest. Comparison of the maximal temperature at the combustion chamber exit and the specific impulse shows that the wall thickness plays an important role in the general performance of the ADN-based thruster, while the catalyst bed length has the weakest effect among the three structure parameters.

  8. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme.

    Science.gov (United States)

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-11-06

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models and is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme into the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The results showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and from 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing the models, we found that BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to the ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in the prediction of τ'E and ξw are essential to reduce the uncertainties in carbon storage predicted by the BEPS and similar diagnostic models.
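
    A worked numerical sketch of the traceable decomposition described above, with made-up illustrative values (none of the numbers below are from the record): the environmental scalar rescales the baseline residence time, which, together with the carbon input, sets the storage capacity.

        # Illustrative values only.
        tau_baseline = 25.0            # baseline residence time tau'_E (years)
        xi_t, xi_w = 0.80, 0.40        # temperature and moisture scalars
        xi = xi_t * xi_w               # combined environmental scalar = 0.32
        tau_E = tau_baseline / xi      # actual residence time = 78.1 years
        npp = 0.6                      # carbon input (kg C per m^2 per year)
        capacity = tau_E * npp         # storage capacity ~ 46.9 kg C per m^2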

  9. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  10. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic, FeS₂) has been investigated using X-ray diffraction and ⁵⁷Fe Mössbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  11. Polymer-based stress sensor with integrated readout

    DEFF Research Database (Denmark)

    Thaysen, Jacob; Yalcinkaya, Arda Deniz; Vettiger, P.

    2002-01-01

    Based on the facts that SU-8 is considerably softer than silicon and that a gold resistor is easily incorporated in SU-8, we have proven that an SU-8-based cantilever sensor is almost as sensitive to stress changes as the silicon piezoresistive cantilever. First, the surface stress sensing principle is discussed, from which it can be shown…, noise and device failure. The characterization shows that there is good agreement between the expected and the obtained performance.

  12. Stochastic shock response spectrum decomposition method based on probabilistic definitions of temporal peak acceleration, spectral energy, and phase lag distributions of mechanical impact pyrotechnic shock test data

    Science.gov (United States)

    Hwang, James Ho-Jin; Duran, Adam

    2016-08-01

    Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL)-designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, root mean square (RMS) acceleration, and phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
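
    A short Python sketch of the two quoted definitions and of one damped-sinusoid synthesis component; the array and parameter names are illustrative.

        import numpy as np

        def peak_and_energy_ratios(max_srs, accel_band):
            """PR = max SRS / temporal peak; ER = max SRS / RMS, both for
            one band-filtered acceleration time history."""
            peak = np.max(np.abs(accel_band))
            rms = np.sqrt(np.mean(accel_band**2))
            return max_srs / peak, max_srs / rms

        def damped_sinusoid(t, amp, freq_hz, zeta, delay):
            """One component used to synthesize an input time history."""
            tau = np.clip(t - delay, 0.0, None)
            w = 2 * np.pi * freq_hz
            return amp * np.exp(-zeta * w * tau) * np.sin(w * tau)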

  13. Proton mass decomposition

    Science.gov (United States)

    Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei

    2018-03-01

    We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)%, respectively, in the MS-bar scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS-bar scheme are consistent with the global analysis at µ = 2 GeV.
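
    As a consistency check, the quoted central values close to the full proton mass; writing M_q, M_E, M_g and M_a here for the quark mass, quark energy, glue field energy and trace anomaly terms (notation chosen for this note, not taken from the record):

        \[
          \frac{M_q + M_E + M_g + M_a}{M_p} = 0.09 + 0.31 + 0.37 + 0.23 = 1.00 .
        \]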

  14. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    International Nuclear Information System (INIS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-01-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of the sensors to achieve an accuracy compatible with rigorous sensor models. At present, the main method to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity but also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning problem of the RFM and to find the multicollinearity in the normal matrix.

  15. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    Science.gov (United States)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of the sensors to achieve an accuracy compatible with rigorous sensor models. At present, the main method to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity but also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning problem of the RFM and to find the multicollinearity in the normal matrix.
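
    A compact sketch of the Belsley-style computation underlying CIVDP, in Python with NumPy; `X` is a dense design matrix, and a variance-decomposition proportion above roughly 0.5 on a row with a high condition index flags the coefficients involved in a near-dependency.

        import numpy as np

        def civdp(X):
            """Condition indices and variance-decomposition proportions."""
            Xs = X / np.linalg.norm(X, axis=0)          # scale columns
            u, s, vt = np.linalg.svd(Xs, full_matrices=False)
            cond_idx = s.max() / s                      # one per singular value
            phi = (vt.T / s)**2                         # v_jk^2 / s_k^2
            vdp = phi / phi.sum(axis=1, keepdims=True)  # rows: coefficients,
            return cond_idx, vdp                        # columns: singular values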

  16. COMPOSITIONS BASED ON PALLADIUM(II) AND COPPER(II) COMPOUNDS, HALIDE IONS, AND BENTONITE FOR OZONE DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    T. L. Rakitskaya

    2017-05-01

    …bromide ion. For the Cu(II)-KBr/N-Bent composition, kinetic and calculated data show that, in the presence of bromide ions, copper(II) inhibits ozone decomposition. For the Pd(II)-KBr/N-Bent composition, it has been found that the maximum activity is attained at CPd(II) = 1.02·10⁻⁵ mol/g. For the bimetallic Pd(II)-Cu(II)-KBr/N-Bent composition, changes in the τ0, τ1/2, k1/2, and Q1/2 parameters depending on the Pd(II) content are similar to those for the monometallic Pd(II)-KBr/N-Bent composition; however, the values of the parameters are higher for the monometallic system. Thus, the inhibiting effect of Cu(II) is observed even in the presence of palladium(II).

  17. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  18. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  19. Effect of mindfulness-based stress reduction on sleep quality

    DEFF Research Database (Denmark)

    Andersen, Signe; Würtzen, Hanne; Steding-Jessen, Marianne

    2013-01-01

    The prevalence of sleep disturbance is high among cancer patients, and the sleep problems tend to last for years after the end of treatment. As part of a large randomized controlled clinical trial (the MICA trial, NCT00990977) of the effect of mindfulness-based stress reduction (MBSR) on psycho…

  20. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes requires two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  1. Stress

    Science.gov (United States)

    ... can be life-saving. But chronic stress can cause both physical and mental harm. There are at least three different types of stress: routine stress related to the pressures of work, family, and other daily responsibilities; stress brought about ...

  2. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    Science.gov (United States)

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD)-based dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus, no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability
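
    A sketch in Python of the projection-coefficient features described above, assuming a learned dictionary matrix is available (atoms stacked as rows); scikit-learn's SVC plays the role of the support vector machine classifier.

        import numpy as np
        from sklearn.svm import SVC

        def projection_features(segment, dictionary):
            """Least-squares projection coefficients of an EEG segment
            onto the dictionary atoms (dictionary: n_atoms x n_samples)."""
            coef, *_ = np.linalg.lstsq(dictionary.T, segment, rcond=None)
            return coef

        def fit_detector(segments, labels, dictionary):
            X = np.array([projection_features(s, dictionary) for s in segments])
            return SVC(kernel='rbf').fit(X, labels)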

  3. Statistical model of stress corrosion cracking based on extended

    Indian Academy of Sciences (India)

    The mechanism of stress corrosion cracking (SCC) has been discussed for decades. Here I propose a model of SCC that reflects the brittle character of the fracture, based on the variational principle under an approximate assumption of thermal equilibrium. In that model the functionals are expressed with extended forms of ...

  4. Statistical model of stress corrosion cracking based on extended ...

    Indian Academy of Sciences (India)

    2016-09-07

    In the previous paper (Pramana – J. Phys. 81(6), 1009 (2013)), the mechanism of stress corrosion cracking (SCC) based on a non-quadratic form of the Dirichlet energy was proposed and its statistical features were discussed. Following those results, we discuss here how SCC propagates on a pipe wall ...

  5. [Value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma after TACE treatment].

    Science.gov (United States)

    Xing, Gusheng; Wang, Shuang; Li, Chenrui; Zhao, Xinming; Zhou, Chunwu

    2015-03-01

    To investigate the value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma (HCC) after transcatheter arterial chemoembolization (TACE). Thirty-two consecutive HCC patients previously treated with TACE were included in this study. For the follow-up, arterial-phase (AP) and venous-phase (VP) dual-phase CT scans were performed with a single-source dual-energy CT scanner (Discovery CT 750HD, GE Healthcare). Iodine concentrations were derived from iodine-based material-decomposition images in the liver parenchyma, tumors and coagulation necrosis (CN) areas. The iodine concentration difference (ICD) between the arterial phase (AP) and venous phase (VP) was quantitatively evaluated in the different tissues. The lesion-to-normal-parenchyma iodine concentration ratio (LNR) was calculated. ROC analysis was performed for the qualitative evaluation, and the area under the ROC curve (Az) was calculated to represent the diagnostic ability of ICD and LNR. In the 32 HCC patients, the regions of interest (ROI) for iodine concentration included liver parenchyma (n=42), tumors (n=28) and coagulation necrosis (n=24). During the AP, the iodine concentration of CNs (median 0.088 µg/mm³) was significantly higher than that of the tumors (0.064 µg/mm³, P=0.022) and the liver parenchyma (0.048 µg/mm³, P=0.005), but there was no significant difference between liver parenchyma and tumors (P=0.454). During the VP, the iodine concentration in hepatic parenchyma (median 0.181 µg/mm³) was significantly higher than that in CNs (0.140 µg/mm³, P=0.042). There was no significant difference between liver parenchyma and tumors, or between CNs and tumors (both P>0.05). The median ICD in CNs was 0.006 µg/mm³, significantly lower than that of the HCC (0.201 µg/mm³, P…). Quantitative iodine-based material decomposition images with gemstone spectral CT imaging can improve the diagnostic efficacy of CT imaging

  6. Energy efficiency of China's industry sector: An adjusted network DEA (data envelopment analysis)-based decomposition analysis

    International Nuclear Information System (INIS)

    Liu, Yingnan; Wang, Ke

    2015-01-01

    The process of energy conservation and emission reduction in China requires the specific and accurate evaluation of the energy efficiency of the industry sector because this sector accounts for 70 percent of China's total energy consumption. Previous studies have used a “black box” DEA (data envelopment analysis) model to obtain the energy efficiency without considering the inner structure of the industry sector. However, differences in the properties of energy utilization (final consumption or intermediate conversion) in different industry departments may lead to bias in energy efficiency measures under such “black box” evaluation structures. Using the network DEA model and efficiency decomposition technique, this study proposes an adjusted energy efficiency evaluation model that can characterize the inner structure and associated energy utilization properties of the industry sector so as to avoid evaluation bias. By separating the energy-producing department and energy-consuming department, this adjusted evaluation model was then applied to evaluate the energy efficiency of China's provincial industry sector. - Highlights: • An adjusted network DEA (data envelopment analysis) model for energy efficiency evaluation is proposed. • The inner structure of industry sector is taken into account for energy efficiency evaluation. • Energy final consumption and energy intermediate conversion processes are separately modeled. • China's provincial industry energy efficiency is measured through the adjusted model.

  7. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
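
    As a concrete illustration of the regularized-deconvolution step described above, the following minimal Python sketch discretizes the convolution model c = A r with a Toeplitz matrix built from the arterial input function (AIF) and applies a truncated-SVD solve. The gamma-variate AIF, the exponential residue function, and the simple relative singular-value threshold (standing in for the paper's regression-based choice of regularization parameter) are all illustrative assumptions.

```python
import numpy as np

def toeplitz_conv_matrix(aif, dt):
    """Lower-triangular Toeplitz matrix discretizing convolution with the AIF."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1] * dt
    return A

def svd_deconvolve(A, c, rel_threshold=0.1):
    """Truncated-SVD solve of A r = c; singular values below rel_threshold * s_max
    are discarded (a simple stand-in for the regression-selected parameter)."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > rel_threshold * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ c))

rng = np.random.default_rng(0)
dt = 1.0
t = np.arange(0, 60, dt)
aif = (t / 6.0) ** 3 * np.exp(-t / 2.0)        # gamma-variate arterial input
r_true = np.exp(-t / 8.0)                      # tissue residue function
A = toeplitz_conv_matrix(aif, dt)
c = A @ r_true + 0.01 * rng.standard_normal(len(t))   # noisy tissue curve
r_est = svd_deconvolve(A, c)
print("true vs estimated residue peak:", r_true.max(), r_est.max())
```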

  8. A Hybrid Forecasting Model Based on Empirical Mode Decomposition and the Cuckoo Search Algorithm: A Case Study for Power Load

    Directory of Open Access Journals (Sweden)

    Jiani Heng

    2016-01-01

    Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach comprising EMD (Empirical Mode Decomposition), CSA (Cuckoo Search Algorithm) and WNN (Wavelet Neural Network) is proposed. This approach produces a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm) and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load series from New South Wales, Australia is used as a case study. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load, and can be an effective tool in planning and dispatch for smart grids.
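
    The decompose-forecast-aggregate idea behind such hybrids can be sketched in a few lines. The Python sketch below assumes the third-party PyEMD package (installed as EMD-signal) for the decomposition; a plain least-squares autoregression stands in for the paper's CSA-optimized wavelet neural network, and the synthetic load series is invented for the example.

```python
import numpy as np
from PyEMD import EMD   # third-party package, installed via: pip install EMD-signal

def ar_forecast(x, order=4, steps=1):
    """Least-squares AR(order) forecast, a simple stand-in for the CSA-tuned WNN."""
    X = np.column_stack([x[i: len(x) - order + i] for i in range(order)])
    coefs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    hist = list(x[-order:])
    out = []
    for _ in range(steps):
        hist.append(np.dot(coefs, hist[-order:]))
        out.append(hist[-1])
    return np.array(out)

t = np.arange(500)
load = 100 + 10 * np.sin(2 * np.pi * t / 48) \
       + np.random.default_rng(1).standard_normal(500)   # synthetic half-hourly load

imfs = EMD()(load)                        # decompose into intrinsic mode functions
forecast = sum(ar_forecast(imf, steps=4) for imf in imfs)  # forecast and recombine
print("4-step-ahead load forecast:", forecast)
```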

  9. Design of Online Monitoring and Fault Diagnosis System for Belt Conveyors Based on Wavelet Packet Decomposition and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Wei Li

    2013-01-01

    Belt conveyors are widely used in coal mines and other manufacturing plants, and their main components are large numbers of idlers. Idler faults can directly disrupt daily production. In this paper, a fault diagnosis method combining wavelet packet decomposition (WPD) and support vector machine (SVM) is proposed for monitoring belt conveyors, with a focus on the detection of idler faults. Since the number of idlers can be large, one acceleration sensor is used to gather the vibration signals of several idlers, reducing the number of sensors required. The vibration signals are decomposed with WPD, and the energy of each frequency band is extracted as a feature. The features are then used to train an SVM to detect idler faults. The proposed method is first tested on a testbed, and an online monitoring and fault diagnosis system is then designed for belt conveyors. An experiment on a belt conveyor in service verifies that the proposed system can locate faulty idlers with a limited number of sensors, which is important for operating belt conveyors in practice.
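
    The feature pipeline, WPD band energies feeding an SVM, can be sketched as follows in Python, assuming the PyWavelets and scikit-learn packages; the wavelet choice ('db4'), decomposition level, and toy fault signature are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt                      # PyWavelets
from sklearn.svm import SVC      # scikit-learn

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Normalized energy of each terminal wavelet-packet node (frequency order)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    return energies / energies.sum()

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                     # 0 = healthy idler, 1 = faulty
    for _ in range(50):
        sig = rng.standard_normal(1024)
        if label:                        # fault adds a narrow-band vibration component
            sig += 2.0 * np.sin(2 * np.pi * 0.3 * np.arange(1024))
        X.append(wpd_energy_features(sig))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```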

  10. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  11. Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition

    Science.gov (United States)

    Magee, Daniel J.; Niemeyer, Kyle E.

    2018-03-01

    The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9x for a range of problem sizes compared with simple GPU versions, and 7-300x compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9x worse than a standard implementation for all problem sizes.

  12. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 °C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys, bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  13. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier, as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to everyone interested in geometric topology.

  14. StressPhone: smartphone based platform for measurement of cortisol for stress detection (Conference Presentation)

    Science.gov (United States)

    Jain, Aadhar; Rey, Elizabeth; Lee, Seoho; O'Dell, Dakota; Erickson, David

    2016-03-01

    Anxiety disorders are estimated to be the most common mental illness in the US, affecting around 40 million people, and job-related stress is estimated to cost US industry up to $300 billion through lower productivity and absenteeism. A personal diagnostic device that could help identify stressed individuals would therefore be a huge boost for workforce productivity. We are therefore developing a point-of-care diagnostic device that can be integrated with smartphones or tablets for the measurement of cortisol, a stress-related salivary biomarker strongly involved in the body's fight-or-flight response to a stressor (physical or mental). The device is based on a competitive lateral flow assay whose results can be read and quantified through an accessory compatible with the smartphone. In this presentation, we report the development and results of such an assay and the integrated device. We then present the results of a study relating the diurnal patterns of cortisol levels to the alertness of an individual, based on the individual's circadian rhythm and sleep patterns. We hope that combining chemical stress biomarkers with physical biomarkers will lead to better informed and optimized activity schedules that maximize work output.

  15. Analysis of spinodal decomposition in Fe-32 and 40 at.% Cr alloys using phase field method based on linear and nonlinear Cahn-Hilliard equations

    Directory of Open Access Journals (Sweden)

    Orlando Soriano-Vargas

    2016-12-01

    Spinodal decomposition during aging of Fe-Cr alloys was studied by numerically solving the linear and nonlinear Cahn-Hilliard partial differential equations with the explicit finite difference method. The numerical simulations appropriately described the mechanism, morphology and kinetics of phase decomposition during the isothermal aging of these alloys. The growth kinetics of phase decomposition was observed to be very slow during the early stages of aging and to increase considerably as aging progressed. The nonlinear equation was found to be more suitable than the linear one for describing the early stages of spinodal decomposition.
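
    A minimal 1-D sketch of the explicit finite-difference scheme for the nonlinear Cahn-Hilliard equation is shown below, using the dimensionless double-well free energy with f'(c) = c^3 - c and periodic boundaries; the grid, mobility, gradient-energy coefficient, and mean composition are illustrative assumptions, not the Fe-Cr parameters used in the paper.

```python
import numpy as np

def lap(c, dx):
    """Periodic 1-D Laplacian by central differences."""
    return (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx ** 2

n, dx, dt, M, kappa = 128, 1.0, 0.01, 1.0, 1.0
rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal(n)         # small fluctuations about c = 0

for _ in range(20000):
    mu = c ** 3 - c - kappa * lap(c, dx)  # chemical potential df/dc - kappa * lap(c)
    c += dt * M * lap(mu, dx)             # explicit Euler update of dc/dt = M lap(mu)

print("composition range after simulated aging:", c.min(), c.max())
```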

  16. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
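
    The variance-based sensitivity indices at the heart of this approach can be illustrated on a deterministic toy function with the classic pick-freeze Monte Carlo estimator (Jansen's form), sketched below in Python; the paper applies the same Sobol-Hoeffding decomposition to the standardized Poisson processes driving each reaction channel, which this sketch does not reproduce.

```python
import numpy as np

def model(x):
    """Toy 3-input model standing in for a simulator output."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(2)
n, d = 200_000, 3
A = rng.random((n, d))
B = rng.random((n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # all inputs from A except x_i from B
    # Jansen's pick-freeze estimator of the first-order index S_i
    Si = 1.0 - 0.5 * np.mean((fB - model(ABi)) ** 2) / var
    print(f"first-order Sobol index S{i + 1} ~= {Si:.3f}")
```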

  17. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  18. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  19. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  20. Evaluation of an app-based stress protocol

    Directory of Open Access Journals (Sweden)

    Noeh Claudius

    2016-09-01

    Stress is a major influence on the quality of life in our fast-moving society. This paper describes a standardized and contemporary protocol that is capable of inducing moderate psychological stress in a laboratory setting, and evaluates its effects on physiological biomarkers. The protocol, called the “THM-Stresstest”, mainly consists of a rest period (30 min), an app-based stress test under the surveillance of an audience (4 min) and a regeneration period (32 min). We investigated 12 subjects to evaluate the developed protocol and found significant changes in heart rate variability, electromyography, electrodermal activity, and salivary cortisol and α-amylase. From these data we conclude that the THM-Stresstest can serve as a psychobiological tool for provoking responses in the cardiovascular, endocrine and exocrine systems as well as the sympathetic nervous system.

  1. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    Science.gov (United States)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.
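
    The subdomain-iteration principle underlying Schwarz waveform relaxation can be illustrated, in a much simpler steady setting, by classical overlapping Schwarz on a 1-D Poisson problem, as in the Python sketch below; the paper's semiclassical transmission conditions for the time-dependent Schrödinger equation are far more elaborate and are not attempted here.

```python
import numpy as np

def solve_dirichlet(f, left, right, h):
    """Direct tridiagonal solve of -u'' = f on interior points with Dirichlet data."""
    m = len(f)
    A = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    rhs = h ** 2 * f.copy()
    rhs[0] += left
    rhs[-1] += right
    return np.linalg.solve(A, rhs)

n = 101                                   # global grid on [0, 1], u(0) = u(1) = 0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution is sin(pi x)
u = np.zeros(n)
mid, ov = n // 2, 5                       # interface index and overlap width

for _ in range(50):                       # alternating Schwarz sweeps
    # left subdomain: points 0 .. mid+ov, right boundary taken from current iterate
    u[1:mid + ov] = solve_dirichlet(f[1:mid + ov], 0.0, u[mid + ov], h)
    # right subdomain: points mid-ov .. n-1, left boundary from current iterate
    u[mid - ov + 1:n - 1] = solve_dirichlet(f[mid - ov + 1:n - 1], u[mid - ov], 0.0, h)

print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```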

  2. Prion-based memory of heat stress in yeast.

    Science.gov (United States)

    Chernova, Tatiana A; Chernoff, Yury O; Wilkinson, Keith D

    2017-05-04

    Amyloids and amyloid-based prions are self-perpetuating protein aggregates which can spread by converting a normal protein of the same sequence into a prion form. They are associated with diseases in humans and mammals, and control heritable traits in yeast and other fungi. Some amyloids are implicated in biologically beneficial processes. As prion formation generates reproducible memory of a conformational change, prions can be considered as molecular memory devices.  We have demonstrated that in yeast, stress-inducible cytoskeleton-associated protein Lsb2 forms a metastable prion in response to high temperature. This prion promotes conversion of other proteins into prions and can persist in a fraction of cells for a significant number of cell generations after stress, thus maintaining the memory of stress in a population of surviving cells. Acquisition of an amino acid substitution required for Lsb2 to form a prion coincides with acquisition of increased thermotolerance in the evolution of Saccharomyces yeast. Thus the ability to form an Lsb2 prion in response to stress coincides with yeast adaptation to growth at higher temperatures. These findings intimately connect prion formation to the cellular response to environmental stresses.

  3. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (λ=254 nm) in aqueous solution, adrenaline, isoprenaline and noradrenaline were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining these conversions, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that the inner filter effect could be neglected. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  4. Stress-based Variable-inductor for Electronic Ballasts

    DEFF Research Database (Denmark)

    Zhang, Lihui; Xia, Yongming; Lu, Kaiyuan

    2015-01-01

    Current-controlled variable inductors adjust the inductance of an alternating current (ac) coil by applying a controlled dc current to saturate the iron cores of the ac coil. The controlled dc current has to be maintained during operation, which results in increased power losses. This paper presents a new stress-based variable inductor that controls inductance using the inverse magnetostrictive effect of a magnetostrictive material. The stress can be applied by a piezoelectric material, and thus a voltage-controlled variable inductor can be realized with zero power consumption. The new stress-based variable inductor concept is validated using a 3-D finite-element analysis. A prototype was manufactured, and the experimental results are presented. A linear relationship between inductance and applied stress can be achieved.

  5. Stress corrosion crack tip microstructure in nickel-based alloys

    International Nuclear Information System (INIS)

    Shei, S.A.; Yang, W.J.

    1994-04-01

    Stress corrosion cracking behavior of several nickel-base alloys in high temperature caustic environments has been evaluated. The crack tip and fracture surfaces were examined using Auger/ESCA and Analytical Electron Microscopy (AEM) to determine the near crack tip microstructure and microchemistry. Results showed formation of chromium-rich oxides at or near the crack tip and nickel-rich de-alloying layers away from the crack tip. The stress corrosion resistance of different nickel-base alloys in caustic may be explained by the preferential oxidation and dissolution of different alloying elements at the crack tip. Alloy 600 (UNS N06600) shows good general corrosion and intergranular attack resistance in caustic because of its high nickel content. Thermally treated Alloy 690 (UNS N06690) and Alloy 600 provide good stress corrosion cracking resistance because of high chromium contents along grain boundaries. Alloy 625 (UNS N06625) does not show as good stress corrosion cracking resistance as Alloy 690 or Alloy 600 because of its high molybdenum content.

  6. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    The aim of this study was to introduce a constructive method for computing a symplectic singular value decomposition (SVD)-like decomposition of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. The approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues for structured matrices such as the Hamiltonian matrix JAA^T.

  7. Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

    Directory of Open Access Journals (Sweden)

    Guang Liu

    2016-12-01

    Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly terrain. Bamboo trunks are very straight and smooth, so bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and provide distinctive information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective method for obtaining information on ground objects, but most relevant research has been based on only two types of remote sensing image. This paper proposes a new fusion scheme that combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform can enhance spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been applied widely to the analysis of nonlinear signals and to the non-stationary signal of SAR. The fusion of SAR imagery with the fused panchromatic and multispectral imagery using BEMD is based on the frequency information of the images. The proposed fusion scheme proved to be an effective remote sensing image interpretation method: the entropy and spatial frequency of the fused images were improved in comparison with techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, the information entropy of the BEMD-based fusion image improves by about 0.13-0.38; compared with the other three methods, it improves by about 0.06-0.12. The average gradient of BEMD is 4%-6% greater than that of the other methods, and BEMD maintains a spatial frequency 3.2-4.0 higher than the other methods.

  8. Self-guided internet-based and mobile-based stress management for employees

    DEFF Research Database (Denmark)

    Ebert, D. D.; Heber, E.; Berking, M.

    2016-01-01

    Objective: This randomised controlled trial (RCT) aimed to evaluate the efficacy of a self-guided internet-based stress management intervention (iSMI) for employees compared to a 6-month wait-list control group (WLC), with full access for both groups to treatment as usual. Method: A sample of 264 ... of stressed employees. Internet-based self-guided interventions could be an acceptable, effective and potentially cost-effective approach to reduce the negative consequences associated with work-related stress.

  9. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    Science.gov (United States)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. Instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. The tensor subspaces are then built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of multidimensional methods, as is shown and analyzed in this paper with real examples of object detection in digital images.
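
    A minimal NumPy sketch of the Higher-Order SVD used to build such tensor subspaces is given below; the Extended Structural Tensor construction and the projection-distance classifier are omitted, and the random tensor and rank choices are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product T x_n M."""
    Tm = np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0))
    return np.moveaxis(Tm, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: orthonormal factor matrices U_n and the core tensor."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
          for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(Us):
        core = mode_mult(core, U.T, m)
    return core, Us

rng = np.random.default_rng(3)
T = rng.standard_normal((20, 20, 10))
core, Us = hosvd(T, ranks=(5, 5, 3))

# Reconstruct from the subspaces and measure the approximation quality.
T_hat = core
for m, U in enumerate(Us):
    T_hat = mode_mult(T_hat, U, m)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```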

  10. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  11. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  12. Effect of dislocations on spinodal decomposition in Fe-Cr alloys

    International Nuclear Information System (INIS)

    Li Yongsheng; Li Shuxiao; Zhang Tongyi

    2009-01-01

    Phase-field simulations of spinodal decomposition in Fe-Cr alloys with dislocations were performed by using the Cahn-Hilliard diffusion equation. The stress field of dislocations was calculated in real space via Stroh's formalism, while the composition inhomogeneity-induced stress field and the diffusion equation were numerically calculated in Fourier space. The simulation results indicate that dislocation stress field facilitates, energetically and kinetically, spinodal decomposition, making the phase separation faster and the separated phase particles bigger at and near the dislocation core regions. A tilt grain boundary is thus a favorable place for spinodal decomposition, resulting in a special microstructure morphology, especially at the early stage of decomposition.

  13. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of an analysis of the decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography-mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%.

  14. State of charge estimation for lithium-ion pouch batteries based on stress measurement

    International Nuclear Information System (INIS)

    Dai, Haifeng; Yu, Chenchen; Wei, Xuezhe; Sun, Zechang

    2017-01-01

    State of charge (SOC) estimation is one of the important tasks of a battery management system (BMS). Unlike previous work, a novel SOC estimation method for pouch lithium-ion battery cells based on stress measurement is proposed. Through a comprehensive experimental study, we find that the stress of the battery during charge/discharge is composed of a static stress and a dynamic stress. The static stress, which is the stress measured in the equilibrium state, corresponds to SOC; this phenomenon underpins the design of our stress-based SOC estimation. The dynamic stress, on the other hand, is influenced by multiple factors including charge accumulation or depletion, current and historical operation, so a multiple regression model of the dynamic stress is established. The SOC estimation method is founded on the relationship between static stress and SOC together with the dynamic stress model. Experimental results show that the stress-based method performs well with good accuracy, and the method offers a novel perspective for SOC estimation. - Highlights: • A State of Charge estimator based on stress measurement is proposed. • The stress during charge and discharge is investigated with comprehensive experiments. • Effects of SOC, current, and operation history on battery stress are well studied. • A multiple regression model of the dynamic stress is established.
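
    The two-part estimation idea, inverting a static stress-SOC calibration curve and correcting for a regression-modeled dynamic stress, can be sketched as follows; the calibration curve, regression features, and coefficients below are synthetic placeholders, not the paper's measured data.

```python
import numpy as np

# Hypothetical static calibration: equilibrium stress (kPa) vs SOC, monotone.
soc_grid = np.linspace(0.0, 1.0, 11)
static_stress = 20.0 + 80.0 * soc_grid ** 1.5

def soc_from_static_stress(stress):
    """Invert the calibration curve by interpolation."""
    return np.interp(stress, static_stress, soc_grid)

# Hypothetical dynamic-stress regression on [current, charge throughput, history].
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([5.0, 12.0, -3.0]) + 0.5 * rng.standard_normal(200)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

measured_stress = 75.0                   # total stress reading (kPa)
dynamic_part = X[0] @ beta               # regression-predicted dynamic component
print("SOC estimate:", soc_from_static_stress(measured_stress - dynamic_part))
```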

  15. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang; Yang, Chao; Cai, Xiaochuan; Keyes, David E.

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term.

  16. LMDI Decomposition of Energy-Related CO2 Emissions Based on Energy and CO2 Allocation Sankey Diagrams: The Method and an Application to China

    Directory of Open Access Journals (Sweden)

    Linwei Ma

    2018-01-01

    This manuscript develops a logarithmic mean Divisia index I (LMDI) decomposition method based on energy and CO2 allocation Sankey diagrams to analyze the contributions of various influencing factors to the growth of energy-related CO2 emissions on a national level. Compared with previous methods, it can further consider the influence of energy supply efficiency. Two key parameters, the primary energy quantity converted factor (KPEQ) and the primary carbon dioxide emission factor (KC), were introduced to calculate the equilibrium data for the whole process of energy utilization and related CO2 emissions. The data were used to map energy and CO2 allocation Sankey diagrams. Based on these parameters, we built an LMDI method with a higher technical resolution and applied it to decompose the growth of energy-related CO2 emissions in China from 2004 to 2014. The results indicate that GDP growth per capita is the main factor driving the growth of CO2 emissions, while the reduction of energy intensity, the improvement of energy supply efficiency, and the introduction of non-fossil fuels in heat and electricity generation slowed the growth of CO2 emissions.
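
    The logarithmic-mean weighting at the core of additive LMDI-I can be shown on a two-factor identity C = (C/E) * E; the Python sketch below uses invented numbers and omits the additional factors (including the KPEQ and KC supply-efficiency terms) tracked in the paper, but the two effects sum exactly to the emission change, which is the defining property of LMDI.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b); equals a in the limit a == b."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

E0, ET = 100.0, 140.0            # energy use, base and target year
C0, CT = 250.0, 320.0            # CO2 emissions, base and target year
I0, IT = C0 / E0, CT / ET        # carbon intensity of energy

L = logmean(CT, C0)
effect_intensity = L * np.log(IT / I0)    # contribution of intensity change
effect_activity = L * np.log(ET / E0)     # contribution of energy growth
print(effect_intensity + effect_activity, "==", CT - C0)   # exact additive split
```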

  17. Analysis of influence mechanism of energy-related carbon emissions in Guangdong: evidence from regional China based on the input-output and structural decomposition analysis.

    Science.gov (United States)

    Wang, Changjian; Wang, Fei; Zhang, Xinlin; Deng, Haijun

    2017-11-01

    It is important to analyze the influence mechanism of energy-related carbon emissions from a regional perspective to effectively achieve reductions in energy consumption and carbon emissions in China. Based on an "energy-economy-carbon emissions" hybrid input-output analysis framework, this study conducted a structural decomposition analysis (SDA) of the factors influencing carbon emissions in Guangdong Province, presenting a systems-based examination of the direct and indirect drivers of regional emissions. (1) Analysis of the direct effects indicated that the main factors driving increasing carbon emissions were economic and population growth, while carbon emission intensity was the main factor restraining emissions growth. (2) Analysis of the indirect effects showed that international and interprovincial trade significantly affected total carbon emissions. (3) Analysis of the effects of different final demands on the carbon emissions of the industrial sector indicated that the increase in carbon emissions arising from international and interprovincial trade is mainly concentrated in energy- and carbon-intensive industries. (4) Guangdong had to bear a certain amount of carbon emissions in developing its export-oriented economy because of industry transfer arising from economic globalization, pointing to the existence of a "carbon leakage" problem. At the same time, interprovincial exports and imports led Guangdong to transfer part of its carbon emissions to other provinces, resulting in "carbon transfer."
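
    The accounting backbone of such an SDA is the environmentally extended Leontief model, in which total emissions are f (I - A)^{-1} y; the three-sector Python sketch below uses invented coefficients purely to show the mechanics, with the actual decomposition of changes between two years left to the SDA.

```python
import numpy as np

A = np.array([[0.1, 0.2, 0.0],      # technical coefficients (inputs per unit output)
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
f = np.array([2.0, 0.5, 1.0])       # direct CO2 per unit output by sector
y = np.array([50.0, 80.0, 30.0])    # final demand (e.g., consumption plus exports)

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
total = f @ L @ y
print("emissions embodied in final demand:", total)
# An SDA then attributes the change in `total` between two years to changes
# in f, L, and y, e.g., by averaging over decomposition orderings.
```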

  18. Energy saving analysis and management modeling based on index decomposition analysis integrated energy saving potential method: Application to complex chemical processes

    International Nuclear Information System (INIS)

    Geng, Zhiqiang; Gao, Huachao; Wang, Yanqing; Han, Yongming; Zhu, Qunxiong

    2017-01-01

    Highlights: • An integrated framework that combines IDA with the energy-saving potential method is proposed. • An energy saving analysis and management framework for complex chemical processes is obtained. • The proposed method is efficient for energy optimization and carbon emission reduction in complex chemical processes. - Abstract: Energy saving and management of complex chemical processes play a crucial role in sustainable development. In order to analyze the effects of technology, management level, and production structure on energy efficiency and energy saving potential, this paper proposes a novel integrated framework that combines index decomposition analysis (IDA) with the energy saving potential method. The IDA method can effectively obtain the energy activity, energy hierarchy and energy intensity effects in a data-driven way to reflect the impact of energy usage. The energy saving potential method can verify the correctness of the improvement direction proposed by the IDA method. Meanwhile, energy efficiency improvement, energy consumption reduction and energy savings can be visually discovered within the proposed framework. A demonstration analysis of ethylene production verified the practicality of the proposed method, and corresponding improvements for ethylene production can be derived from it. The energy efficiency index and the energy saving potential of the worst-performing months can be increased by 6.7% and 7.4%, respectively, and carbon emissions can be reduced by 7.4-8.2%.

  19. Spatially and size selective synthesis of Fe-based nanoparticles on ordered mesoporous supports as highly active and stable catalysts for ammonia decomposition.

    Science.gov (United States)

    Lu, An-Hui; Nitz, Joerg-Joachim; Comotti, Massimiliano; Weidenthaler, Claudia; Schlichte, Klaus; Lehmann, Christian W; Terasaki, Osamu; Schüth, Ferdi

    2010-10-13

    Uniform and highly dispersed γ-Fe(2)O(3) nanoparticles with a diameter of ∼6 nm supported on CMK-5 carbons and C/SBA-15 composites were prepared via simple impregnation and thermal treatment. The nanostructures of these materials were characterized by XRD, Mössbauer spectroscopy, XPS, SEM, TEM, and nitrogen sorption. Due to the confinement effect of the mesoporous ordered matrices, γ-Fe(2)O(3) nanoparticles were fully immobilized within the channels of the supports. Even at high Fe-loadings (up to about 12 wt %) on CMK-5 carbon no iron species were detected on the external surface of the carbon support by XPS analysis and electron microscopy. Fe(2)O(3)/CMK-5 showed the highest ammonia decomposition activity of all previously described Fe-based catalysts in this reaction. Complete ammonia decomposition was achieved at 700 °C and space velocities as high as 60,000 cm(3) g(cat)(-1) h(-1). At a space velocity of 7500 cm(3) g(cat)(-1) h(-1), complete ammonia conversion was maintained at 600 °C for 20 h. After the reaction, the immobilized γ-Fe(2)O(3) nanoparticles were found to be converted to much smaller nanoparticles (γ-Fe(2)O(3) and a small fraction of nitride), which were still embedded within the carbon matrix. The Fe(2)O(3)/CMK-5 catalyst is much more active than the benchmark NiO/Al(2)O(3) catalyst at high space velocity, due to its highly developed mesoporosity. γ-Fe(2)O(3) nanoparticles supported on carbon-silica composites are structurally much more stable over extended periods of time but less active than those supported on carbon. TEM observation reveals that iron-based nanoparticles penetrate through the carbon layer and then are anchored on the silica walls, thus preventing them from moving and sintering. In this way, the stability of the carbon-silica catalyst is improved. Comparison with the silica supported iron oxide catalyst reveals that the presence of a thin layer of carbon is essential for increased catalytic activity.

  20. The vestibulocochlear bases for wartime posttraumatic stress disorder manifestations.

    Science.gov (United States)

    Tigno, T A; Armonda, R A; Bell, R S; Severson, M A

    2017-09-01

    Preliminary findings based on earlier retrospective studies of 229 wartime head injuries managed by the Walter Reed Army Medical Center (WRAMC)/National Naval Medical Center (NNMC) Neurosurgery Service during the period 2003-08 detected a threefold rise in posttraumatic stress disorder (PTSD) manifestations (10.45%) among traumatic brain injuries (TBI) with concomitant vestibulocochlear injuries, compared to 3% for the TBI group without vestibulocochlear damage (VCD), prompting the authors to undertake a more focused study of the vestibulo-auditory pathway in explaining the development of PTSD manifestations among the mostly blast-exposed head-injured. The subsequent historical review of PTSD pathophysiology studies, the evidence for an expanded and dominant vestibular system, the vascular vulnerability of the vestibular nerves in stress states, and the period of cortical imprinting led to the formation of a coherent hypothesis that uses the vestibulocochlear pathway to explain the development of PTSD manifestations. Neuroimaging and neurophysiologic tests to further validate the vestibulocochlear concept of the development of PTSD manifestations are proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A pilot study on mindfulness based stress reduction for smokers

    Directory of Open Access Journals (Sweden)

    Baker Timothy B

    2007-01-01

    Background: Mindfulness means paying attention in the present moment, non-judgmentally, without commentary or decision-making. We report results of a pilot study designed to test the feasibility of using Mindfulness Based Stress Reduction (MBSR), with minor modifications, as a smoking intervention. Methods: MBSR instructors provided instruction in mindfulness in eight weekly group sessions. Subjects attempted smoking cessation during week seven without pharmacotherapy. Smoking abstinence was tested six weeks after the smoking quit day with a carbon monoxide breath test and 7-day smoking calendars. Questionnaires were administered to evaluate changes in stress and affective distress. Results: 18 subjects enrolled in the intervention, with an average smoking history of 19.9 cigarettes per day for 26.4 years. At the 6-week post-quit visit, 10 of 18 subjects (56%) achieved biologically confirmed 7-day point-prevalent smoking abstinence. Compliance with meditation was positively associated with smoking abstinence and decreases in stress and affective distress. Discussion and conclusion: The results of this study suggest that mindfulness training may show promise for smoking cessation and warrants additional study in a larger comparative trial.

  2. A study on the thermal decomposition behavior of derivatives of 1,5-diamino-1H-tetrazole (DAT): A new family of energetic heterocyclic-based salts

    International Nuclear Information System (INIS)

    Fischer, Gerd; Holl, Gerhard; Klapoetke, Thomas M.; Weigand, Jan J.

    2005-01-01

    The thermal decomposition of the highly energetic 1,5-diamino-4-methyl-1H-tetrazolium nitrate (2b), 1,5-diamino-4-methyl-1H-tetrazolium dinitramide (2c) and 1,5-diamino-4-methyl-1H-tetrazolium azide (2d) was investigated by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). Mass spectrometry and IR spectroscopy were used to identify the gaseous products. Decomposition of 2c and 2d appears to be initiated by a proton transfer to form the corresponding acids (HN3O4 and HN3, respectively), whereas in the case of 2b a methyl group transfer to form MeONO2 is observed as the initial process. The gaseous products after the exothermic decomposition are comparable and are in agreement with the decomposition pathways discussed for the corresponding compounds. For all processes, possible decomposition schemes are presented. The decomposition temperatures of 2b and 2c are significantly higher than that of 2d, as supported by activation energies evaluated using the methods of Ozawa and Kissinger.

  3. Stress

    Science.gov (United States)

    ... taking care of an aging parent. With mental stress, the body pumps out hormones to no avail. Neither fighting ... with type 1 diabetes. This difference makes sense. Stress blocks the body from releasing insulin in people with type 2 ...

  4. Acid and base stress and transcriptomic responses in Bacillus subtilis.

    Science.gov (United States)

    Wilks, Jessica C; Kitko, Ryan D; Cleeton, Sarah H; Lee, Grace E; Ugwu, Chinagozi S; Jones, Brian D; BonDurant, Sandra S; Slonczewski, Joan L

    2009-02-01

    Acid and base environmental stress responses were investigated in Bacillus subtilis. B. subtilis AG174 cultures in buffered potassium-modified Luria broth were switched from pH 8.5 to pH 6.0 and recovered growth rapidly, whereas cultures switched from pH 6.0 to pH 8.5 showed a long lag time. Log-phase cultures grown at pH 6.0 survived 60 to 100% at pH 4.5, whereas cells grown at pH 7.0 survived far less well; growth in a moderate acid or base thus induced adaptation to a more extreme acid or base, respectively. Expression indices from Affymetrix chip hybridization were obtained for 4,095 protein-encoding open reading frames of B. subtilis grown at external pH 6, pH 7, and pH 9. Growth at pH 6 upregulated acetoin production (alsDS), dehydrogenases (adhA, ald, fdhD, and gabD), and decarboxylases (psd and speA). Acid upregulated malate metabolism (maeN), metal export (czcDO and cadA), oxidative stress (catalase katA; OYE family namA), and the SigX extracytoplasmic stress regulon. Growth at pH 9 upregulated arginine catabolism (roc), which generates organic acids, glutamate synthase (gltAB), polyamine acetylation and transport (blt), the K(+)/H(+) antiporter (yhaTU), and cytochrome oxidoreductases (cyd, ctaACE, and qcrC). The SigH, SigL, and SigW regulons were upregulated at high pH. Overall, greater genetic adaptation was seen at pH 9 than at pH 6, which may explain the lag time required for the growth shift to high pH. Low external pH favored dehydrogenases and decarboxylases that may consume acids and generate basic amines, whereas high external pH favored catabolism that generates acids.

  5. A Robust Iris Identification System Based on Wavelet Packet Decomposition and Local Comparisons of the Extracted Signatures

    Directory of Open Access Journals (Sweden)

    Rossant Florence

    2010-01-01

    This paper presents a complete iris identification system including three main stages: iris segmentation, signature extraction, and signature comparison. An accurate and robust pupil and iris segmentation process, taking eyelid occlusions into account, is first detailed and evaluated. Then, an original wavelet-packet-based signature extraction method and a novel identification approach, based on the fusion of local distance measures, are proposed. Performance measurements validating the proposed iris signature and demonstrating the benefit of our local signature comparison are provided. Moreover, an exhaustive evaluation of robustness with regard to the acquisition conditions attests to the high performance and reliability of our system. Tests have been conducted on two different databases, the well-known CASIA database (V3) and our ISEP database. Finally, a comparison of the performance of our system with published results is given and discussed.

  6. A memory-based model of posttraumatic stress disorder

    DEFF Research Database (Denmark)

    Rubin, David C.; Berntsen, Dorthe; Johansen, Marlene Klindt

    2008-01-01

    In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000). The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder.

  7. Stochastic Economic Dispatch with Wind using Versatile Probability Distribution and L-BFGS-B Based Dual Decomposition

    DEFF Research Database (Denmark)

    Huang, Shaojun; Sun, Yuanzhang; Wu, Qiuwei

    2018-01-01

    This paper focuses on economic dispatch (ED) in power systems with intermittent wind power, a critical issue for future power systems. A stochastic ED problem is formed based on the recently proposed versatile probability distribution (VPD) of wind power, and the problem is then analyzed...
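
    The dual-decomposition mechanics with an L-BFGS-B update can be illustrated on a deterministic toy ED problem with quadratic costs and a single power-balance constraint, as sketched below with SciPy; the paper's stochastic formulation built on the versatile probability distribution adds wind-scenario terms that this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([0.02, 0.04, 0.03])     # quadratic cost coefficients
b = np.array([10.0, 8.0, 9.0])       # linear cost coefficients
pmin, pmax = np.zeros(3), np.array([100.0, 80.0, 120.0])
D = 180.0                            # demand to be balanced

def subproblem(lam):
    """Each unit minimizes C_i(p) - lam * p analytically within its bounds."""
    return np.clip((lam - b) / (2.0 * a), pmin, pmax)

def neg_dual(lam_arr):
    """Negative dual function; maximizing the concave dual = minimizing this."""
    lam = lam_arr[0]
    p = subproblem(lam)
    return -(np.sum(a * p ** 2 + b * p - lam * p) + lam * D)

res = minimize(neg_dual, x0=[10.0], method="L-BFGS-B")
p_opt = subproblem(res.x[0])
print("dispatch:", p_opt, "total:", p_opt.sum(), "vs demand:", D)
```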

  8. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and economy. China has become the largest energy consumer in the world, so establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance and can provide a scientific basis for China to formulate reasonable energy production plans and energy-saving and emissions-reduction policies for sustainable development. To forecast energy consumption in China accurately while considering its main driving factors, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. First, considering population, GDP (Gross Domestic Product), industrial structure (the proportion of secondary-industry added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports, and other influencing factors, the main driving factors of energy consumption are screened as model inputs according to their grey relational degrees, realizing feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. Finally, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 as the test set...

  9. Spectral Tensor-Train Decomposition

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.

    2016-01-01

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition ... adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online (http://pypi.python.org/pypi/TensorToolbox/).

  10. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, of which the Reynolds number is 2800 based on bulk velocity upstream of the step and step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients, measured by a near-wall measurement technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. In order to determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote such approaches as LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of measurement models, we introduce multi-time POD-based estimations in which past and future information of the wall-gradient events is used separately or combined. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time flow estimation of the velocity field based on wall-gradient data
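
    The snapshot-POD and linear-stochastic-estimation (LSE) steps can be sketched compactly in NumPy, as below; the synthetic snapshot matrix and wall-gradient signals are placeholders for the PIV data, and the kernel-ridge and quadratic variants discussed in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
n_points, n_snaps, n_sensors, n_modes = 500, 200, 8, 4

U = rng.standard_normal((n_points, n_snaps))      # snapshot matrix (fluctuations)
U -= U.mean(axis=1, keepdims=True)

# Snapshot POD via thin SVD: columns of Phi are spatial modes,
# rows of coeffs are the temporal coefficients.
Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
coeffs = (s[:, None] * Vt)[:n_modes]              # shape (n_modes, n_snaps)

# Wall-gradient "measurements" correlated with the flow (synthetic here).
W = rng.standard_normal((n_sensors, n_modes)) @ coeffs \
    + 0.1 * rng.standard_normal((n_sensors, n_snaps))

# LSE: least-squares linear map from sensor signals to POD coefficients.
B, *_ = np.linalg.lstsq(W.T, coeffs.T, rcond=None)
coeffs_est = (W.T @ B).T
err = np.linalg.norm(coeffs_est - coeffs) / np.linalg.norm(coeffs)
print("relative estimation error of low-order coefficients:", err)

# A low-order estimate of the flow field is then Phi[:, :n_modes] @ coeffs_est.
```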

  11. Effectiveness of Mindfulness-Based Stress Reduction (MBSR) in Stress and Fatigue in Patients with Multiple Sclerosis (MS)

    Directory of Open Access Journals (Sweden)

    Ebrahimi Alisaleh

    2016-07-01

    Full Text Available Multiple sclerosis (MS) can lead to mental and behavioral problems such as stress and fatigue, and controlling these problems is essential. Hence, this study examined the effectiveness of mindfulness-based stress reduction on stress and fatigue symptoms in patients with MS. The study is quasi-experimental, with a pretest-posttest design and a control group. The statistical population consists of all patients with multiple sclerosis referred to the Iran MS Association by 2016. Sampling was by convenience, based on the inclusion criteria: among patients who scored higher than 21.8 on the stress inventory and higher than 5.1 on the fatigue inventory, 30 were randomly selected and assigned to two groups of 15. The experimental group attended a mindfulness-based stress reduction (MBSR) training course of 8 sessions, 2 hours per session, whereas no intervention was given to the control group. All patients in both groups completed the stress and fatigue inventories before and after the intervention. The data were analyzed using MANCOVA in SPSS 22. The results show a significant difference between the two groups in stress and fatigue after the intervention (p<0.001). According to these results, mindfulness-based stress reduction can help reduce symptoms of stress and fatigue in patients with MS.

  12. Effects of endogenous factors on regional land-use carbon emissions based on the Grossman decomposition model: a case study of Zhejiang Province, China.

    Science.gov (United States)

    Wu, Cifang; Li, Guan; Yue, Wenze; Lu, Rucheng; Lu, Zhangwei; You, Heyuan

    2015-02-01

    The impact of land-use change on greenhouse gas emissions has become a core issue in current studies on global change and the carbon cycle, and a comprehensive evaluation of the effects of land-use changes on carbon emissions is necessary. This paper applied the Grossman decomposition model to estimate the scale, structural, and management effects of land-use carbon emissions based on final energy consumption, by establishing the relationship between land-use types and the carbon emissions of energy consumption. Land-use carbon emissions increased from 169.5624 million tons in 2000 to 637.0984 million tons in 2010, an average annual growth rate of 14.15%. Meanwhile, land-use carbon intensity increased from 17.59 t/ha in 2000 to 64.42 t/ha in 2010, an average annual growth rate of 13.86%. The results indicated that rapid industrialization and urbanization in Zhejiang Province promptly increased urban and industrial land, which consequently drove the extensive growth of land-use carbon emissions. The structural and management effects did not mitigate land-use carbon emissions; on the contrary, both factors evidently contributed to the growth of carbon emissions because of the rigid demand for energy-intensive land-use types and the absence of land management. The results call for policies that optimize land-use structures and strengthen land-use management.

  13. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    Science.gov (United States)

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis is characterized by a four-step reaction, with H2O, CH, OH, CO2 and CO groups the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), correlating the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test and residual normal probability plots implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
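
    As a rough illustration of the Sobol' variance decomposition step, the following sketch uses the SALib package with an invented three-factor problem and a placeholder response function standing in for the fitted regression models for mass loss and reaction heat.

```python
# Saltelli sampling followed by Sobol' analysis; factor names, bounds and the
# response surface are all placeholders, not values from the study.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["blend_ratio", "heating_rate", "final_temp"],
    "bounds": [[0.0, 1.0], [5.0, 40.0], [400.0, 900.0]],
}

def response(x):
    # stand-in for the best-fit multiple regression model
    return 0.6 * x[0] + 0.002 * x[1] * x[2] + 0.1 * x[0] * x[1]

params = saltelli.sample(problem, 1024)
y = np.apply_along_axis(response, 1, params)
si = sobol.analyze(problem, y)
print(si["S1"])   # first-order indices
print(si["ST"])   # total-order indices (ST - S1 indicates interactions)
```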

  15. Extraction Method of Driver’s Mental Component Based on Empirical Mode Decomposition and Approximate Entropy Statistic Characteristic in Vehicle Running State

    Directory of Open Access Journals (Sweden)

    Shuan-Feng Zhao

    2017-01-01

    Full Text Available The essence of driver fatigue monitoring technology is to capture and analyze driver behavior information, such as eye, face, heart, and EEG activity during driving. However, ECG and EEG monitoring are limited by the electrodes they require and are not commercially practical. The most common fatigue detection method is the analysis of driver behavior, that is, determining whether the driver is tired by recording and analyzing the characteristics of steering-wheel and brake operation. The driver usually adjusts his or her actions based on the observed road conditions, so road information is directly contained in the vehicle driving state; to judge driving behavior from vehicle driving-state information, the first task is to remove the road information from the vehicle driving-state data. Therefore, this paper proposes an effective intrinsic mode function selection method based on the approximate entropy of empirical mode decomposition, considering the frequency distributions of road and vehicle information and the unsteady, nonlinear characteristics of the driver's closed-loop driving system in vehicle driving-state data. The objective is to extract the effective component of the driving behavior information and to weaken the road information component. Finally, the effectiveness of the proposed method is verified by simulated driving experiments.
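
    The IMF-selection idea can be sketched as follows: decompose a vehicle-state signal by EMD, score each intrinsic mode function with approximate entropy, and retain the more irregular components as the driver-behavior part. The PyEMD dependency, the input file name and the entropy threshold are assumptions for illustration only.

```python
# EMD followed by approximate-entropy scoring of each IMF; IMFs whose
# entropy exceeds a threshold are kept as the driver-behavior component.
import numpy as np
from PyEMD import EMD

def approx_entropy(x, m=2, r=None):
    """Approximate entropy (Pincus' definition) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return np.log((dist <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

signal = np.loadtxt("steering_angle.txt")             # hypothetical recording
imfs = EMD().emd(signal)
scores = [approx_entropy(imf[:500]) for imf in imfs]  # truncate for speed
driver_component = sum(imf for imf, s in zip(imfs, scores) if s > 0.3)
```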

  16. Energy use in the Greek manufacturing sector: A methodological framework based on physical indicators with aggregation and decomposition analysis

    International Nuclear Information System (INIS)

    Salta, Myrsine; Polatidis, Heracles; Haralambopoulos, Dias

    2009-01-01

    A bottom-up methodological framework was developed and applied for the period 1985-2002 to selected manufacturing sub-sectors in Greece, namely food, beverages and tobacco, iron and steel, non-ferrous metals, non-metallic minerals, and paper. Disaggregate physical data were aggregated according to their specific energy consumption (SEC) values, and physical energy efficiency indicators were estimated. The Logarithmic Mean Divisia Index method was also used, and the effects of production, structure and energy efficiency on changes in sub-sectoral manufacturing energy use were further assessed. Primary physical energy efficiency improved by 28% for the iron and steel industry and by 9% for the non-metallic minerals industry, compared to the base year 1990. For the food, beverages and tobacco and the paper sub-sectors, primary efficiency deteriorated by 20% and 15%, respectively; finally, electricity efficiency deteriorated by 7% for the non-ferrous metals. Sub-sectoral energy use is mainly driven by production output and energy efficiency changes. Sensitivity analysis showed that alternative SEC values do not influence the results, whereas the selected base year is more critical for this analysis. Significant efficiency improvements refer to 'heavy' industry; 'light' industry needs further attention from energy policy to modernize its production plants and improve its efficiency.

  17. Energy use in the Greek manufacturing sector: A methodological framework based on physical indicators with aggregation and decomposition analysis

    Energy Technology Data Exchange (ETDEWEB)

    Salta, Myrsine; Polatidis, Heracles; Haralambopoulos, Dias [Energy Management Laboratory, Department of Environment, University of the Aegean, University Hill, Mytilene 81100 (Greece)

    2009-01-15

    A bottom-up methodological framework was developed and applied for the period 1985-2002 to selected manufacturing sub-sectors in Greece, namely food, beverages and tobacco, iron and steel, non-ferrous metals, non-metallic minerals, and paper. Disaggregate physical data were aggregated according to their specific energy consumption (SEC) values, and physical energy efficiency indicators were estimated. The Logarithmic Mean Divisia Index method was also used, and the effects of production, structure and energy efficiency on changes in sub-sectoral manufacturing energy use were further assessed. Primary physical energy efficiency improved by 28% for the iron and steel industry and by 9% for the non-metallic minerals industry, compared to the base year 1990. For the food, beverages and tobacco and the paper sub-sectors, primary efficiency deteriorated by 20% and 15%, respectively; finally, electricity efficiency deteriorated by 7% for the non-ferrous metals. Sub-sectoral energy use is mainly driven by production output and energy efficiency changes. Sensitivity analysis showed that alternative SEC values do not influence the results, whereas the selected base year is more critical for this analysis. Significant efficiency improvements refer to 'heavy' industry; 'light' industry needs further attention from energy policy to modernize its production plants and improve its efficiency. (author)
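
    For readers unfamiliar with the Logarithmic Mean Divisia Index method used in both versions of this record, the additive LMDI-I decomposition of sub-sectoral energy use E = Σ_i Q·S_i·I_i into production (activity), structure and intensity effects can be sketched with invented two-sub-sector data:

```python
# Additive LMDI-I decomposition; the index identity guarantees that the
# three effects sum exactly to the observed change in energy use.
import numpy as np

def logmean(a, b):
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

E0, ET = np.array([50.0, 30.0]), np.array([55.0, 25.0])  # sub-sector energy, PJ
Q0, QT = 100.0, 130.0                                    # production output
S0, ST = np.array([0.6, 0.4]), np.array([0.7, 0.3])      # structure shares
I0, IT = E0 / (Q0 * S0), ET / (QT * ST)                  # physical intensities

w = np.array([logmean(e1, e0) for e0, e1 in zip(E0, ET)])  # LMDI weights
activity  = (w * np.log(QT / Q0)).sum()
structure = (w * np.log(ST / S0)).sum()
intensity = (w * np.log(IT / I0)).sum()
print(activity, structure, intensity, ET.sum() - E0.sum())  # effects sum to dE
```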

  18. Assessment and Decomposition of Total Factor Energy Efficiency: An Evidence Based on Energy Shadow Price in China

    Directory of Open Access Journals (Sweden)

    Peihao Lai

    2016-04-01

    Full Text Available By adopting an energy-input based directional distance function, we calculated the shadow price of four types of energy (i.e., coal, oil, gas and electricity) in 30 areas of China from 1998 to 2012. Moreover, a macro energy efficiency index for China was estimated and decomposed into intra-provincial technical efficiency, allocation efficiency of the energy input structure, and inter-provincial energy allocation efficiency. The results show that total energy efficiency has decreased in recent years: intra-provincial technical efficiency has dropped markedly and the extensive mode of energy consumption has risen, while energy structure and allocation improve only slowly. Meanwhile, the lack of an integrated energy market leads to losses of energy efficiency. Further improvement of market allocation and structural adjustment plays a pivotal role in increasing energy efficiency.
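
    The energy-input based directional distance function behind this record can be posed as a linear program per evaluated unit: expand output and contract the energy input along a direction g until the production frontier is reached. The following is a hedged single-input, single-output sketch with invented data; shadow prices would follow from the dual values of the constraints, which are not extracted here.

```python
# Directional distance function as an LP with SciPy; beta = 0 means the unit
# lies on the frontier, larger beta means more inefficiency.
import numpy as np
from scipy.optimize import linprog

def directional_distance(E, Y, k, g_e=1.0, g_y=1.0):
    """E: energy inputs, Y: outputs (one value per unit), k: evaluated unit."""
    n = len(E)
    c = np.concatenate(([-1.0], np.zeros(n)))       # maximize beta
    A_ub = np.vstack([np.concatenate(([g_e], E)),   # sum lam*E + beta*g_e <= E_k
                      np.concatenate(([g_y], -Y))]) # -sum lam*Y + beta*g_y <= -Y_k
    b_ub = np.array([E[k], -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

E = np.array([10.0, 8.0, 12.0, 9.0])   # invented provincial energy inputs
Y = np.array([5.0, 6.0, 5.5, 4.0])     # invented outputs
print([round(directional_distance(E, Y, k), 3) for k in range(len(E))])
```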

  19. Design and Lab Experiment of a Stress Detection Service based on Mouse Movements

    OpenAIRE

    Kowatsch, Tobias; Wahle, Fabian; Filler, Andreas

    2017-01-01

    Workplace stress can negatively affect the health condition of employees and with it, the performance of organizations. Although there exist approaches to measure work-related stress, two major limitations are the low resolution of stress data and its obtrusive measurement. The current work applies design science research with the goal to design, implement and evaluate a Stress Detection Service (SDS) that senses the degree of work-related stress solely based on mouse movements of knowledge w...
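
    Since the full text above is truncated, the following is only a generic sketch of how raw mouse events might be turned into candidate stress features (speed, jerk and pause statistics); the feature set is illustrative and not taken from the paper.

```python
# Candidate mouse-movement features computed from one work session.
import numpy as np

def mouse_features(t, x, y):
    """t: event timestamps in seconds; x, y: cursor coordinates in pixels."""
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt
    accel = np.diff(speed) / dt[1:]
    jerk = np.diff(accel) / dt[2:]
    pauses = dt[dt > 0.5]                    # gaps longer than 500 ms
    return {
        "mean_speed": speed.mean(),
        "speed_var": speed.var(),
        "mean_abs_jerk": np.abs(jerk).mean(),
        "pause_rate": len(pauses) / (t[-1] - t[0]),
    }
```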

  20. GSM base station electromagnetic radiation and oxidative stress in rats.

    Science.gov (United States)

    Yurekli, Ali Ihsan; Ozkan, Mehmed; Kalkan, Tunaya; Saybasili, Hale; Tuncel, Handan; Atukeren, Pinar; Gumustas, Koray; Seker, Selim

    2006-01-01

    The ever increasing use of cellular phones and the increasing number of associated base stations are becoming a widespread source of nonionizing electromagnetic radiation. Some biological effects are likely to occur even at low-level EM fields. In this study, a gigahertz transverse electromagnetic (GTEM) cell was used as an exposure environment for plane wave conditions of far-field free space EM field propagation at the GSM base transceiver station (BTS) frequency of 945 MHz, and effects on oxidative stress in rats were investigated. When EM fields at a power density of 3.67 W/m2 (specific absorption rate = 11.3 mW/kg), which is well below current exposure limits, were applied, MDA (malondialdehyde) level was found to increase and GSH (reduced glutathione) concentration was found to decrease significantly (p < 0.0001). Additionally, there was a less significant (p = 0.0190) increase in SOD (superoxide dismutase) activity under EM exposure.

  1. Nested grids ILU-decomposition (NGILU)

    NARCIS (Netherlands)

    Ploeg, A. van der; Botta, E.F.F.; Wubs, F.W.

    1996-01-01

    A preconditioning technique is described which shows, in many cases, grid-independent convergence. This technique only requires an ordering of the unknowns based on the different levels of multigrid, and an incomplete LU-decomposition based on a drop tolerance. The method is demonstrated on a
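
    Although the abstract is truncated, a drop-tolerance incomplete LU preconditioner of the general kind it describes can be sketched with SciPy; the multigrid-level reordering of unknowns that characterizes NGILU is not reproduced here.

```python
# ILU with a drop tolerance used as a GMRES preconditioner on a 1-D
# Poisson test matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU with dropping
M = LinearOperator(A.shape, ilu.solve)          # preconditioner action
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))          # info == 0 means converged
```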

  2. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  3. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  4. Tensor decomposition-based unsupervised feature extraction applied to matrix products for multi-view data processing

    Science.gov (United States)

    2017-01-01

    In the current era of big data, the amount of data available is continuously increasing. Both the number and types of samples, or features, are on the rise. The mixing of distinct features often makes interpretation more difficult, whereas separate analysis of individual types requires subsequent integration. A tensor is a useful framework to deal with distinct types of features in an integrated manner without mixing them. On the other hand, tensor data are not easy to obtain, since they require measurements of huge numbers of combinations of distinct features; if there are m kinds of features, each of which has N dimensions, the number of measurements needed is as large as N^m, which is often too large to measure. In this paper, I propose a new method in which a tensor is generated from individual features without combinatorial measurements, and the generated tensor is decomposed back into matrices, by which unsupervised feature extraction is performed. In order to demonstrate the usefulness of the proposed strategy, it was applied to synthetic data as well as three omics datasets, where it outperformed other matrix-based methodologies. PMID:28841719
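
    The construction described here, building a tensor from individual feature matrices without combinatorial measurements and then decomposing it, can be sketched as follows; the shapes and the use of a plain HOSVD are illustrative assumptions, not the paper's exact procedure.

```python
# Form T[i, j, k] = sum_s X1[s, i] * X2[s, j] * X3[s, k] from three feature
# matrices sharing the sample axis s, then extract Tucker/HOSVD factors.
import numpy as np

rng = np.random.default_rng(0)
X1, X2, X3 = (rng.standard_normal((20, d)) for d in (10, 12, 8))
T = np.einsum("si,sj,sk->ijk", X1, X2, X3)

def mode_factors(T, ranks=(3, 3, 3)):
    """Leading left singular vectors of each mode unfolding."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        u = np.linalg.svd(unfolding, full_matrices=False)[0]
        factors.append(u[:, :r])
    return factors

U1, U2, U3 = mode_factors(T)
core = np.einsum("ijk,ia,jb,kc->abc", T, U1, U2, U3)   # Tucker core
```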

  5. PERFORMANCE ANALYSIS BETWEEN EXPLICIT SCHEDULING AND IMPLICIT SCHEDULING OF PARALLEL ARRAY-BASED DOMAIN DECOMPOSITION USING OPENMP

    Directory of Open Access Journals (Sweden)

    MOHAMMED FAIZ ABOALMAALY

    2014-10-01

    Full Text Available With the continuous revolution of multicore architectures, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Parallel computing can be performed in two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute data among processors; these techniques are technically known as automatic distribution (scheduling). However, due to the wide range of purposes, variation in data types, amount of distributed data, possible extra computational overhead and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by comparing automatic distribution with our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.

  6. Modeling the shear rate and pressure drop in a hydrodynamic cavitation reactor with experimental validation based on KI decomposition studies.

    Science.gov (United States)

    Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente

    2015-01-01

    A mathematical model describing the shear rate and pressure variation in the complex flow field created in a hydrodynamic cavitation reactor (a stator and rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated on the basis of high-accuracy compact difference schemes and the Navier-Stokes equations. The evolution of streamline structures during rotation, the pressure field and the shear rate of a Newtonian fluid flow have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure field differ considerably with the magnitude of the rotation velocity of the rotor. It was observed that the area of the high-shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity compared to that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm; a maximum of 50 ppm of liberated iodine was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity occurs when the rotation speed is around 2200-2500 RPM. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  8. Effect of the substitutional groups on the electrochemistry, kinetic of thermal decomposition and kinetic of substitution of some uranyl Schiff base complexes

    Energy Technology Data Exchange (ETDEWEB)

    Asadi, Zahra; Nasrollahi, Rahele; Ranjkeshshorkaei, Mohammad; Firuzabadi, Fahimeh Dehghani [Shiraz Univ. (Iran, Islamic Republic of). Chemistry Dept.; Dusek, Michal; Fejfarova, Karla [ASCR, Prague (Czech Republic). Inst. of Physics

    2016-05-15

    Uranyl(VI) complexes, [UO2(X-saloph)(solvent)], where saloph denotes N,N'-bis(salicylidene)-1,2-phenylenediamine and X = NO2, Cl, Me, H, were synthesized and characterized by 1H NMR, IR and UV-Vis spectroscopy, thermal gravimetry (TG), cyclic voltammetry, elemental analysis (C, H, N) and X-ray crystallography. X-ray crystallography of [UO2(4-nitro-saloph)(DMF)] revealed coordination of the uranyl by the tetradentate Schiff base ligand and one solvent molecule, resulting in seven-coordinate uranium. The complex [UO2(4-nitro-saloph)(DMF)] was also synthesized in nano form; transmission electron microscopy images showed nanoparticles with sizes between 30 and 35 nm. The TG method and analysis of Coats-Redfern plots revealed that the kinetics of thermal decomposition of the complexes is first-order in all stages. The kinetics and mechanism of the exchange reaction of the coordinated solvent with tributylphosphine were investigated by a spectrophotometric method. The second-order rate constants at four temperatures and the activation parameters showed an associative mechanism for all corresponding complexes with the following trend: 4-Nitro > 4-Cl > H > 4-Me. It was concluded that the steric and electronic properties of the complexes were important for the reaction rate. To analyze the anticancer properties of the uranyl Schiff base complexes, cell culture and MTT assays were carried out; the results showed a reduction of Jurkat cell line concentration across the complexes.
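
    As a brief illustration of the Coats-Redfern analysis mentioned above: for a first-order process, ln[-ln(1 - a)/T^2] plotted against 1/T is approximately linear with slope -Ea/R, from which the apparent activation energy follows. The conversion data in this sketch are invented.

```python
# Coats-Redfern linearization for a first-order thermal decomposition step.
import numpy as np

R = 8.314                                  # J mol^-1 K^-1
T = np.linspace(500.0, 700.0, 20)          # K, invented TG temperatures
alpha = np.linspace(0.05, 0.85, 20)        # invented conversion values

y = np.log(-np.log(1.0 - alpha) / T**2)
slope, intercept = np.polyfit(1.0 / T, y, 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea / 1000:.1f} kJ/mol")
```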

  9. Dolomite decomposition under CO2

    International Nuclear Information System (INIS)

    Guerfa, F.; Bensouici, F.; Barama, S.E.; Harabi, A.; Achour, S.

    2004-01-01

    Full text: Dolomite (MgCa(CO3)2) is one of the most abundant mineral species on the surface of the planet; it occurs in sedimentary rocks. MgO, CaO and doloma (a phase mixture of MgO and CaO obtained from the mineral dolomite) based materials are attractive steel-making refractories because of their potential cost effectiveness and worldwide abundance; more recently, MgO has also been used as a protective layer in plasma screen manufacturing. The crystal structure of dolomite is that of a rhombohedral carbonate, with alternating layers of Mg2+ and Ca2+ ions. It dissociates depending on the temperature according to the following reactions: MgCa(CO3)2 → MgO + CaO + 2CO2 and MgCa(CO3)2 → MgO + CaCO3 + CO2. The latter reaction may be considered a first step for MgO production. Differential thermal analysis (DTA) was used to monitor dolomite decomposition, and X-ray diffraction (XRD) was used to elucidate the thermal decomposition of dolomite according to these reactions; samples were heated to specific temperatures for specific holding times. The average particle size of the dolomite powders used was 0.3 mm, and the heating temperature was 700 °C with various holding times (90 and 120 minutes). Under CO2, dolomite decomposed directly to CaCO3 accompanied by the formation of MgO; no evidence was found for the formation of either CaO or MgCO3. Under air, the simultaneous formation of CaCO3 and CaO accompanied dolomite decomposition.

  10. Stress corrosion cracking of nickel base alloys characterization and prediction

    International Nuclear Information System (INIS)

    Santarini, G.; Pinard-Legry, G.

    1988-01-01

    For many years, studies have been carried out in several laboratories to characterize the IGSCC (Intergranular Stress Corrosion Cracking) behaviour of nickel base alloys in aqueous environments. Because of their relative shortness, CERTs (Constant Extension Rate Tests) have been extensively used, especially at the Corrosion Department of the CEA. Until recently, however, the results obtained with this method remained qualitative. This paper presents a first approach to a quantitative interpretation of CERT results. The basic datum used is the crack trace depth distribution determined on a specimen section at the end of a CERT. It is shown that this information can be used to calculate initiation and growth parameters which quantitatively characterize the IGSCC phenomenon. Moreover, the proposed rationale should lead to the determination of intrinsic cracking parameters, and thus to in-service behaviour prediction.

  11. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    Science.gov (United States)

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with the gravesoil of human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  12. Decomposition performance of animals as an indicator of stress acting on beech-forest ecosystems - microcosm experiments with carbon-14-labelled litter components. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, M.; Wolters, V.

    1988-01-01

    The effect of acid rain and heavy metals on the biotic interactions in the soil of beech forests with mull, moder, and limed moder humus forms was investigated with the aid of close-to-nature microcosm systems. The parameters used were the decomposition of carbon-14-labelled litter components and the turnover of C, N, and P by the microflora. As the results show, increased proton input bears on nearly every stage of the decomposition process in mull soils. As a result, litter may accumulate on the ground, with first signs of humus disintegration in the mineral soil. A direct relation between the acidity of the environment and the extent of decomposition inhibition does not exist. Despite wide-ranging impairment of edaphic animals, the activity of the ground fauna is still to be considered the most important buffer system of base-rich soils. Acidic conditions in the beech forest soils with the humus form 'moder' led to drastic inhibition of litter decomposition, to a change in the effect of edaphic animals, and to an increase in N mineralization. The grazing animals frequently aggravate the decomposition inhibition resulting from acid precipitation. The comparison of the decomposition process in a moder soil with that in a mull soil showed acidic soils to be on a lower biological buffer level than base-rich soils. The main buffer capacity of acidic soils lies in the microflora, which is adapted to sudden increases in acidity and recovers quickly. In the opinion of the authors, simple liming is not enough to increase the long-term biogenic stability of a forest ecosystem. A stabilizing effect of the fauna, for instance on nitrogen storage, is possible only if forest care measures are carried out, for instance careful loosening of the mineral soil, which will attract earthworm species that penetrate deeply into the soil. (orig./MG) With 12 refs., 6 figs.

  13. [Cointegration test and variance decomposition for the relationship between economy and environment based on material flow analysis in Tangshan City Hebei China].

    Science.gov (United States)

    2015-12-01

    The material flow account of Tangshan City was established by the material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among the indexes of gross domestic product (GDP), direct material input (DMI), and domestic processed output (DPO) were investigated after a unit root hypothesis test, Johansen cointegration test, vector error correction model, impulse response function and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell with volatility. A long-term stable cointegration relationship existed between GDP, DMI and DPO, and their interaction showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by short-term dynamic adjustment of indicators inside and outside the system; ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on the economy gradually increased. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's own contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, depending mostly on material input, which has caused high energy consumption and serious environmental pollution.
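
    The econometric chain summarized above (Johansen cointegration test, then a forecast-error variance decomposition) can be sketched with statsmodels; the GDP/DMI/DPO series below are random stand-ins, since the original data are not reproduced in this record.

```python
# Johansen test for the cointegration rank, then a VAR-based forecast-error
# variance decomposition at a 10-step horizon.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.standard_normal((20, 3)).cumsum(axis=0),
                    columns=["GDP", "DMI", "DPO"])   # stand-in annual series

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print(jres.lr1)    # trace statistics
print(jres.cvt)    # corresponding critical values

res = VAR(data).fit(maxlags=2)
fevd = res.fevd(10)
print(fevd.decomp[:, -1, :])   # variance shares at the 10-step horizon
```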

  14. FABRICATION OF CNTS BY TOLUENE DECOMPOSITION IN A NEW REACTOR BASED ON AN ATMOSPHERIC PRESSURE PLASMA JET COUPLED TO A CVD SYSTEM

    Directory of Open Access Journals (Sweden)

    FELIPE RAMÍREZ-HERNÁNDEZ

    2017-03-01

    Full Text Available Here, we present a method to produce carbon nanotubes (CNTs) based on the coupling of two conventional techniques used for the preparation of nanostructures: an arc-jet as a source of plasma and a chemical vapour deposition (CVD) system. We call this system an "atmospheric pressure plasma (APP) enhanced CVD" (APPE-CVD). This reactor was used to grow CNTs on non-flat aluminosilicate substrates by the decomposition of toluene (carbon source) in the presence of ferrocene (as a catalyst). Both CNTs and carbon by-products were collected at three different temperatures (780, 820 and 860 °C) in different regions of the APPE-CVD system. These samples were analysed by thermogravimetric analysis (TGA and DTG), scanning electron microscopy (SEM) and Raman spectroscopy in order to determine the effect of the APP on the thermal stability of the as-grown CNTs. It was found that the amount of metal catalyst in the synthesised CNTs is reduced by applying APP, with 820 °C being the optimal temperature to produce CNTs with a high yield and carbon purity (95 wt.%). In contrast, when the synthesis temperature was fixed at 780 °C or 860 °C, amorphous carbon or CNTs with various structural defects, respectively, were formed in the APPE-CVD reactor. We recommend the use of non-flat aluminosilicate particles as supports to increase CNT yield and facilitate the removal of deposits from the substrate surface. The approach implemented here may be useful to produce these nanostructures on a gram scale for use in basic studies, and may also be scaled up for mass production.

  15. Study of the Thermal Decomposition of PFPEs Lubricants on a Thin DLC Film Using Finitely Extensible Nonlinear Elastic Potential Based Molecular Dynamics Simulation

    International Nuclear Information System (INIS)

    Deb Nath, S.K.; Wong, C.H.

    2014-01-01

    Perfluoropolyethers (PFPEs) are widely used as hard disk lubricants, protecting the carbon overcoat and reducing friction between the hard disk interface and the head as it moves while reading and writing data. How polar end groups are detached from PFPE Zdol lubricant molecules on a DLC surface as temperature rises is described by considering the effect of temperature on the bond-break density of PFPE Zdol, using the coarse-grained bead-spring model based on the finitely extensible nonlinear elastic potential. As PFPE Z contains no polar end groups, the effects of temperature on its bond-break density (number of broken bonds/total number of bonds) are not as significant as for PFPE Zdol. The effects of temperature on the bond-break density of PFPE Z on a DLC surface are also discussed with the help of graphical results, as is how the bond-breaking phenomenon affects the end-bead density of PFPE Z and PFPE Zdol on the DLC surface. How the overall bond length of PFPE Zdol increases with increasing temperature, which is responsible for its decomposition, is discussed with the help of graphical results. Under HAMR conditions, as PFPE Z and PFPE Zdol are not suitable lubricants on a hard disk surface, more investigation is needed to obtain a suitable lubricant. We study the effect of bond breaking for the nonfunctional lubricant PFPE Z, functional lubricants such as PFPE Zdol and PFPE Ztetraol, and multidentate functional lubricants such as ARJ-DS, ARJ-DD, and OHJ-DS on a DLC substrate with increasing temperature, when all of the lubricants are heated isothermally on the DLC substrate, using the coarse-grained bead-spring model in molecular dynamics simulations, and a lubricant suitable for a DLC substrate at high temperature is selected.

  16. Study of the Thermal Decomposition of PFPEs Lubricants on a Thin DLC Film Using Finitely Extensible Nonlinear Elastic Potential Based Molecular Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    S. K. Deb Nath

    2014-01-01

    Full Text Available Perfluoropolyethers (PFPEs) are widely used as hard disk lubricants, protecting the carbon overcoat and reducing friction between the hard disk interface and the head as it moves while reading and writing data. How polar end groups are detached from PFPE Zdol lubricant molecules on a DLC surface as temperature rises is described by considering the effect of temperature on the bond-break density of PFPE Zdol, using the coarse-grained bead-spring model based on the finitely extensible nonlinear elastic potential. As PFPE Z contains no polar end groups, the effects of temperature on its bond-break density (number of broken bonds/total number of bonds) are not as significant as for PFPE Zdol. The effects of temperature on the bond-break density of PFPE Z on a DLC surface are also discussed with the help of graphical results, as is how the bond-breaking phenomenon affects the end-bead density of PFPE Z and PFPE Zdol on the DLC surface. How the overall bond length of PFPE Zdol increases with increasing temperature, which is responsible for its decomposition, is discussed with the help of graphical results. Under HAMR conditions, as PFPE Z and PFPE Zdol are not suitable lubricants on a hard disk surface, more investigation is needed to obtain a suitable lubricant. We study the effect of bond breaking for the nonfunctional lubricant PFPE Z, functional lubricants such as PFPE Zdol and PFPE Ztetraol, and multidentate functional lubricants such as ARJ-DS, ARJ-DD, and OHJ-DS on a DLC substrate with increasing temperature, when all of the lubricants are heated isothermally on the DLC substrate, using the coarse-grained bead-spring model in molecular dynamics simulations, and a lubricant suitable for a DLC substrate at high temperature is selected.
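
    For reference, the finitely extensible nonlinear elastic (FENE) bond potential that underlies the coarse-grained bead-spring model in both versions of this record can be sketched as follows; the parameter values are the common Kremer-Grest choices, not those of the paper.

```python
# FENE bond energy and restoring force; the energy diverges as the bond
# length r approaches the maximum extension R0, so strongly stretched bonds
# are counted as "broken" in bond-break statistics of such models.
import numpy as np

K, R0 = 30.0, 1.5   # spring constant and maximum extension (LJ units)

def fene_energy(r):
    r = np.asarray(r, dtype=float)
    return -0.5 * K * R0**2 * np.log(1.0 - (r / R0) ** 2)

def fene_force(r):
    """-dV/dr: the restoring force pulling a stretched bond back."""
    r = np.asarray(r, dtype=float)
    return -K * r / (1.0 - (r / R0) ** 2)

print(fene_energy(np.array([0.5, 1.0, 1.4])))
```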

  17. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering.

    Science.gov (United States)

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-05-31

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure-equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency components of the IMFs, long-window TFPF is employed for the high-frequency components, and the noise components are wiped off directly; at last the final signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with the traditional wavelet, Kalman filter and fixed-window-length TFPF methods.

  18. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    Directory of Open Access Journals (Sweden)

    Chong Shen

    2016-05-01

    Full Text Available The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure-equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency components of the IMFs, long-window TFPF is employed for the high-frequency components, and the noise components are wiped off directly; at last the final signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with the traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
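
    The SEEMD step shared by both versions of this record (EMD, sample entropy per IMF, then routing components to different treatments) can be sketched roughly as below; the test signal and the entropy thresholds are invented, and the TFPF filtering itself is omitted.

```python
# Decompose with EMD, score IMFs by sample entropy, and split them into
# noise-like and signal-like groups for different downstream treatment.
import numpy as np
from PyEMD import EMD

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def matches(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m)])
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return ((dist <= r).sum() - len(emb)) / 2   # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

t = np.linspace(0, 1, 2000)
signal = np.sin(40 * np.pi * t) + 0.3 * np.random.randn(t.size)
imfs = EMD().emd(signal)
entropies = [sample_entropy(imf[:400]) for imf in imfs]
noise_like  = [imf for imf, e in zip(imfs, entropies) if e > 1.5]
signal_like = [imf for imf, e in zip(imfs, entropies) if e <= 1.5]
```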

  19. Effects of mindfulness-based stress reduction on depression, anxiety, stress and mindfulness in Korean nursing students.

    Science.gov (United States)

    Song, Yeoungsuk; Lindquist, Ruth

    2015-01-01

    Nursing students often experience depression, anxiety, stress and decreased mindfulness which may decrease their patient care effectiveness. Mindfulness-based stress reduction (MBSR) effectively reduced depression, anxiety and stress, and increased mindfulness in previous research with other populations, but there is sparse evidence regarding its effectiveness for nursing students in Korea. To examine the effects of MBSR on depression, anxiety, stress and mindfulness in Korean nursing students. A randomized controlled trial. Fifty (50) nursing students at KN University College of Nursing in South Korea were randomly assigned to two groups. Data from 44 students, MBSR (n=21) and a wait list (WL) control (n=23) were analyzed. The MBSR group practiced mindfulness meditation for 2 h every week for 8 weeks. The WL group did not receive MBSR intervention. Standardized self-administered questionnaires of depression, anxiety, stress and mindfulness were administered at the baseline prior to the MBSR program and at completion (at 8 weeks). Compared with WL participants, MBSR participants reported significantly greater decreases in depression, anxiety and stress, and greater increase in mindfulness. A program of MBSR was effective when it was used with nursing students in reducing measures of depression, anxiety and stress, and increasing their mindful awareness. MBSR shows promise for use with nursing students to address their experience of mild depression, anxiety and stress, and to increase mindfulness in academic and clinical work, warranting further study. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  1. Thermal plasma decomposition of fluorinated greenhouse gases

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Soo Seok; Watanabe, Takayuki [Tokyo Institute of Technology, Yokohama (Japan); Park, Dong Wha [Inha University, Incheon (Korea, Republic of)

    2012-02-15

    Fluorinated compounds, mainly used in the semiconductor industry, are potent greenhouse gases. Recently, thermal plasma gas scrubbers have been gradually replacing conventional burn-wet type gas scrubbers, which are based on the combustion of fossil fuels, because high conversion efficiency and control of byproduct generation are achievable in chemically reactive high-temperature thermal plasma. Chemical equilibrium compositions at high temperature and numerical analysis of the complex thermal flow in the thermal plasma decomposition system are used to predict the process of thermal decomposition of fluorinated gas. In order to increase the economic feasibility of the thermal plasma decomposition process, increasing the thermal efficiency of the plasma torch and enhancing the gas mixing between the thermal plasma jet and the waste gas are discussed. In addition, novel thermal plasma systems to be applied in thermal plasma gas treatment are introduced in the present paper.

  2. Investigation on stresses of superconductors under pulsed magnetic fields based on multiphysics model

    International Nuclear Information System (INIS)

    Yang, Xiaobin; Li, Xiuhong; He, Yafeng; Wang, Xiaojun; Xu, Bo

    2017-01-01

    Highlights: • The differential equation including temperature and magnetic field was derived for a long cylindrical superconductor. • Thermal stress and electromagnetic stress were studied together under pulsed-field magnetization. • The distributions of the magnetic field, the temperature and the stresses are studied and compared for two pulsed fields of different duration. • The roles that thermal stress and electromagnetic stress play in the process of pulsed-field magnetization are discussed. - Abstract: A multiphysics model for the numerical computation of the stresses, trapped field and temperature distribution of an infinitely long superconducting cylinder is proposed, based on which the stresses, including the thermal stresses and the mechanical stresses due to the Lorentz force, and the trapped fields in the superconductor subjected to pulsed magnetic fields are analyzed. By comparing the results under pulsed magnetic fields with different pulse durations, it is found that both the mechanical stress due to the electromagnetic force and the thermal stress due to the temperature gradient contribute to the total stress level in the superconductor. For pulsed magnetic fields of short duration, the thermal stress is the dominant contribution to the total stress, because the heat generated by AC loss builds up a significant temperature gradient in such short durations. However, for a pulsed field with a long duration, the gradients of temperature and flux, as well as the maximal tensile stress, are much smaller. The results of this paper are meaningful for the design and manufacture of superconducting permanent magnets.

  3. Investigation on stresses of superconductors under pulsed magnetic fields based on multiphysics model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaobin, E-mail: yangxb@lzu.edu.cn; Li, Xiuhong; He, Yafeng; Wang, Xiaojun; Xu, Bo

    2017-04-15

    Highlights: • The differential equation including temperature and magnetic field was derived for a long cylindrical superconductor. • Thermal stress and electromagnetic stress were studied together under pulsed-field magnetization. • The distributions of the magnetic field, the temperature and the stresses are studied and compared for two pulsed fields of different duration. • The roles that thermal stress and electromagnetic stress play in the process of pulsed-field magnetization are discussed. - Abstract: A multiphysics model for the numerical computation of the stresses, trapped field and temperature distribution of an infinitely long superconducting cylinder is proposed, based on which the stresses, including the thermal stresses and the mechanical stresses due to the Lorentz force, and the trapped fields in the superconductor subjected to pulsed magnetic fields are analyzed. By comparing the results under pulsed magnetic fields with different pulse durations, it is found that both the mechanical stress due to the electromagnetic force and the thermal stress due to the temperature gradient contribute to the total stress level in the superconductor. For pulsed magnetic fields of short duration, the thermal stress is the dominant contribution to the total stress, because the heat generated by AC loss builds up a significant temperature gradient in such short durations. However, for a pulsed field with a long duration, the gradients of temperature and flux, as well as the maximal tensile stress, are much smaller. The results of this paper are meaningful for the design and manufacture of superconducting permanent magnets.

  4. A strategy for accommodating residual stresses in the assessment of repair weldments based upon measurement of near surface stresses

    International Nuclear Information System (INIS)

    Mcdonald, E.J.; Hallam, K.R.; Flewitt, P.E.J.

    2005-01-01

    On many occasions repairs are undertaken to ferritic steel weldments on plant, either during construction or to remove service-induced defects. These repaired weldments are subsequently put into service with or without a post-weld heat treatment. In either case, but particularly the latter, there is a need to accommodate the associated residual stresses in structural integrity assessments such as those based upon the R6 failure avoidance procedure. Although in some circumstances the residual macro-stresses developed within weldments of components and structures can be calculated, this is not so readily achieved in the case of residual stresses introduced by repair welds. There is a range of physical and mechanical techniques available for the measurement of macro-residual stresses. Of these, X-ray diffraction has the advantage that it is essentially non-destructive and offers the potential for evaluating stresses in the near-surface layer. Although for many structural integrity assessments both the magnitude and the distribution of residual stresses have to be accommodated, it is not practical to make destructive measurements on weld-repaired components and structures to establish the through-section distribution of stresses. An approach is to derive a description of the appropriate macro-stresses by a combination of measurement and calculation on trial ferritic steel repair weldments. Surface measurements on the plant can then be made to establish the relationship between the repaired component or structure and the trial weld, and thereby improve confidence in the stresses and distributions predicted from the near-surface measured values. Hence X-ray diffraction measurements at the near-surface of the plant weldment can be used to underwrite the quality of the repair by confirming the magnitude and distribution of residual stresses used in the integrity assessment to demonstrate continued safe operation.

  5. Stress

    DEFF Research Database (Denmark)

    Keller, Hanne Dauer

    2015-01-01

    The chapter deals with stress as an emotion, drawing primarily on the few qualitative studies that have been made of the course of stress.

  6. Stress !!!

    OpenAIRE

    Fledderus, M.

    2012-01-01

    Two out of five UT students suffer from severe study stress, so severe that it even hampers their private lives. These figures match the national picture of stress among students. Together with 14 other university and college newspapers, UT Nieuws surveyed almost 5500 students. Strikingly, male students from Twente appear to worry much less about their studies, whereas among female students stress is very high compared with the national average.

  7. College Students Coping with Interpersonal Stress: Examining a Control-Based Model of Coping

    Science.gov (United States)

    Coiro, Mary Jo; Bettis, Alexandra H.; Compas, Bruce E.

    2017-01-01

    Objective: The ways that college students cope with stress, particularly interpersonal stress, may be a critical factor in determining which students are at risk for impairing mental health disorders. Using a control-based model of coping, the present study examined associations between interpersonal stress, coping strategies, and symptoms.…

  8. Annealing effects on strain and stress sensitivity of polymer optical fibre based sensors

    DEFF Research Database (Denmark)

    Pospori, A.; Marques, C. A. F.; Zubel, M. G.

    2016-01-01

    The annealing effects on the strain and stress sensitivity of polymer optical fibre Bragg grating sensors after their photo-inscription are investigated. PMMA optical fibre based Bragg grating sensors are first photo-inscribed and then placed into hot water for annealing. Strain, stress...... fibre tends to increase the strain, stress and force sensitivity of the photo-inscribed sensor....

  9. Smartphone-Based Self-Assessment of Stress in Healthy Adult Individuals

    DEFF Research Database (Denmark)

    Þórarinsdóttir, Helga; Kessing, Lars Vedel; Faurholt-Jepsen, Maria

    2017-01-01

    BACKGROUND: Stress is a common experience in today's society. Smartphone ownership is widespread, and smartphones can be used to monitor health and well-being. Smartphone-based self-assessment of stress can be done in naturalistic settings and may potentially reflect real-time stress level...

  10. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.
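
    The record reports an apparent activation energy extracted from temperature-dependent decomposition rates. A standard way to obtain such a value is an Arrhenius fit of rate constants measured at several temperatures; the short Python sketch below illustrates the calculation with placeholder rate constants, not data from the paper.

```python
# Hedged sketch: estimating an apparent activation energy from rate
# constants at several temperatures via the Arrhenius relation
#   ln k = ln A - Ea / (R * T)
import numpy as np

R = 8.314  # J/(mol*K), universal gas constant

T = np.array([313.0, 333.0, 353.0, 373.0])      # temperatures, K (assumed)
k = np.array([2.1e-4, 8.9e-4, 3.2e-3, 1.0e-2])  # rate constants, 1/s (assumed)

# Linear regression of ln k against 1/T: slope = -Ea/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # apparent activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor, 1/s

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g} 1/s")
```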

  11. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...

  12. 12 CFR 652.100 - Audit of the risk-based capital stress test.

    Science.gov (United States)

    2010-01-01

    12 CFR Banks and Banking; AGRICULTURAL MORTGAGE CORPORATION FUNDING AND FISCAL AFFAIRS; Risk-Based Capital Requirements; § 652.100 Audit of the risk-based capital stress test. You must have a qualified, independent external auditor review...

  13. Mindfulness-based stress reduction: an intervention to enhance the effectiveness of nurses' coping with work-related stress.

    Science.gov (United States)

    Smith, Sarah A

    2014-06-01

    This critical literature review explored the current state of the science regarding mindfulness-based stress reduction (MBSR) as a potential intervention to improve the ability of nurses to effectively cope with stress. Literature sources include searches from EBSCOhost, Gale PowerSearch, ProQuest, PubMed Medline, Google Scholar, Online Journal of Issues in Nursing, and reference lists from relevant articles. Empirical evidence regarding utilizing MBSR with nurses and other healthcare professionals suggests several positive benefits including decreased stress, burnout, and anxiety; and increased empathy, focus, and mood. Nurse use of MBSR may be a key intervention to help improve nurses' ability to cope with stress and ultimately improve the quality of patient care provided. © 2014 NANDA International, Inc.

  14. The Relationship Between Aviators' Home-Based Stress To Work Stress and Self- Perceived Performance

    National Research Council Canada - National Science Library

    Fiedler, Edna

    2000-01-01

    .... Despite the importance placed on the family as a source of social support, there have been few systematic studies of the relationships between pilot family life, workplace stress, and performance...

  15. Effects of mindfulness-based stress reduction on perceived stress and psychological health in patients with tension headache.

    Science.gov (United States)

    Omidi, Abdollah; Zargar, Fatemeh

    2015-11-01

    Programs for improving health status of patients with illness related to pain, such as headache, are often still in their infancy. Mindfulness-based stress reduction (MBSR) is a new psychotherapy that appears to be effective in treating chronic pain and stress. This study evaluated efficacy of MBSR in treatment of perceived stress and mental health of clients who have tension headache. This study is a randomized clinical trial. Sixty patients with tension type headache according to the International Headache Classification Subcommittee were randomly assigned to the Treatment As Usual (TAU) group or experimental group (MBSR). The MBSR group received eight weekly classes with 12-min sessions. The sessions were based on MBSR protocol. The Brief Symptom Inventory (BSI) and Perceived Stress Scale (PSS) were administered in the pre- and posttreatment period and at 3 months follow-up for both the groups. The mean of total score of the BSI (global severity index; GSI) in the MBSR group was 1.63 ± 0.56 before the intervention, which was significantly reduced to 0.73 ± 0.46 and 0.93 ± 0.34 after the intervention and at the follow-up sessions, respectively (P < 0.001). In addition, the MBSR group showed lower scores in perceived stress in comparison with the control group at posttest evaluation. The mean of perceived stress before the intervention was 16.96 ± 2.53 and was changed to 12.7 ± 2.69 and 13.5 ± 2.33 after the intervention and at the follow-up sessions, respectively (P < 0.001). On the other hand, the mean of GSI in the TAU group was 1.77 ± 0.50 at pretest, which was changed to 1.59 ± 0.52 and 1.78 ± 0.47 at posttest and follow-up, respectively (P < 0.001). Also, the mean of perceived stress in the TAU group at pretest was 15.9 ± 2.86 and was changed to 16.13 ± 2.44 and 15.76 ± 2.22 at posttest and follow-up, respectively (P < 0.001). MBSR could reduce stress and improve general mental health in patients with tension headache.

  16. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chloro phenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in waste water, and the effect of the water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, was studied. The results show that an anionic surfactant like sodium dodecylsulfate (SDS) improves the radiation decomposition yield of ortho-chloro phenol, while a cationic surfactant like cetyl trimethylammonium chloride (CTAC) improves the radiation decomposition yield of butyl alcohol. A similar behavior is expected for those alcohols with water solubility close to the studied ones. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols. However, the radiation decomposition yield increased when surfactant concentrations were above the CMC. Decomposition of aromatic alcohols was more marked than that of linear alcohols. In a mixture of alcohols and chloro phenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, the hydroxyl radical and other reactive species formed in water radiolysis, producing a positive catalytic effect on the decomposition of alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol like ortho-chloro phenol contained an additional chlorine atom, the decomposition of this compound was almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, like in advanced oxidation processes, or in combined treatment such as

  17. Complete Decomposition of Li2CO3 in Li–O2 Batteries Using Ir/B4C as Noncarbon-Based Oxygen Electrode

    Energy Technology Data Exchange (ETDEWEB)

    Song, Shidong; Xu, Wu; Zheng, Jianming; Luo, Langli; Engelhard, Mark H.; Bowden, Mark E.; Liu, Bin; Wang, Chong-Min; Zhang, Ji-Guang

    2017-02-10

    Incomplete decomposition of Li2CO3 during the charge process is a critical barrier for rechargeable Li-O2 batteries. Here we report complete decomposition of Li2CO3 in Li-O2 batteries using an ultrafine iridium-decorated boron carbide (Ir/B4C) nanocomposite as the oxygen electrode. The systematic investigation on charging the Li2CO3-preloaded Ir/B4C electrode in an ether-based electrolyte demonstrates that the Ir/B4C electrode can decompose Li2CO3 with an efficiency close to 100% at below 4.37 V. In contrast, bare B4C without the Ir electrocatalyst can only decompose 4.7% of the preloaded Li2CO3. The reaction mechanism of Li2CO3 decomposition in the presence of the Ir/B4C electrocatalyst has been further investigated. A Li-O2 battery using Ir/B4C as the oxygen electrode material shows highly enhanced cycling stability compared with that using a bare B4C oxygen electrode. These results clearly demonstrate that Ir/B4C is an effective oxygen electrode material to completely decompose Li2CO3 at relatively low charge voltages and is of significant importance in improving the cycle performance of aprotic Li-O2 batteries.

  18. Instantaneous 3D EEG Signal Analysis Based on Empirical Mode Decomposition and the Hilbert–Huang Transform Applied to Depth of Anaesthesia

    Directory of Open Access Journals (Sweden)

    Mu-Tzu Shih

    2015-02-01

    Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which anaesthesia is administered during surgery. We can monitor the DoA by observing the patient's electroencephalography (EEG) signals during the surgical procedure. Typically, high frequency EEG signals indicate the patient is conscious, while low frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient's EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs). The Hilbert spectral analysis method is then used to obtain instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed by using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective compared to the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
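
    The EMD-plus-Hilbert pipeline described above is straightforward to prototype. The sketch below decomposes a synthetic signal into IMFs and computes each IMF's instantaneous amplitude and frequency; it assumes the third-party PyEMD package (pip install EMD-signal), and the synthetic "EEG" is a stand-in for real patient data.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

imfs = EMD().emd(sig)                        # rows are intrinsic mode functions

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)                  # analytic signal via Hilbert transform
    amp = np.abs(analytic)                   # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
    print(f"IMF {i}: mean frequency {inst_freq.mean():.2f} Hz, "
          f"mean amplitude {amp.mean():.3f}")
```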

  19. Application of Computational Methods Mm2 and Gussian for Studing Unimolecular Decomposition of Vinil Ethers based on the Mechanism of Hydrogen Bonding

    OpenAIRE

    Behnaz Shahrokh; Garnik N. Sargsyan; Arkadi B. Harutyunyan

    2012-01-01

    Investigations of the unimolecular decomposition of vinyl ethyl ether (VEE), vinyl propyl ether (VPE) and vinyl butyl ether (VBE) have shown that activation of the molecule of a ether results in formation of a cyclic construction - the transition state (TS), which may lead to the displacement of the thermodynamic equilibrium towards the reaction products. The TS is obtained by applying energy minimization relative to the ground state of an ether under the program MM2 when...

  20. Comparison of stress-based and strain-based creep failure criteria for severe accident analysis

    International Nuclear Information System (INIS)

    Chavez, S.A.; Kelly, D.L.; Witt, R.J.; Stirn, D.P.

    1995-01-01

    We conducted a parametric analysis of stress-based and strain-based creep failure criteria to determine if there is a significant difference between the two criteria for SA533B vessel steel under severe accident conditions. Parametric variables include debris composition, system pressure, and creep strain histories derived from different testing programs and mathematically fit, with and without tertiary creep. Results indicate significant differences between the two criteria. Stress gradient plays an important role in determining which criterion will predict failure first. Creep failure was not very sensitive to different creep strain histories, except near the transition temperature of the vessel steel (900 K to 1000 K). Statistical analyses of creep failure data from four independent sources indicate that these data may be pooled, with a spline point at 1000 K. We found the Manson-Haferd parameter to have better failure predictive capability than the Larson-Miller parameter for the data studied. (orig.)
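
    For reference, the two time-temperature creep parameters compared in the record are simple closed forms. The sketch below evaluates both; the constants C, T_a and log10(t_a) are generic placeholder values, not the SA533B fits used in the study.

```python
import math

def larson_miller(T_kelvin, t_rupture_h, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r))."""
    return T_kelvin * (C + math.log10(t_rupture_h))

def manson_haferd(T_kelvin, t_rupture_h, T_a=311.0, log_t_a=18.0):
    """Manson-Haferd parameter: MHP = (log10(t_r) - log10(t_a)) / (T - T_a)."""
    return (math.log10(t_rupture_h) - log_t_a) / (T_kelvin - T_a)

# Example: a vessel wall element at 950 K expected to rupture in 2 hours
print(f"LMP = {larson_miller(950.0, 2.0):.0f}")
print(f"MHP = {manson_haferd(950.0, 2.0):.4f}")
```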

  1. Evaluation of a Web-Based Holistic Stress Reduction Pilot Program Among Nurse-Midwives.

    Science.gov (United States)

    Wright, Erin M

    2018-06-01

    Work-related stress among midwives results in secondary traumatic stress, posttraumatic stress disorder, and job attrition. The purpose of this pilot project was to evaluate the effectiveness of a web-based program using holistic modalities for stress reduction and improved coping among certified nurse-midwives. A convenience sample of 10 midwives participated in a web-based holistic stress reduction intervention using yoga, mindfulness-based stress reduction, and meditation four days each week over 4 weeks. Participants completed pre- and postintervention questionnaires (Perceived Stress Scale [PSS] and the Coping Self-Efficacy Scale [CSES]) for evaluation of effectiveness. The PSS means showed improvement in midwives' stress (16.4 to 12.3). The CSES means showed improvement in coping (174.8 to 214.5). Improvement was shown in each subscale of the CSES ("uses problem-focused coping": 19.2%; "stops unpleasant thoughts and emotions": 20.3%; and "gets support from family and friends": 16.6%). Findings suggest the potential for stress reduction and improved coping skills after using holistic techniques in a web-based format within a cohort of nurse-midwives. Further research of web-based, holistic interventions for stress reduction among midwives is warranted.

  2. Development of a Faith-Based Stress Management Intervention in a Rural African American Community.

    Science.gov (United States)

    Bryant, Keneshia; Moore, Todd; Willis, Nathaniel; Hadden, Kristie

    2015-01-01

    Faith-based mental health interventions developed and implemented using a community-based participatory research (CBPR) approach hold promise for reaching rural African Americans and addressing health disparities. To describe the development, challenges, and lessons learned from the Trinity Life Management, a faith-based stress management intervention in a rural African American faith community. The researchers used a CBPR approach by partnering with the African American faith community to develop a stress management intervention. Development strategies include working with key informants, focus groups, and a community advisory board (CAB). The community identified the key concepts that should be included in a stress management intervention. The faith-based "Trinity Life Management" stress management intervention was developed collaboratively by a CAB and an academic research team. The intervention includes stress management techniques that incorporate Biblical principles and information about the stress-distress-depression continuum.

  3. Stress !!!

    NARCIS (Netherlands)

    Fledderus, M.

    2012-01-01

    Two out of five UT students suffer from severe study stress, so badly that it even interferes with their private lives. These figures match the national picture of stress among students. Together with 14 other university and college magazines, UT Nieuws surveyed almost 5500 students.

  4. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals which form a stable lithium phase in binary alloys on the formation of intermetallic species in ternary amalgams, and their effect on thermal decomposition in contact with water, is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Cd(Hg) binary amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of Li-Cd(Hg) in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and these results are compared with the decomposition of the binary amalgam Li(Hg). The decomposition rate is constant during one hour for binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of ternary systems. A reaction mechanism that considers an intermetallic species participating in the activated complex is proposed and a kinetic law is suggested. (author)

  5. Monitoring based maintenance utilizing actual stress sensory technology

    Science.gov (United States)

    Sumitro, Sunaryo; Kurokawa, Shoji; Shimano, Keiji; Wang, Ming L.

    2005-06-01

    In recent years, many infrastructures have been deteriorating. In order to maintain sustainability of those infrastructures which have significant influence on social lifelines, economical and rational maintenance management should be carried out to evaluate the life cycle cost (LCC). The development of structural health monitoring systems, such as deriving evaluation techniques for the field structural condition of existing structures and identification techniques for the significant engineering properties of new structures, can be considered as the first step in resolving the above problem. New innovative evaluation methods need to be devised to identify the deterioration of infrastructures, e.g. steel tendons, cables in cable-stayed bridges and strands embedded in pre- or post-tensioned concrete structures. One of the possible solutions that show 'AtoE' characteristics, i.e., (a)ccuracy, (b)enefit, (c)ompendiousness, (d)urability and (e)ase of operation, elasto-magnetic (EM) actual stress sensory technology utilizing the sensitivity of incremental magnetic permeability to stress change, has been developed. Numerous verification tests on various steel materials have been conducted. By comparing with load cell, strain gage and other sensory technology measurement results, the actual stresses of steel tendons in a pre-stressed concrete structure at the following stages have been thoroughly investigated: (i) pre-stress change due to set-loss (anchorage slippage) at the tendon fixation stage; (ii) pre-stress change due to the tendon relaxation stage; (iii) concrete creep and shrinkage at the long term pre-stressing stage; (iv) pre-stress change in the cyclic fatigue loading stage; and (v) pre-stress change due to the re-pre-stress setting stage. As the result of this testing, it is confirmed that EM sensory technology enables one to measure actual stress in steel wire, strands and steel bars precisely without destroying the polyethylene covering sheath and enables

  6. Erbium hydride decomposition kinetics.

    Energy Technology Data Exchange (ETDEWEB)

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
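
    Redhead's peak-maximum method reduces to a one-line formula for first-order desorption: E ≈ R*Tp*(ln(ν*Tp/β) - 3.64), where Tp is the desorption peak temperature, β the heating rate, and ν an assumed attempt frequency. The numbers below are placeholders, not the erbium hydride measurements.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def redhead_energy(Tp, beta, nu=1e13):
    """Activation energy (kcal/mol) from peak temperature Tp (K) and
    heating rate beta (K/s), first-order Redhead approximation."""
    return R * Tp * (math.log(nu * Tp / beta) - 3.64)

print(f"E_A = {redhead_energy(Tp=1050.0, beta=1.0):.1f} kcal/mol")
```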

  7. Art of spin decomposition

    International Nuclear Information System (INIS)

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-01-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  8. Web-Based and Mobile Stress Management Intervention for Employees: A Randomized Controlled Trial

    OpenAIRE

    Heber, Elena; Lehr, Dirk; Ebert, David Daniel; Berking, Matthias; Riper, Heleen

    2016-01-01

    Background: Work-related stress is highly prevalent among employees and is associated with adverse mental health consequences. Web-based interventions offer the opportunity to deliver effective solutions on a large scale; however, the evidence is limited and the results conflicting. Objective: This randomized controlled trial evaluated the efficacy of guided Web- and mobile-based stress management training for employees. Methods: A total of 264 employees with elevated symptoms of stress (Perce...

  9. Acoustic Emission Based Surveillance System for Prediction of Stress Fractures

    Science.gov (United States)

    2007-09-01

    aging are susceptible to such fractures in contexts of osteoporosis, diabetes, cerebral palsy, fibrous dysplasia and osteogenesis imperfecta. This...disease, or healthy people who have excessive exercise regimes (soldiers and athletes) experience these fractures [2]. Stress fractures interrupt

  10. Evaluation of Stress Parameters Based on Heart Rate Variability Measurements

    OpenAIRE

    Uysal, Fatma; Tokmakçı, Mahmut

    2018-01-01

    In this study, heart rate variability measurements and analysis were carried out with the help of ECG recordings to show how autonomic nervous system activity changes. So as to evaluate the stress-related parameters of the study, the situations of relaxation, a Stroop color/word test, a mental test and an auditory stimulus that would stress someone out were applied to six volunteer participants in a laboratory environment. A total of seven minutes of ECG recording was taken and analyses were made in time and frequency d...
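
    Time-domain HRV parameters of the kind used in such studies are easy to compute from an R-R interval series. A minimal sketch with a synthetic R-R series (not data from the study):

```python
import numpy as np

rr_ms = np.array([812, 790, 805, 830, 798, 776, 810, 825, 801], dtype=float)

sdnn = np.std(rr_ms, ddof=1)                   # overall variability, ms
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # short-term variability, ms
mean_hr = 60000.0 / rr_ms.mean()               # mean heart rate, beats per minute

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, HR = {mean_hr:.1f} bpm")
```

    Reduced SDNN and RMSSD are commonly read as markers of sympathetic dominance under stress, which is the kind of contrast such protocols look for.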

  11. The Influence of Test-Based Accountability Policies on Early Elementary Teachers: School Climate, Environmental Stress, and Teacher Stress

    Science.gov (United States)

    Saeki, Elina; Segool, Natasha; Pendergast, Laura; von der Embse, Nathaniel

    2018-01-01

    This study examined the potential influence of test-based accountability policies on school environment and teacher stress among early elementary teachers. Structural equation modeling of data from 541 kindergarten through second grade teachers across three states found that use of student performance on high-stakes tests to evaluate teachers…

  12. Smartphone-Based Self-Assessment of Stress in Healthy Adult Individuals: A Systematic Review.

    Science.gov (United States)

    Þórarinsdóttir, Helga; Kessing, Lars Vedel; Faurholt-Jepsen, Maria

    2017-02-13

    Stress is a common experience in today's society. Smartphone ownership is widespread, and smartphones can be used to monitor health and well-being. Smartphone-based self-assessment of stress can be done in naturalistic settings and may potentially reflect real-time stress level. The objectives of this systematic review were to evaluate (1) the use of smartphones to measure self-assessed stress in healthy adult individuals, (2) the validity of smartphone-based self-assessed stress compared with validated stress scales, and (3) the association between smartphone-based self-assessed stress and smartphone generated objective data. A systematic review of the scientific literature was reported and conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. The scientific databases PubMed, PsycINFO, Embase, IEEE, and ACM were searched and supplemented by a hand search of reference lists. The databases were searched for original studies involving healthy individuals older than 18 years, measuring self-assessed stress using smartphones. A total of 35 published articles comprising 1464 individuals were included for review. According to the objectives, (1) study designs were heterogeneous, and smartphone-based self-assessed stress was measured using various methods (e.g., dichotomized questions on stress, yes or no; Likert scales on stress; and questionnaires); (2) the validity of smartphone-based self-assessed stress compared with validated stress scales was investigated in 3 studies, and of these, only 1 study found a moderate statistically significant positive correlation (r=.4; P<.05); and (3) in exploratory analyses, smartphone-based self-assessed stress was found to correlate with some of the reported smartphone generated objective data, including voice features and data on activity and phone usage. Smartphones are being used to measure self-assessed stress in different contexts. The evidence of the validity of

  13. Decomposition Theory in the Teaching of Elementary Linear Algebra.

    Science.gov (United States)

    London, R. R.; Rogosinski, H. P.

    1990-01-01

    Described is a decomposition theory from which the Cayley-Hamilton theorem, the diagonalizability of complex square matrices, and functional calculus can be developed. The theory and its applications are based on elementary polynomial algebra. (KR)
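
    A numerical illustration of one result mentioned above, the Cayley-Hamilton theorem (a matrix satisfies its own characteristic polynomial); the example matrix is arbitrary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)   # characteristic polynomial coefficients, highest degree first
n = A.shape[0]

# Evaluate p(A) by Horner's scheme: here p(A) = A^2 + c1*A + c0*I
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(n)

print(np.allclose(p_of_A, 0))  # True: p(A) is the zero matrix
```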

  14. Massively Parallel Polar Decomposition on Distributed-Memory Systems

    KAUST Repository

    Ltaief, Hatem; Sukkari, Dalal E.; Esposito, Aniello; Nakatsukasa, Yuji; Keyes, David E.

    2018-01-01

    We present a high-performance implementation of the Polar Decomposition (PD) on distributed-memory systems. Building upon the QR-based Dynamically Weighted Halley (QDWH) algorithm, the key idea lies in finding the best rational approximation
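
    The record's contribution is the distributed QDWH solver itself; as a small-scale point of reference, the polar decomposition A = U_p H can be formed directly from an SVD (this is the textbook construction, not the QDWH iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

U, s, Vt = np.linalg.svd(A)
U_p = U @ Vt                    # orthogonal polar factor
H = Vt.T @ np.diag(s) @ Vt      # symmetric positive semidefinite factor

assert np.allclose(U_p @ H, A)
assert np.allclose(U_p.T @ U_p, np.eye(5))
print("A = U_p * H verified")
```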

  15. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.
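
    The core trick in the record is that, once the amplitude envelope is written as a linear combination of basis vectors modulating a sinusoid of known frequency, the fit becomes ordinary least squares. A minimal sketch (the polynomial envelope basis and carrier frequency are arbitrary choices, and the paper's perceptual weighting is omitted):

```python
import numpy as np

fs, f0 = 8000.0, 440.0                      # sample rate and carrier, Hz (assumed)
t = np.arange(512) / fs
target = (1.0 + 100.0 * t) * np.cos(2 * np.pi * f0 * t)   # toy AM signal

# Envelope basis 1, t, t^2; each basis vector modulates cos and sin carriers
basis = [t ** 0, t ** 1, t ** 2]
columns = [b * np.cos(2 * np.pi * f0 * t) for b in basis] + \
          [b * np.sin(2 * np.pi * f0 * t) for b in basis]
M = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(M, target, rcond=None)         # linear least squares
approx = M @ coef
err = np.linalg.norm(target - approx) / np.linalg.norm(target)
print(f"relative error: {err:.2e}")
```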

  16. Domain decomposition method for solving the neutron diffusion equation

    International Nuclear Information System (INIS)

    Coulomb, F.

    1989-03-01

    The aim of this work is to study methods for solving the neutron diffusion equation; we are interested in methods based on a classical finite element discretization and well suited for use on parallel computers. Domain decomposition methods seem to answer this preoccupation. This study deals with a decomposition of the domain. A theoretical study is carried out for Lagrange finite elements and some examples are given; in the case of mixed dual finite elements, the study is based on examples.

  17. A Comparison of Students' Perceptions of Stress in Parallel Problem-Based and Lecture-Based Curricula.

    Science.gov (United States)

    Wardley, C Sonia; Applegate, E Brooks; Almaleki, A Deyab; Van Rhee, James A

    2016-03-01

    A 6-year longitudinal study was conducted to compare the perceived stress experienced during a 2-year master's physician assistant program by 5 cohorts of students enrolled in either problem-based learning (PBL) or lecture-based learning (LBL) curricular tracks. The association of perceived stress with academic achievement was also assessed. Students rated their stress levels on visual analog scales in relation to family obligations, financial concerns, schoolwork, and relocation and overall on 6 occasions throughout the program. A mixed model analysis of variance examined the students' perceived level of stress by curriculum and over time. Regression analysis further examined school work-related stress after controlling for other stressors and possible lag effect of stress from the previous time point. Students reported that overall stress increased throughout the didactic year followed by a decline in the clinical year with statistically significant curricular (PBL versus LBL) and time differences. PBL students also reported significantly more stress resulting from school work than LBL students at some time points. Moreover, when the other measured stressors and possible lag effects were controlled, significant differences between PBL and LBL students' perceived stress related to school work persisted at the 8- and 12-month measurement points. Increased stress in both curricula was associated with higher achievement in overall and individual organ system examination scores. Physician assistant programs that embrace a PBL pedagogy to prepare students to think clinically may need to provide students with additional support through the didactic curriculum.

  18. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
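
    A compact sketch of the idea: compress the snapshots with a truncated POD (a plain SVD here, rather than the paper's incremental variant) and perform exact DMD in the reduced coordinates. The snapshot matrix below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 41))          # 200 dof, 41 snapshots (toy data)
X1, X2 = X[:, :-1], X[:, 1:]                # time-shifted snapshot pairs

r = 10                                      # POD truncation rank (assumed)
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]

# Reduced operator and its eigendecomposition (DMD eigenvalues)
A_tilde = Ur.T @ X2 @ Vr.T @ np.diag(1.0 / sr)
eigvals, W = np.linalg.eig(A_tilde)

modes = X2 @ Vr.T @ np.diag(1.0 / sr) @ W   # exact DMD modes in full space
print("leading DMD eigenvalues:", np.round(eigvals[:3], 3))
```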

  19. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.

  20. Stress evaluation of metallic material under steady state based on nonlinear critically refracted longitudinal wave

    Science.gov (United States)

    Mao, Hanling; Zhang, Yuhua; Mao, Hanying; Li, Xinxin; Huang, Zhenfeng

    2018-06-01

    This paper presents a study of applying nonlinear ultrasonic waves to evaluate the stress state of metallic materials under steady state. The pre-stress loading method is applied to guarantee components with steady stress. Three kinds of nonlinear ultrasonic experiments based on the critically refracted longitudinal (LCR) wave are conducted on components in which the LCR wave propagates along the x, x1 and x2 directions. Experimental results indicate the second and third order relative nonlinear coefficients monotonically increase with stress, and the normalized relationship is consistent with simplified dislocation models, which indicates the experimental result is logical. A combined ultrasonic nonlinear parameter is proposed, and three stress evaluation models in the x direction are established based on the three ultrasonic nonlinear parameters, with estimation errors below 5%. Then two stress detection models in the x1 and x2 directions are built based on the combined ultrasonic nonlinear parameter, and the stress synthesis method is applied to calculate the magnitude and direction of the principal stress. The results show the prediction error is within 5% and the angle deviation is within 1.5°. Therefore the nonlinear ultrasonic technique based on the LCR wave can be applied to nondestructively evaluate the stress of metallic materials under steady state, including both magnitude and direction.
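
    The relative nonlinear coefficients mentioned above are commonly formed from the fundamental and harmonic amplitudes of the received waveform's spectrum, e.g. beta2' = A2/A1^2 and beta3' = A3/A1^3. A sketch on a synthetic waveform (all signal parameters are assumptions):

```python
import numpy as np

fs, n = 100e6, 4096                 # sampling rate (Hz) and record length (assumed)
f0 = 92 * fs / n                    # fundamental aligned to an FFT bin (~2.25 MHz)
t = np.arange(n) / fs
x = (np.sin(2 * np.pi * f0 * t)
     + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.004 * np.sin(2 * np.pi * 3 * f0 * t))

spec = np.abs(np.fft.rfft(x)) * 2.0 / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
amp = lambda f: spec[np.argmin(np.abs(freqs - f))]   # amplitude at nearest bin

A1, A2, A3 = amp(f0), amp(2 * f0), amp(3 * f0)
print(f"beta2' = {A2 / A1**2:.4f}")   # second-order relative nonlinear coefficient
print(f"beta3' = {A3 / A1**3:.4f}")   # third-order relative nonlinear coefficient
```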

  1. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron containing ores of the Ak-Arkhar Deposit of Tajikistan with mineral acids, including hydrochloric acid, was studied. The optimal conditions of extraction of valuable components from the danburite composition were determined. The chemical composition of danburite of the Ak-Arkhar Deposit was determined as well. The kinetics of decomposition of calcined danburite by hydrochloric acid was studied. The apparent activation energy of the process of danburite decomposition by hydrochloric acid was calculated.

  2. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
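
    AGD's contribution is the automated initial guesses; the refinement stage is a conventional multi-Gaussian least-squares fit, as in this sketch with a synthetic two-component spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, c1, w1, a2, c2, w2):
    return (a1 * np.exp(-0.5 * ((v - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((v - c2) / w2) ** 2))

v = np.linspace(-50, 50, 400)                    # velocity axis, km/s (toy)
rng = np.random.default_rng(2)
spectrum = two_gaussians(v, 1.0, -8.0, 4.0, 0.6, 12.0, 7.0) \
           + 0.02 * rng.standard_normal(v.size)

p0 = [0.9, -10.0, 5.0, 0.5, 10.0, 6.0]           # initial guesses (AGD's job)
popt, _ = curve_fit(two_gaussians, v, spectrum, p0=p0)
print("fitted (amp, centre, width) pairs:", np.round(popt, 2))
```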

  3. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes

  4. Simulation of stress-modulated magnetization precession frequency in Heusler-based spin torque oscillator

    International Nuclear Information System (INIS)

    Huang, Houbing; Zhao, Congpeng; Ma, Xingqiao

    2017-01-01

    We investigated stress-modulated magnetization precession frequency in a Heusler-based spin transfer torque oscillator by combining micromagnetic simulations with phase field microelasticity theory, encapsulating the magnetic tunnel junction into multilayer structures. We proposed a novel method of using an external stress to control the magnetization precession in a spin torque oscillator instead of an external magnetic field. The stress-modulated magnetization precession frequency can be linearly modulated by externally applied uniaxial in-plane stress, with a tunable range of 4.4–7.0 GHz under a stress of 10 MPa. By comparison, out-of-plane stress imposes negligible influence on the precession frequency due to the large out-of-plane demagnetization field. The results offer new inspiration for the design of spin torque oscillator devices that simultaneously possess high frequency, narrow output band, and tunability over a wide range of frequencies via external stress. - Highlights: • We proposed stress-modulated magnetization precession in a spin torque oscillator. • The magnetization precession frequency can be linearly modulated by in-plane stress. • The stress can also widen the magnetization frequency range to 4.4–7.0 GHz. • The stress-modulated oscillation frequency can simplify STO devices.

  5. Simulation of stress-modulated magnetization precession frequency in Heusler-based spin torque oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Houbing, E-mail: hbhuang@ustb.edu.cn; Zhao, Congpeng; Ma, Xingqiao, E-mail: xqma@sas.ustb.edu.cn

    2017-03-15

    We investigated stress-modulated magnetization precession frequency in a Heusler-based spin transfer torque oscillator by combining micromagnetic simulations with phase field microelasticity theory, encapsulating the magnetic tunnel junction into multilayer structures. We proposed a novel method of using an external stress to control the magnetization precession in a spin torque oscillator instead of an external magnetic field. The stress-modulated magnetization precession frequency can be linearly modulated by externally applied uniaxial in-plane stress, with a tunable range of 4.4–7.0 GHz under a stress of 10 MPa. By comparison, out-of-plane stress imposes negligible influence on the precession frequency due to the large out-of-plane demagnetization field. The results offer new inspiration for the design of spin torque oscillator devices that simultaneously possess high frequency, narrow output band, and tunability over a wide range of frequencies via external stress. - Highlights: • We proposed stress-modulated magnetization precession in a spin torque oscillator. • The magnetization precession frequency can be linearly modulated by in-plane stress. • The stress can also widen the magnetization frequency range to 4.4–7.0 GHz. • The stress-modulated oscillation frequency can simplify STO devices.

  6. Stress.

    Science.gov (United States)

    Chambers, David W

    2008-01-01

    We all experience stress as a regular, and sometimes damaging and sometimes useful, part of our daily lives. In our normal ups and downs, we have our share of exhaustion, despondency, and outrage--matched with their corresponding positive moods. But burnout and workaholism are different. They are chronic, dysfunctional, self-reinforcing, life-shortening habits. Dentists, nurses, teachers, ministers, social workers, and entertainers are especially susceptible to burnout; not because they are hard-working professionals (they tend to be), but because they are caring perfectionists who share control for the success of what they do with others and perform under the scrutiny of their colleagues (they tend to). Workaholics are also trapped in self-sealing cycles, but the elements are ever-receding visions of control and using constant activity as a barrier against facing reality. This essay explores the symptoms, mechanisms, causes, and successful coping strategies for burnout and workaholism. It also takes a look at the general stress response on the physiological level and at some of the damage American society inflicts on itself.

  7. Separation of hepatic iron and fat by dual-source dual-energy computed tomography based on material decomposition: an animal study.

    Science.gov (United States)

    Ma, Jing; Song, Zhi-Qiang; Yan, Fu-Hua

    2014-01-01

    To explore the feasibility of dual-source dual-energy computed tomography (DSDECT) for hepatic iron and fat separation in vivo. All of the procedures in this study were approved by the Research Animal Resource Center of Shanghai Ruijin Hospital. Sixty rats that underwent DECT scanning were divided into the normal group, fatty liver group, liver iron group, and coexisting liver iron and fat group, according to Prussian blue and HE staining. The data for each group were reconstructed and post-processed by an iron-specific, three-material decomposition algorithm. The iron enhancement value and the virtual non-iron contrast (VNC) value, which indicated overloaded liver iron and residual liver tissue, respectively, were measured. Spearman's correlation and one-way analysis of variance (ANOVA) were performed, respectively, to analyze statistically the correlations with the histopathological results and differences among groups. The iron enhancement values were positively correlated with the iron pathology grading (r = 0.729, p < [...]). The VNC values were negatively correlated with the fat pathology grading (r = -0.642, p < [...]). Significant differences in VNC values (F = 25.308, p < [...]) were only observed between the fat-present and fat-absent groups. Separation of hepatic iron and fat by dual-energy material decomposition in vivo was feasible, even when they coexisted.
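
    The linear algebra behind a three-material decomposition is small: attenuation measured at two tube energies plus a volume-conservation constraint give three equations for the three material fractions. The attenuation values below are illustrative placeholders, not calibrated CT numbers:

```python
import numpy as np

# Rows: low-kVp attenuation, high-kVp attenuation, volume conservation
# Columns: soft tissue, fat, iron (all coefficients assumed)
A = np.array([[60.0, -100.0, 800.0],
              [55.0,  -90.0, 400.0],
              [ 1.0,    1.0,   1.0]])

measured = np.array([52.0, 47.0, 1.0])  # low-kVp value, high-kVp value, unit volume

fractions = np.linalg.solve(A, measured)
print("tissue/fat/iron fractions:", np.round(fractions, 3))
```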

  8. Income-related health inequality of migrant workers in China and its decomposition: An analysis based on the 2012 China Labor-force Dynamics Survey data.

    Science.gov (United States)

    Shao, Cenyi; Meng, Xuehui; Cui, Shichen; Wang, Jingru; Li, Chengcheng

    2016-10-01

    Although migrant workers are a vulnerable group in China, they demonstrably contribute to the country's economic growth and prosperity. This study aimed to describe and assess the inequality of migrant worker health in China and its association with socioeconomic determinants. The data utilized in this study were obtained from the 2012 China Labor-force Dynamics Survey conducted in 29 Chinese provinces. This study converted the self-rated health of these migrant workers into a general cardinal ill-health score. Determinants associated with migrant worker health included age, marital status, income, and education, among other factors. The concentration index, concentration curve, and decomposition of the concentration index were employed to measure socioeconomic inequality in migrant workers' health. Pro-rich inequality was found in the health of migrant workers. The concentration index was -0.0866, with ill health as the score indicator. Decomposition of the concentration index revealed that the factors most contributing to the observed inequality were income, followed by gender, age, marital status, and smoking history. It is generally known that there is an unequal socioeconomic distribution of migrant worker health in China. In order to reduce the health inequality, the government should make a substantial effort to strengthen policy implementation in improving the income distribution for vulnerable groups. The findings warrant further investigation. Copyright © 2016. Published by Elsevier Taiwan LLC.
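
    The concentration index at the heart of such analyses has a convenient covariance form, C = 2 cov(h, r) / mean(h), with r the fractional rank in the income distribution; with an ill-health score, a negative C indicates ill health concentrated among the poor. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
income = rng.lognormal(mean=9.0, sigma=0.6, size=500)
ill_health = 8.0 - 0.5 * np.log(income) + rng.normal(0, 0.3, size=500)  # toy score

order = np.argsort(income)
rank = (np.arange(500) + 0.5) / 500       # fractional rank by income
h = ill_health[order]

C = 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()
print(f"concentration index: {C:.4f}")    # negative: pro-rich health inequality
```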

  9. Deciphering Stress State of Seismogenic Faults in Oklahoma and Kansas Based on High-resolution Stress Maps

    Science.gov (United States)

    Qin, Y.; Chen, X.; Haffener, J.; Trugman, D. T.; Carpenter, B.; Reches, Z.

    2017-12-01

    Induced seismicity in Oklahoma and Kansas delineates clear fault trends. It is assumed that fluid injection reactivates faults which are optimally oriented relative to the regional tectonic stress field. We utilized recently improved earthquake locations and more complete focal mechanism catalogs to quantitatively analyze the stress state of seismogenic faults with high-resolution stress maps. The steps of analysis are: (1) Mapping the faults by clustering seismicity using a nearest-neighbor approach, manually picking the fault in each cluster and calculating the fault geometry using principal component analysis. (2) Running a stress inversion with 0.2° grid spacing to produce an in-situ stress map. (3) The fault stress state is determined from fault geometry and a 3D Mohr circle. The parameter `understress' is calculated to quantify the criticalness of these faults. If it approaches 0, the fault is critically stressed, while understress = 1 means there is no shear stress on the fault. Our results indicate that most of the active faults have a planar shape (planarity > 0.8) and dip steeply (dip > 70°). The fault trends are distributed mainly in conjugate set ranges of [50°, 70°] and [100°, 120°]. More importantly, these conjugate trends are consistent with mapped basement fractures in southern Oklahoma, suggesting similar basement features from regional tectonics. The fault length data show a loglinear relationship with the maximum earthquake magnitude, with an expected maximum magnitude range from 3.2 to 4.4 for most seismogenic faults. Based on the 3D local Mohr circle, we find that 61% of the faults have low understress (< [...]), while faults with high understress (> 0.5) are located within the highest-rate injection zones and therefore are likely to be influenced by high pore pressure. The faults that hosted the largest earthquakes, the M5.7 Prague and M5.8 Pawnee events, are critically stressed (understress < 0.2). These differences may help in understanding earthquake sequences, for example, the predominantly aftershock
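
    Behind such maps is the standard resolution of a stress tensor onto each fault plane. The sketch below computes normal and shear tractions for one fault and forms a Coulomb-style criticality ratio; the specific understress definition here (1 at zero shear stress, 0 at failure) is only one plausible reading of the abstract, and the tensor orientation, friction, and pore pressure are all assumed values.

```python
import numpy as np

# Regional stress tensor, principal axes assumed aligned with
# north/east/down coordinates (MPa, compression positive)
sigma = np.diag([80.0, 60.0, 40.0])

strike, dip = np.radians(60.0), np.radians(75.0)   # fault geometry (assumed)
n = np.array([-np.sin(strike) * np.sin(dip),       # unit normal, x = north,
               np.cos(strike) * np.sin(dip),       # y = east, z = down
              -np.cos(dip)])

t = sigma @ n                                  # traction vector on the plane
sigma_n = float(n @ t)                         # normal stress
tau = float(np.linalg.norm(t - sigma_n * n))   # resolved shear stress

mu, p_f = 0.6, 20.0                            # friction and pore pressure (assumed)
tau_failure = mu * (sigma_n - p_f)             # Coulomb shear strength
understress = 1.0 - tau / tau_failure          # 0: critically stressed, 1: no shear
print(f"sigma_n = {sigma_n:.1f} MPa, tau = {tau:.1f} MPa, "
      f"understress = {understress:.2f}")
```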

  10. Reducing composite restoration polymerization shrinkage stress through resin modified glass-ionomer based adhesives.

    Science.gov (United States)

    Naoum, S J; Mutzelburg, P R; Shumack, T G; Thode, Djg; Martin, F E; Ellakwa, A E

    2015-12-01

    The aim of this study was to determine whether employing resin modified glass-ionomer based adhesives can reduce polymerization contraction stress generated at the interface of restorative composite adhesive systems. Five resin based adhesives (G Bond, Optibond-All-in-One, Optibond-Solo, Optibond-XTR and Scotchbond-Universal) and two resin modified glass-ionomer based adhesives (Riva Bond-LC, Fuji Bond-LC) were analysed. Each adhesive was applied to bond restorative composite Filtek-Z250 to opposing acrylic rods secured within a universal testing machine. Stress developed at the interface of each adhesive-restorative composite system (n = 5) was calculated at 5-minute intervals over 6 hours. The resin based adhesive-restorative composite systems (RBA-RCS) demonstrated similar interface stress profiles over 6 hours; initial rapid contraction stress development (0-300 seconds) followed by continued contraction stress development ≤0.02MPa/s (300 seconds - 6 hours). The interface stress profile of the resin modified glass-ionomer based adhesive-restorative composite systems (RMGIBA-RCS) differed substantially to the RBA-RCS in several ways. Firstly, during 0-300 seconds the rate of contraction stress development at the interface of the RMGIBA-RCS was significantly (p adhesives can significantly reduce the magnitude and rate of polymerization contraction stress developed at the interface of adhesive-restorative composite systems. © 2015 Australian Dental Association.

  11. Effects of mindfulness-based stress reduction on perceived stress and psychological health in patients with tension headache

    Directory of Open Access Journals (Sweden)

    Abdollah Omidi

    2015-01-01

    Background: Programs for improving health status of patients with illness related to pain, such as headache, are often still in their infancy. Mindfulness-based stress reduction (MBSR) is a new psychotherapy that appears to be effective in treating chronic pain and stress. This study evaluated efficacy of MBSR in treatment of perceived stress and mental health of clients who have tension headache. Materials and Methods: This study is a randomized clinical trial. Sixty patients with tension type headache according to the International Headache Classification Subcommittee were randomly assigned to the Treatment As Usual (TAU) group or experimental group (MBSR). The MBSR group received eight weekly classes with 12-min sessions. The sessions were based on MBSR protocol. The Brief Symptom Inventory (BSI) and Perceived Stress Scale (PSS) were administered in the pre- and posttreatment period and at 3 months follow-up for both the groups. Results: The mean of total score of the BSI (global severity index; GSI) in the MBSR group was 1.63 ± 0.56 before the intervention, which was significantly reduced to 0.73 ± 0.46 and 0.93 ± 0.34 after the intervention and at the follow-up sessions, respectively (P < 0.001). In addition, the MBSR group showed lower scores in perceived stress in comparison with the control group at posttest evaluation. The mean of perceived stress before the intervention was 16.96 ± 2.53 and was changed to 12.7 ± 2.69 and 13.5 ± 2.33 after the intervention and at the follow-up sessions, respectively (P < 0.001). On the other hand, the mean of GSI in the TAU group was 1.77 ± 0.50 at pretest, which was changed to 1.59 ± 0.52 and 1.78 ± 0.47 at posttest and follow-up, respectively (P < 0.001). Also, the mean of perceived stress in the TAU group at pretest was 15.9 ± 2.86 and was changed to 16.13 ± 2.44 and 15.76 ± 2.22 at posttest and follow-up, respectively (P < 0.001). Conclusion: MBSR could reduce stress and improve general mental health in patients with tension headache.

  12. Using combinatorial problem decomposition for optimizing plutonium inventory management

    International Nuclear Information System (INIS)

    Niquil, Y.; Gondran, M.; Voskanian, A.; Paris-11 Univ., 91 - Orsay

    1997-03-01

    Plutonium inventory management optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient for such a size. The first decomposition consists in favoring constraints that are the most difficult to satisfy and variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Besides, the first decisions to be taken are systematically favored, for they are based on data considered to be reliable, whereas data supporting later decisions are known with less accuracy and confidence. (author)

  13. Game-based peripheral biofeedback for stress assessment in children.

    Science.gov (United States)

    Pop-Jordanova, Nada; Gucev, Zoran

    2010-06-01

    Peripheral biofeedback is considered to be an efficient method for assessment and stress mitigation in children. The aim of the present study was to assess the levels of stress and stress mitigation in healthy school children (HSC), in children with cystic fibrosis (CF), general anxiety (GA) and attention-deficit-hyperactivity disorder (ADHD). Each investigated group (HSC, CF, GA, ADHD) consisted of 30 school-aged children of both sexes. Psychological characteristics were evaluated on the Eysenck Personality Questionnaire (EPQ). The lie scale was used to determine participant honesty. Four biofeedback games using a pulse detector were applied for assessment of the stress levels as well as to evaluate ability to relax. EPQ found more psychopathological traits (P < [...]). The Magic blocks score was significantly different in relaxation levels between control and CF children (P < [...]). The game Canal was significantly different in relaxation levels between healthy controls and all other groups, but no changes in pulse, as a relaxation measure, were found during the game. The CF group had many more commissions stemming from impulsivity (t = 5.71, P < 0.01), while the GA and ADHD children had more inattention omissions (P < 0.05). A strong negative correlation between age and pulse (r = 0.49, P = 0.003) and a strong negative correlation between age and omissions (r = -0.86, P = 0.029) were found among all groups analyzed. The ability to learn stress mitigation is correlated with age. All three groups of children had significantly lower relaxation levels when compared to healthy controls. Relaxation was more difficult for children with GA or ADHD, and easier for children with CF.

  14. Domain decomposition multigrid for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Yair

    1997-01-01

    A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.

  15. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing

  16. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration

  17. NRSA enzyme decomposition model data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...

  18. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. Space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
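
    For readers unfamiliar with the Schwarz methods the algorithms reduce to, the toy sketch below applies an additive Schwarz iteration to a 1D Poisson problem; the subdomain split, overlap and damping are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

n = 64
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n) / n**2

# Two overlapping index sets decompose the space into subspaces.
subs = [np.arange(0, n//2 + 4), np.arange(n//2 - 4, n)]

x = np.zeros(n)
for _ in range(200):                      # additive Schwarz iteration
    r = b - A @ x
    dx = np.zeros(n)
    for s in subs:                        # local solves are independent,
        dx[s] += np.linalg.solve(A[np.ix_(s, s)], r[s])  # hence parallel
    x += 0.5 * dx                         # damping handles the overlap
print("residual norm:", np.linalg.norm(b - A @ x))
```

    The multiplicative variant would update `x` after each local solve instead of summing the corrections, trading parallelism for a faster convergence rate, which is the gap the "hybrid" algorithms aim to close.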

  19. Racism and Psychological and Emotional Injury: Recognizing and Assessing Race-Based Traumatic Stress

    Science.gov (United States)

    Carter, Robert T.

    2007-01-01

    The purpose of this article is to discuss the psychological and emotional effects of racism on people of Color. Psychological models and research on racism, discrimination, stress, and trauma will be integrated to promote a model to be used to understand, recognize, and assess race-based traumatic stress to aid counseling and psychological…

  20. Investigating role stress in frontline bank employees: A cluster based approach

    Directory of Open Access Journals (Sweden)

    Arti Devi

    2013-09-01

    Full Text Available An effective role stress management programme would benefit from a segmentation of employees based on their experience of role stressors. This study explores role stressor based segments of frontline bank employees with a view to providing a framework for designing such a programme. Cluster analysis on a random sample of 501 frontline employees of commercial banks in Jammu and Kashmir (India) revealed three distinct segments – “overloaded employees”, “unclear employees”, and “underutilised employees” – based on their experience of role stressors. The findings suggest a customised approach to role stress management, with the role stress management programme designed to address cluster-specific needs.
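
    A sketch of this kind of segmentation with scikit-learn's k-means; the three stressor columns and the random data are illustrative stand-ins for the survey scores, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in role-stressor scores for 501 employees; columns could be
# role overload, role ambiguity and role underutilisation (illustrative).
rng = np.random.default_rng(42)
scores = rng.normal(size=(501, 3))

X = StandardScaler().fit_transform(scores)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each centroid shows which stressor dominates a segment, suggesting
# labels such as "overloaded", "unclear" or "underutilised".
print(km.cluster_centers_)
print(np.bincount(km.labels_))            # segment sizes
```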

  1. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of the power generation plan, power grid dispatching, power grid operation and the power supply reliability of the power system. Therefore, it is of great significance to construct a suitable model to realize accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple sub-sequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM) is adopted to forecast the sub-sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN can realize noise reduction for the non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm, improved by introducing a population dynamic evolution operator and a nonlinear convergence factor, to enhance the global search ability and avoid falling into a local optimum, which can better optimize the parameters of the SVM algorithm and improve the forecasting accuracy of the daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector
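
    A compressed sketch of the decompose-forecast-reconstruct pipeline, assuming the third-party PyEMD package (`pip install EMD-signal`) for CEEMDAN and substituting a plain grid search for the paper's modified grey wolf optimizer; the input file name is hypothetical.

```python
import numpy as np
from PyEMD import CEEMDAN                  # assumed third-party package
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def lagged(x, p=7):                        # p past days as features
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

load = np.loadtxt("daily_peak_load.txt")   # hypothetical input series

forecast = 0.0
for imf in CEEMDAN()(load):                # decompose into sub-sequences
    X, y = lagged(imf)
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1]}
    svr = GridSearchCV(SVR(), grid).fit(X, y)      # stand-in for MGWO tuning
    forecast += svr.predict(imf[-7:][None, :])[0]  # reconstruct by summing
print("next-day peak load forecast:", forecast)
```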

  2. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Xike Zhang

    2018-05-01

    Full Text Available Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for the LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other

  3. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five
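
    The PACF step, which selects how many lagged samples feed each LSTM, is simple to reproduce. A sketch using statsmodels, with the usual 95% confidence band as the cut-off (a common convention, not necessarily the authors' choice):

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def significant_lags(series, max_lag=20):
    """Lags whose partial autocorrelation exceeds the approximate
    95% band; candidates for the LSTM input window length."""
    vals = pacf(series, nlags=max_lag)
    band = 1.96 / np.sqrt(len(series))
    return [lag for lag in range(1, max_lag + 1) if abs(vals[lag]) > band]

# e.g., per IMF: n_inputs = max(significant_lags(imf), default=1)
```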

  4. Inhibitory Effect of Dissolved Silica on the H2O2 Decomposition by Iron(III) and Manganese(IV) Oxides: Implications for H2O2-based In Situ Chemical Oxidation

    Science.gov (United States)

    Pham, Anh Le-Tuan; Doyle, Fiona M.; Sedlak, David L.

    2011-01-01

    The decomposition of H2O2 on iron minerals can generate •OH, a strong oxidant that can transform a wide range of contaminants. This reaction is critical to In Situ Chemical Oxidation (ISCO) processes used for soil and groundwater remediation, as well as advanced oxidation processes employed in waste treatment systems. The presence of dissolved silica at concentrations comparable to those encountered in natural waters decreases the reactivity of iron minerals toward H2O2, because silica adsorbs onto the surface of iron minerals and alters catalytic sites. At circumneutral pH values, goethite, amorphous iron oxide, hematite, iron-coated sand and montmorillonite that were pre-equilibrated with 0.05 – 1.5 mM SiO2 were significantly less reactive toward H2O2 decomposition than their original counterparts, with the H2O2 loss rates inversely proportional to the SiO2 concentration. In the goethite/H2O2 system, the overall •OH yield, defined as the percentage of decomposed H2O2 producing •OH, was almost halved in the presence of 1.5 mM SiO2. Dissolved SiO2 also slows the H2O2 decomposition on manganese(IV) oxide. The presence of dissolved SiO2 results in greater persistence of H2O2 in groundwater, lower H2O2 utilization efficiency and should be considered in the design of H2O2-based treatment systems. PMID:22129132
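
    The reported inverse proportionality between H2O2 loss rates and SiO2 concentration can be checked with a one-parameter fit; the rate constants below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Invented pseudo-first-order H2O2 loss rate constants (1/h) for
# goethite pre-equilibrated with dissolved SiO2 (mM).
sio2  = np.array([0.05, 0.25, 0.50, 1.00, 1.50])
k_obs = np.array([2.00, 0.40, 0.20, 0.10, 0.07])

# If k is inversely proportional to [SiO2], then k = a / [SiO2]:
a = np.mean(k_obs * sio2)                 # simple estimate of a
print("a =", a, "; predicted k at 0.75 mM:", a / 0.75)
```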

  5. Differential effects of stress-induced cortisol responses on recollection and familiarity-based recognition memory.

    Science.gov (United States)

    McCullough, Andrew M; Ritchey, Maureen; Ranganath, Charan; Yonelinas, Andrew

    2015-09-01

    Stress-induced changes in cortisol can impact memory in various ways. However, the precise relationship between cortisol and recognition memory is still poorly understood. For instance, there is reason to believe that stress could differentially affect recollection-based memory, which depends on the hippocampus, and familiarity-based recognition, which can be supported by neocortical areas alone. Accordingly, in the current study we examined the effects of stress-related changes in cortisol on the processes underlying recognition memory. Stress was induced with a cold-pressor test after incidental encoding of emotional and neutral pictures, and recollection and familiarity-based recognition memory were measured one day later. The relationship between stress-induced cortisol responses and recollection was non-monotonic, such that subjects with moderate stress-related increases in cortisol had the highest levels of recollection. In contrast, stress-related cortisol responses were linearly related to increases in familiarity. In addition, measures of cortisol taken at the onset of the experiment showed that individuals with higher levels of pre-learning cortisol had lower levels of both recollection and familiarity. The results are consistent with the proposition that hippocampal-dependent memory processes such as recollection function optimally under moderate levels of stress, whereas more cortically-based processes such as familiarity are enhanced even with higher levels of stress. These results indicate that whether post-encoding stress improves or disrupts recognition memory depends on the specific memory process examined as well as the magnitude of the stress-induced cortisol response. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. STRESS TESTS FOR VIDEOSTREAMING SERVICES BASED ON RTSP PROTOCOL

    Directory of Open Access Journals (Sweden)

    Gabriel Elías Chanchí Golondrino

    2015-11-01

    Full Text Available Video-streaming is a technology with major implications these days in such diverse contexts as education, health and the business sector, owing to the ease it provides for remote access to live or recorded media content, allowing communication regardless of geographic location. One standard protocol that enables implementation of this technology is the Real Time Streaming Protocol (RTSP). However, since most application servers and Internet services are supported on HTTP requests, very little research has been done on generating tools for carrying out stress tests on streaming servers. This paper presents a stress measuring tool called Hermes, developed in Python, which allows calculation of response times for establishing RTSP connections to streaming servers, as well as obtaining RAM consumption and CPU usage data from these servers. Hermes was deployed in a video-streaming environment where stress testing was carried out on the LIVE555 server, using calls in the background to the VLC and OpenRTSP open source clients.
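
    The core measurement, timing an RTSP exchange, fits in a few lines of Python. This raw-socket sketch sends an OPTIONS request and is an illustration of the idea only, not Hermes' actual code; the host address is hypothetical.

```python
import socket
import time

def rtsp_response_time(host, port=554, path="stream"):
    """Round-trip time for an RTSP OPTIONS request (illustrative)."""
    req = (f"OPTIONS rtsp://{host}:{port}/{path} RTSP/1.0\r\n"
           "CSeq: 1\r\n\r\n").encode()
    t0 = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(req)
        s.recv(4096)                      # first chunk of the reply
    return time.perf_counter() - t0

# e.g. against a LIVE555 test server (hypothetical address):
# print(rtsp_response_time("192.0.2.10"))
```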

  7. Stress relaxation analysis of single chondrocytes using porohyperelastic model based on AFM experiments

    Directory of Open Access Journals (Sweden)

    Trung Dung Nguyen

    2014-01-01

    Full Text Available Based on an atomic force microscopy technique, we found that chondrocytes exhibit stress relaxation behavior. We explored the mechanism of this behavior and concluded that the intracellular fluid exuding out of the cells during deformation plays the most important role in the stress relaxation. We applied an inverse finite element analysis technique to determine the material parameters needed for a porohyperelastic (PHE) model to simulate the stress relaxation behavior, as this model has proven capable of capturing the non-linear behavior and the fluid-solid interaction during the stress relaxation of single chondrocytes. It is observed that the PHE model can precisely capture the stress relaxation behavior of single chondrocytes and would be a suitable model for cell biomechanics.

  8. Texture-based segmentation with Gabor filters, wavelet and pyramid decompositions for extracting individual surface features from areal surface topography maps

    International Nuclear Information System (INIS)

    Senin, Nicola; Leach, Richard K; Pini, Stefano; Blunt, Liam A

    2015-01-01

    Areal topography segmentation plays a fundamental role in those surface metrology applications concerned with the characterisation of individual topography features. Typical scenarios include the dimensional inspection and verification of micro-structured surface features, and the identification and characterisation of localised defects and other random singularities. While morphological segmentation into hills or dales is the only partitioning operation currently endorsed by the ISO specification standards on surface texture metrology, many other approaches are possible, in particular adapted from the literature on digital image segmentation. In this work an original segmentation approach is introduced and discussed, where topography partitioning is driven by information collected through the application of texture characterisation transforms popular in digital image processing. Gabor filters, wavelets and pyramid decompositions are investigated and applied to a selected set of test cases. The behaviour, performance and limitations of the proposed approach are discussed from the viewpoint of the identification and extraction of individual surface topography features. (paper)

  9. Why Electricity Demand Is Highly Income-Elastic in Spain: A Cross-Country Comparison Based on an Index-Decomposition Analysis

    Directory of Open Access Journals (Sweden)

    Julián Pérez-García

    2017-03-01

    Full Text Available Since 1990, Spain has had one of the highest elasticities of electricity demand in the European Union. We provide an in-depth analysis of the causes of this high elasticity, and we examine how these same causes influence electricity demand in other European countries. To this end, we present an index-decomposition analysis of growth in electricity demand which allows us to identify three key factors in the relationship between gross domestic product (GDP) and electricity demand: (i) structural change; (ii) GDP growth; and (iii) the intensity of electricity use. Our findings show that the main differences in electricity demand elasticities across countries and time are accounted for by the fast convergence in residential per capita electricity consumption. This convergence has almost concluded, and we expect the Spanish energy demand elasticity to converge to European standards in the near future.

  10. Life Cycle Building Carbon Emissions Assessment and Driving Factors Decomposition Analysis Based on LMDI—A Case Study of Wuhan City in China

    Directory of Open Access Journals (Sweden)

    Yuanyuan Gong

    2015-12-01

    Full Text Available Carbon emissions calculation at the sub-provincial level faces issues of limited data and non-unified measurements. This paper calculated the life cycle energy consumption and carbon emissions of the building industry in Wuhan, China. The findings showed that the proportion of carbon emissions in the construction operation phase was the largest, followed by the carbon emissions of indirect energy consumption and of the construction material preparation phase. With the purpose of analyzing the contributors to the construction carbon emissions, this paper conducted a decomposition analysis using the Logarithmic Mean Divisia Index (LMDI). The results indicated that the increasing building area was the major driver of the energy consumption and carbon emissions increase, followed by the behavior factor. Population growth and urbanization, to some extent, increased the carbon emissions as well. On the contrary, energy efficiency was the main inhibitory factor for reducing carbon emissions. Policy implications for low-carbon construction development are highlighted.
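
    The additive LMDI arithmetic behind such driving-factor decompositions is compact; a sketch with invented factor values (building area, energy intensity, emission factor) rather than the Wuhan data:

```python
import numpy as np

def logmean(a, b):
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

def lmdi_additive(x0, x1):
    """Additive LMDI for C = prod(factors): the change C1 - C0 is
    split into one term L(C1, C0) * ln(x1_i / x0_i) per factor."""
    C0, C1 = np.prod(x0), np.prod(x1)
    L = logmean(C1, C0)
    return [L * np.log(b / a) for a, b in zip(x0, x1)]

# Invented factors: (building area, energy intensity, emission factor)
effects = lmdi_additive([100.0, 0.50, 2.0], [130.0, 0.45, 1.9])
print(effects, "sum =", sum(effects))      # sums exactly to C1 - C0
```

    The defining property of the LMDI weights is visible in the last line: the factor effects add up to the total change without a residual term.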

  11. Separation of hepatic iron and fat by dual-source dual-energy computed tomography based on material decomposition: an animal study.

    Directory of Open Access Journals (Sweden)

    Jing Ma

    Full Text Available OBJECTIVE: To explore the feasibility of dual-source dual-energy computed tomography (DSDECT) for hepatic iron and fat separation in vivo. MATERIALS AND METHODS: All of the procedures in this study were approved by the Research Animal Resource Center of Shanghai Ruijin Hospital. Sixty rats that underwent DECT scanning were divided into the normal group, fatty liver group, liver iron group, and coexisting liver iron and fat group, according to Prussian blue and HE staining. The data for each group were reconstructed and post-processed by an iron-specific, three-material decomposition algorithm. The iron enhancement value and the virtual non-iron contrast value, which indicated overloaded liver iron and residual liver tissue, respectively, were measured. Spearman's correlation and one-way analysis of variance (ANOVA) were performed, respectively, to analyze statistically the correlations with the histopathological results and the differences among groups. RESULTS: The iron enhancement values were positively correlated with the iron pathology grading (r = 0.729, p < 0.001). Virtual non-iron contrast (VNC) values were negatively correlated with the fat pathology grading (r = -0.642, p < 0.0001). Different groups showed significantly different iron enhancement values and VNC values (F = 25.308, p < 0.001; F = 10.911, p < 0.001, respectively). Among the groups, significant differences in iron enhancement values were only observed between the iron-present and iron-absent groups, and differences in VNC values were only observed between the fat-present and fat-absent groups. CONCLUSION: Separation of hepatic iron and fat by dual-energy material decomposition in vivo was feasible, even when they coexisted.
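
    Generic three-material decomposition reduces, per voxel, to a small linear system: two energy measurements plus volume conservation. The attenuation values below are placeholders, not calibrated CT numbers, and the algorithm is a textbook version rather than the scanner's iron-specific one.

```python
import numpy as np

# Placeholder attenuations (HU) of pure iron-laden tissue, fat and
# soft tissue at the low and high tube energies (not calibrated data).
M = np.array([[400.0, -100.0, 55.0],      # low-kVp row
              [200.0,  -90.0, 50.0],      # high-kVp row
              [  1.0,    1.0,  1.0]])     # volume fractions sum to 1

def material_fractions(hu_low, hu_high):
    """Per-voxel (iron, fat, tissue) volume fractions."""
    return np.linalg.solve(M, np.array([hu_low, hu_high, 1.0]))

print(material_fractions(120.0, 80.0))
```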

  12. A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Mei-qun Jiang; Pei-liang Dai

    2006-01-01

    A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in R^N (N = 2, 3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented naturally on parallel machines with simple local communications.

  13. Dual decomposition for parsing with non-projective head automata

    OpenAIRE

    Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander

    2010-01-01

    This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...
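
    The dual decomposition recipe generalizes well beyond parsing. The toy below couples two argmax subproblems that must agree on a single choice, with subgradient updates on the multipliers; the scores are invented, and the real algorithm decodes head automata and spanning trees instead.

```python
import numpy as np

f = np.array([1.0, 3.0, 2.0])      # subproblem 1 scores (invented)
g = np.array([2.5, 0.5, 2.0])      # subproblem 2 scores (invented)

u = np.zeros(3)                    # multipliers on the constraint y == z
for t in range(1, 100):
    y = np.eye(3)[np.argmax(f + u)]          # decode subproblem 1
    z = np.eye(3)[np.argmax(g - u)]          # decode subproblem 2
    if np.array_equal(y, z):                 # agreement certifies an
        break                                # exact solution of the LP
    u -= (1.0 / t) * (y - z)                 # subgradient step on the dual
print("agreed on item", int(np.argmax(y)))
```

    When the two decoders agree, the agreement itself is the certificate that the LP relaxation is tight for this instance, which is the "very often tight" behavior the abstract reports empirically.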

  14. Multidimensional Decomposition of the Sen Index: Some Further Thoughts

    OpenAIRE

    Stéphane Mussard; Kuan Xu

    2006-01-01

    Given the multiplicative decomposition of the Sen index into three commonly used poverty statistics – the poverty rate (poverty incidence), poverty gap ratio (poverty depth) and 1 plus the Gini index of poverty gap ratios of the poor (inequality of poverty) – the index becomes much easier to use and to interpret for economists, policy analysts and decision makers. Based on the recent findings on simultaneous subgroup and source decomposition of the Gini index, we examine possible further deco...
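
    The multiplicative decomposition stated here can be computed directly. The sketch below follows that statement (poverty rate x mean poverty gap ratio of the poor x one plus the Gini of those gap ratios), with an invented income vector and poverty line.

```python
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def sen_decomposition(income, z):
    """Three factors of the Sen index as described in the abstract."""
    income = np.asarray(income, dtype=float)
    gaps = (z - income[income < z]) / z    # poverty gap ratios of the poor
    H = gaps.size / income.size            # poverty rate (incidence)
    PG = gaps.mean()                       # poverty depth
    return H, PG, 1.0 + gini(gaps)         # third factor: inequality of poverty

H, PG, G1 = sen_decomposition([3, 5, 6, 9, 12, 20], z=8)
print("Sen index ~", H * PG * G1)          # product of the three factors
```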

  15. Thermal decomposition and reaction of confined explosives

    International Nuclear Information System (INIS)

    Catalano, E.; McGuire, R.; Lee, E.; Wrenn, E.; Ornellas, D.; Walton, J.

    1976-01-01

    Some new experiments designed to accurately determine the time interval required to produce a reactive event in confined explosives subjected to temperatures that cause decomposition are described. Geometry and boundary conditions were both well defined, so that these experiments on the rapid thermal decomposition of HE are amenable to predictive modelling. Experiments have been carried out on TNT, TATB and on two plastic-bonded HMX-based high explosives, LX-04 and LX-10. When the results of these experiments are plotted as the logarithm of the time to explosion versus 1/T (K) (an Arrhenius plot), the curves produced are remarkably linear. This is in contradiction to the results obtained by an iterative solution of the Laplace equation for a system with a first-order-rate heat source; such calculations produce plots which display considerable curvature. The experiments have also shown that the time to explosion is strongly influenced by the void volume in the containment vessel. The experimental results are compared with calculations based on the heat flow equations coupled with first-order models of chemical decomposition. The comparisons demonstrate the need for a more realistic reaction model.
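
    The Arrhenius-plot linearity described here corresponds to a straight-line fit of ln(time) against 1/T; a sketch with invented temperatures and times-to-explosion, not the reported measurements:

```python
import numpy as np

# Invented time-to-explosion data: temperature (K) and time (s).
T = np.array([500.0, 520.0, 540.0, 560.0, 580.0])
t = np.array([9000.0, 2600.0, 850.0, 310.0, 120.0])

# Linear Arrhenius plot: ln(t) = ln(A) + (Ea / R) * (1 / T)
slope, intercept = np.polyfit(1.0 / T, np.log(t), 1)
Ea = slope * 8.314                         # apparent activation energy, J/mol
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol, intercept ln(A) = {intercept:.2f}")
```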

  16. A pilot randomized trial teaching mindfulness-based stress reduction to traumatized youth in foster care.

    Science.gov (United States)

    Jee, Sandra H; Couderc, Jean-Philippe; Swanson, Dena; Gallegos, Autumn; Hilliard, Cammie; Blumkin, Aaron; Cunningham, Kendall; Heinert, Sara

    2015-08-01

    This article presents a pilot project implementing a mindfulness-based stress reduction program among traumatized youth in foster and kinship care over 10 weeks. Forty-two youth participated in this randomized controlled trial that used a mixed-methods (quantitative, qualitative, and physiologic) evaluation. Youth self-report measuring mental health problems, mindfulness, and stress were lower than anticipated, and the relatively short time-frame to teach these skills to traumatized youth may not have been sufficient to capture significant changes in stress as measured by electrocardiograms. Main themes from qualitative data included expressed competence in managing ongoing stress, enhanced self-awareness, and new strategies to manage stress. We share our experiences and recommendations for future research and practice, including focusing efforts on younger youth, and using community-based participatory research principles to promote engagement and co-learning. CLINICALTRIALS.GOV: Protocol Registration System ID NCT01708291. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Coupled stress-strain and electrical resistivity measurements on copper based shape memory single crystals

    Directory of Open Access Journals (Sweden)

    Gonzalez Cezar Henrique

    2004-01-01

    Full Text Available Recently, electrical resistivity (ER) measurements have been made during thermomechanical tests on copper-based shape memory alloys (SMAs). In this work, single crystals of Cu-based SMAs have been studied at different temperatures to analyse the relationship between stress (σ) and ER changes as a function of strain (ε). Good consistency between ER change values is observed in different experiments: thermal martensitic transformation, stress-induced martensitic transformation and stress-induced reorientation of martensite variants. During stress-induced martensitic transformation (superelastic behaviour) and stress-induced reorientation of martensite variants, a linear relationship is obtained between ER and strain, as well as an absence of hysteresis. In conclusion, the present results give direct evidence of martensite electrical resistivity anisotropy.

  18. Real interest parity decomposition

    Directory of Open Access Journals (Sweden)

    Alex Luiz Ferreira

    2009-09-01

    Full Text Available The aim of this paper is to investigate the general causes of real interest rate differentials (rids) for a sample of emerging markets over the period January 1996 to August 2007. To this end, two methods are applied. The first consists of breaking the variance of rids down into relative purchasing power parity and uncovered interest rate parity, and shows that inflation differentials are the main source of rids variation; the second method breaks down the rids and nominal interest rate differentials (nids) into nominal and real shocks. Bivariate autoregressive models are estimated under particular identification conditions, having been adequately treated for the identified structural breaks. Impulse response functions and error variance decomposition point to real shocks as the likely cause of rids.

  19. For whom does mindfulness-based stress reduction work? : An examination of moderating effects of personality

    NARCIS (Netherlands)

    Nyklicek, I.; Irrmischer, M.

    2017-01-01

    The aim of the present study was to examine potentially moderating effects of personality characteristics regarding changes in anxious and depressed mood associated with Mindfulness-Based Stress Reduction (MBSR), controlling for sociodemographic factors. Meditation-naïve participants from the general

  20. Calculation of crack stress density of cement base materials

    Directory of Open Access Journals (Sweden)

    Chun-e Sui

    2018-01-01

    Full Text Available In this paper, the fracture loads of cement pastes with different water-cement ratios and different mineral admixtures, including fly ash, silica fume and slag, are obtained through experiments. The three-dimensional fracture surface is reconstructed and its three-dimensional effective area is calculated, from which the effective fracture stress density of each cement paste is obtained. The results show that a polynomial function can accurately describe the relationship between the three-dimensional total area and the tensile strength.

  1. Evaluating Heavy Metal Stress Levels in Rice Based on Remote Sensing Phenology.

    Science.gov (United States)

    Liu, Tianjiao; Liu, Xiangnan; Liu, Meiling; Wu, Ling

    2018-03-14

    Heavy metal pollution of croplands is a major environmental problem worldwide. Methods for accurately and quickly monitoring heavy metal stress have important practical significance. Many studies have explored heavy metal stress in rice in relation to physiological function or physiological factors, but few studies have considered phenology, which can be sensitive to heavy metal stress. In this study, we used an integrated Normalized Difference Vegetation Index (NDVI) time-series image set to extract remote sensing phenology. A phenological indicator relatively sensitive to heavy metal stress was chosen from the obtained phenological periods and phenological parameters. The Dry Weight of Roots (WRT), which is directly affected by heavy metal stress, was simulated by the World Food Study (WOFOST) model; then, a feature space based on the phenological indicator and WRT was established for monitoring heavy metal stress. The results indicated that the feature space can distinguish heavy metal stress levels in rice, with accuracy greater than 95% for the severe stress level. This finding provides scientific evidence for combining rice phenology and physiological characteristics in time and space, and the method is useful for monitoring heavy metal stress in rice.

  2. Reducing Stress Among Mothers in Drug Treatment: A Description of a Mindfulness Based Parenting Intervention.

    Science.gov (United States)

    Short, Vanessa L; Gannon, Meghan; Weingarten, Wendy; Kaltenbach, Karol; LaNoue, Marianna; Abatemarco, Diane J

    2017-06-01

    Background Parenting women with substance use disorder could potentially benefit from interventions designed to decrease stress and improve overall psychosocial health. In this study we assessed whether a mindfulness based parenting (MBP) intervention could be successful in decreasing general and parenting stress in a population of women who are in treatment for substance use disorder and who have infants or young children. Methods MBP participants (N = 59) attended a two-hour session once a week for 12 weeks. Within-group differences on stress outcome measures administered prior to the beginning of the MBP intervention and following the intervention period were investigated using mixed-effects linear regression models accounting for correlations arising from the repeated-measures. Scales assessed for pre-post change included the Perceived Stress Scale-10 (PSS) and the Parenting Stress Index-Short Form (PSI). Results General stress, as measured by the PSS, decreased significantly from baseline to post-intervention. Women with the highest baseline general stress level experienced the greatest change in total stress score. A significant change also occurred across the Parental Distress PSI subscale. Conclusions Findings from this innovative interventional study suggest that the addition of MBP within treatment programs for parenting women with substance use disorder is an effective strategy for reducing stress within this at risk population.

  3. Brain structure in post-traumatic stress disorder: A voxel-based morphometry analysis.

    Science.gov (United States)

    Tan, Liwen; Zhang, Li; Qi, Rongfeng; Lu, Guangming; Li, Lingjiang; Liu, Jun; Li, Weihui

    2013-09-15

    This study compared the difference in brain structure in 12 mine disaster survivors with chronic post-traumatic stress disorder, 7 cases of improved post-traumatic stress disorder symptoms, and 14 controls who experienced the same mine disaster but did not suffer post-traumatic stress disorder, using the voxel-based morphometry method. The correlation between differences in brain structure and post-traumatic stress disorder symptoms was also investigated. Results showed that the gray matter volume was the highest in the trauma control group, followed by the symptoms-improved group, and the lowest in the chronic post-traumatic stress disorder group. Compared with the symptoms-improved group, the gray matter volume in the lingual gyrus of the right occipital lobe was reduced in the chronic post-traumatic stress disorder group. Compared with the trauma control group, the gray matter volume in the right middle occipital gyrus and left middle frontal gyrus was reduced in the symptoms-improved group. Compared with the trauma control group, the gray matter volume in the left superior parietal lobule and right superior frontal gyrus was reduced in the chronic post-traumatic stress disorder group. The gray matter volume in the left superior parietal lobule was significantly positively correlated with the State-Trait Anxiety Inventory subscale score in the symptoms-improved group and chronic post-traumatic stress disorder group (r = 0.477, P = 0.039). Our findings indicate that (1) chronic post-traumatic stress disorder patients have gray matter structural damage in the prefrontal lobe, occipital lobe, and parietal lobe, (2) after post-traumatic stress, the disorder symptoms are improved and gray matter structural damage is reduced, but cannot recover to the trauma-control level, and (3) the superior parietal lobule is possibly associated with chronic post-traumatic stress disorder. Post-traumatic stress disorder patients exhibit gray matter abnormalities.

  4. Brain structure in post-traumatic stress disorder: A voxel-based morphometry analysis

    Science.gov (United States)

    Tan, Liwen; Zhang, Li; Qi, Rongfeng; Lu, Guangming; Li, Lingjiang; Liu, Jun; Li, Weihui

    2013-01-01

    This study compared the difference in brain structure in 12 mine disaster survivors with chronic post-traumatic stress disorder, 7 cases of improved post-traumatic stress disorder symptoms, and 14 controls who experienced the same mine disaster but did not suffer post-traumatic stress disorder, using the voxel-based morphometry method. The correlation between differences in brain structure and post-traumatic stress disorder symptoms was also investigated. Results showed that the gray matter volume was the highest in the trauma control group, followed by the symptoms-improved group, and the lowest in the chronic post-traumatic stress disorder group. Compared with the symptoms-improved group, the gray matter volume in the lingual gyrus of the right occipital lobe was reduced in the chronic post-traumatic stress disorder group. Compared with the trauma control group, the gray matter volume in the right middle occipital gyrus and left middle frontal gyrus was reduced in the symptoms-improved group. Compared with the trauma control group, the gray matter volume in the left superior parietal lobule and right superior frontal gyrus was reduced in the chronic post-traumatic stress disorder group. The gray matter volume in the left superior parietal lobule was significantly positively correlated with the State-Trait Anxiety Inventory subscale score in the symptoms-improved group and chronic post-traumatic stress disorder group (r = 0.477, P = 0.039). Our findings indicate that (1) chronic post-traumatic stress disorder patients have gray matter structural damage in the prefrontal lobe, occipital lobe, and parietal lobe, (2) after post-traumatic stress, the disorder symptoms are improved and gray matter structural damage is reduced, but cannot recover to the trauma-control level, and (3) the superior parietal lobule is possibly associated with chronic post-traumatic stress disorder. Post-traumatic stress disorder patients exhibit gray matter abnormalities. PMID:25206550

  5. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    Science.gov (United States)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
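
    One way to realize such a decomposition, sketched here from the abstract's description rather than reproduced from the paper, factors the mean conformation tensor and measures fluctuations relative to it:

```latex
% Factor the mean conformation tensor, e.g. by Cholesky:
%   \bar{C} = L L^{T}.
% A positive-definite fluctuation tensor relative to the mean is
\[
  G \;=\; L^{-1}\, C\, L^{-T}, \qquad G = I \iff C = \bar{C},
\]
% and a scalar measure of the fluctuation is the Riemannian distance
\[
  d\bigl(C,\bar{C}\bigr) \;=\; \lVert \log G \rVert_F
  \;=\; \Bigl(\sum_i \log^{2}\lambda_i(G)\Bigr)^{1/2}.
\]
```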

  6. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. Some new techniques based on symmetry considerations are introduced, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here these techniques are utilized to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli and Euler polynomials, along with the appropriate dual states.

  7. Primary decomposition of zero-dimensional ideals over finite fields

    Science.gov (United States)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.

  8. Gas hydrates forming and decomposition conditions analysis

    Directory of Open Access Journals (Sweden)

    А. М. Павленко

    2017-07-01

    Full Text Available The concept of gas hydrates has been defined and a brief description given; factors that affect the formation and decomposition of the hydrates have been reported; and their distribution, structure and the thermodynamic conditions determining gas hydrate formation in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs in pipelines have been analyzed, and the necessity of their further study has been shown. In addition to their negative impact on the process of gas extraction, the properties of hydrates make it possible to outline the following possible fields of industrial use: obtaining ultrahigh pressures in confined spaces at hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate in a given mode; obtaining cold due to heat absorption at hydrate decomposition; elimination of an open gas fountain by means of hydrate plugs in the borehole of a gushing gas well; seawater desalination, based on the hydrate's ability to bind only water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high-temperature fog and clouds by means of hydrates; water-hydrate emulsion injection into the productive strata to raise the oil recovery factor; obtaining cold in gas processing to cool the gas, etc.

  9. Evaluation of Tire/Surfacing/Base Contact Stresses and Texture Depth

    Directory of Open Access Journals (Sweden)

    W.J.vdM. Steyn

    2015-03-01

    Full Text Available Tire rolling resistance has a major impact on vehicle fuel consumption. Rolling resistance is the loss of energy due to the interaction between the tire and the pavement surface. This interaction is a complicated combination of stresses and strains which depend on both tire- and pavement-related factors, including vehicle speed, vehicle weight, tire material and type, road camber, tire inflation pressure and pavement surfacing texture. In this paper the relationship between pavement surface texture depth and tire/surfacing contact stress and area is investigated. Texture depth and tire/surfacing contact stress were measured for a range of tire inflation pressures on five different pavement surfaces. In the analysis, the relationships between texture and the generated contact stresses, as well as the contact stress between the surfacing and base layer, are presented and discussed, together with the anticipated effect of these relationships on the rolling resistance of vehicles on the surfacings and thus on vehicle fuel economy.

  10. Investigation of the Residual Stress State in an Epoxy Based Specimen

    DEFF Research Database (Denmark)

    Baran, Ismet; Jakobsen, Johnny; Andreasen, Jens Henrik

    2015-01-01

    Process-induced residual stresses may play an important role under service loading conditions for fiber reinforced composites. They may initiate premature cracks and alter the internal stress level. Therefore, the developed numerical models have to be validated against experimental observations. In the present work, the formation of the residual stresses/strains is captured by experimental measurements and numerical models. An epoxy/steel based sample configuration is considered which creates an in-plane biaxial stress state during curing of the resin. A hole drilling process... material models, i.e. cure kinetics, elastic modulus, CTE, chemical shrinkage, etc., together with the drilling process, are simulated using the finite element method. The measured and predicted in-plane residual strain states are compared for the epoxy/metal biaxial stress specimen.

  11. Stress-based topology optimization of concrete structures with prestressing reinforcements

    Science.gov (United States)

    Luo, Yangjun; Wang, Michael Yu; Deng, Zichen

    2013-11-01

    Following the extended two-material density penalization scheme, a stress-based topology optimization method for the layout design of prestressed concrete structures is proposed. The Drucker-Prager yield criterion is used to predict the asymmetrical strength failure of concrete. The prestress is considered by making a reasonable assumption on the prestressing orientation in each element and adding an additional load vector to the structural equilibrium function. The proposed optimization model is thus formulated as minimizing the reinforcement material volume under Drucker-Prager yield constraints on elemental concrete local stresses. In order to give a reasonable definition of concrete local stress and prevent the stress singularity phenomenon, the local stress interpolation function and the ε-relaxation technique are adopted. The topology optimization problem is solved using the method of moving asymptotes combined with an active set strategy. Numerical examples are given to show the efficiency of the proposed optimization method in the layout design of prestressed concrete structures.

  12. Evaluation of the Factors of Russian Regions’ Convergence / Divergence in the Level of Budget Provision Based on the Decomposition of the Theil - Bernoulli Index

    Directory of Open Access Journals (Sweden)

    Marina Yuryevna Malkina

    2016-09-01

    Full Text Available The study focuses on the Russian regions' disparities in the level of budget expenditures per capita and their dynamics. The paper assesses the contribution of the main factors and their correlation, as well as the stages of the budget process, to the regional imbalances in the public sector. The author also presents regions' budget expenditures per capita in the form of a five-factor multiplicative model which at the same time reflects the sequence of the stages of the budget process. To estimate regions' inequality in budget expenditures and other related variables, the researcher employs the Theil–Bernoulli index, which is sensitive to excessive poverty. Its decomposition, made on the basis of the Duro and Esteban technique, allows evaluation of the structure of inter-regional disparities in the public sector. The results include the following: (1) static assessments of the factors' contributions to the regions' convergence in budget expenditure per capita at the stages of GRP production, receipt and distribution of taxes among levels of the budget system, and the stages of attraction of inter-budgetary support and budget deficit financing; (2) dynamic assessments of the factors' contributions to the regions' convergence / divergence in the level of budgetary expenditure per capita over 9 years. The findings may be useful in optimizing the policy of inter-budgetary equalization in Russia.
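
    As a simplified analogue of the decomposition used in the study, the sketch below splits the ordinary Theil T index of per-capita budget expenditures into between-group and within-group terms; the two "regions" and their values are invented.

```python
import numpy as np

def theil(x):
    s = np.asarray(x, dtype=float) / np.mean(x)
    return float(np.mean(s * np.log(s)))

def between_within(groups):
    """T = sum_g w_g ln(mu_g/mu) + sum_g w_g T_g, w_g = (n_g/n)(mu_g/mu)."""
    allx = np.concatenate(groups)
    mu, n = allx.mean(), allx.size
    w = [g.size / n * g.mean() / mu for g in groups]
    between = sum(wg * np.log(g.mean() / mu) for wg, g in zip(w, groups))
    within = sum(wg * theil(g) for wg, g in zip(w, groups))
    return between, within

g1 = np.array([40.0, 45.0, 50.0])          # invented per-capita budget
g2 = np.array([20.0, 25.0, 90.0])          # expenditures of two regions
print(between_within([g1, g2]), theil(np.concatenate([g1, g2])))
```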

  13. A new decomposition-based computer-aided molecular/mixture design methodology for the design of optimal solvents and solvent mixtures

    DEFF Research Database (Denmark)

    Karunanithi, A.T.; Achenie, L.E.K.; Gani, Rafiqul

    2005-01-01

    This paper presents a novel computer-aided molecular/mixture design (CAMD) methodology for the design of optimal solvents and solvent mixtures. The molecular/mixture design problem is formulated as a mixed integer nonlinear programming (MINLP) model in which a performance objective is to be optimized subject to structural, property, and process constraints. The general molecular/mixture design problem is divided into two parts. For optimal single-compound design, the first part is solved. For mixture design, the single-compound design is first carried out to identify candidates and then the second part is solved to determine the optimal mixture. The decomposition of the CAMD MINLP model into relatively easy to solve subproblems is essentially a partitioning of the constraints from the original set. This approach is illustrated through two case studies. The first case study involves

  14. Brain regions engaged by part- and whole-task performance in a video game: a model-based test of the decomposition hypothesis.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Anderson, Abraham R; Poole, Ben; Qin, Yulin

    2011-12-01

    Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions were confirmed about how tonic and phasic activation in different regions would vary with condition. These results support the Decomposition Hypothesis that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed highest tonic activity. This individual difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits.

  15. A comparative study on stress and compliance based structural topology optimization

    Science.gov (United States)

    Hailu Shimels, G.; Dereje Engida, W.; Fakhruldin Mohd, H.

    2017-10-01

    Most structural topology optimization problems are formulated and solved either to minimize compliance under a volume constraint or to minimize weight under stress constraints. Although much research has been conducted on these two formulation techniques separately, there is no clear comparative study between the two approaches. This paper compares these formulation techniques so that an end user or designer can choose the best one for the problem at hand. Benchmark problems under the same boundary and loading conditions are defined, solved and compared under both formulations. Simulation results show that the behavior of the two formulation techniques depends on the type of loading and boundary conditions defined. The maximum stress induced in the design domain is higher when the problem is formulated using compliance-based formulations. Optimal layouts from compliance minimization have more complex geometry than stress-based ones, which may make manufacturing the optimal layouts challenging. Optimal layouts from compliance-based formulations depend on the material to be distributed, whereas optimal layouts from stress-based formulations depend on the type of material used to define the design domain. The high computational time of stress-based topology optimization remains a challenge because the stress constraints are defined at the element level. The results also show that adjusting the convergence criteria can be an alternative way to reduce the maximum stress developed in optimal layouts. Therefore, a designer or end user should choose the formulation based on the design domain and the boundary conditions considered.

  16. A new sensor for stress measurement based on blood flow fluctuations

    Science.gov (United States)

    Fine, I.; Kaminsky, A. V.; Shenkman, L.

    2016-03-01

    It is widely recognized that effective stress management could have a dramatic impact on health care and preventive medicine. In order to meet this need, efficient and seamless sensing and analytic tools for non-invasive stress monitoring during daily life are required. Existing sensors still do not meet the needs in terms of specificity and robustness. We utilized a miniaturized dynamic light scattering sensor (mDLS) which is specially adjusted to measure skin blood flow fluctuations and provides multi-parametric capabilities. Based on the measured dynamic light scattering signal from the red blood cells flowing in the skin, a new concept of hemodynamic indexes (HI) and oscillatory hemodynamic indexes (OHI) has been developed. This approach was utilized for stress level assessment in a few use-case scenarios. A new stress index was generated from the HI and OHI parameters. In order to validate this new non-invasive stress index, a group of 19 healthy volunteers was studied with the mDLS sensor located on the wrist. Mental stress was induced using the cognitive dissonance test of Stroop. We found that the OHI indexes have high sensitivity to the mental stress response for most of the tested subjects. In addition, we examined the capability of using this new stress index for individual monitoring of the diurnal stress level. We found that the new stress index exhibits trends similar to the well-known diurnal behavior of cortisol levels. Finally, we demonstrated that this new marker provides good sensitivity and specificity for the stress response to sound and musical emotional arousal.

  17. ASME stress linearization and classification - a discussion based on a case study

    International Nuclear Information System (INIS)

    Miranda, Carlos A. de J.; Faloppa, Altair A.; Mattar Neto, Miguel; Fainer, Gerson

    2011-01-01

    The ASME code, especially in its nuclear division (Subsection NB - Class I Components), gives the structural analyst recommendations on how to perform the verifications required to demonstrate, by analysis, that the design is protected against the relevant failure modes. Each of these failure modes has specific stress limits, established on the basis of simple but conservative hypotheses such as perfectly plastic material behavior and shell theory, with its typical membrane and bending stresses distributed linearly through the thickness. Another detail to keep in mind is the code's distinction between primary and secondary stresses (stresses required for equilibrium and stresses arising from displacement compatibility, respectively). In general, the numerical models used in the analyses are built with plane or 3-D solid elements, so no direct comparison with the code limits can be made; moreover, the programs do not distinguish between primary and secondary stresses. The latter are mostly produced by temperature variations, but they also appear near discontinuities, and sometimes their classification is not clear or direct. To perform the required ASME code verifications, the analyst must obtain the membrane and bending stresses from the plane or 3-D model, a procedure called stress linearization, and must also classify them as primary or secondary. (The excess of the maximum stress at a point over the sum of these linearized values is called the peak stress and enters the fatigue verification.) Most of the time this task is not simple, owing to the nature of the loads involved and/or the complexity of the geometry under analysis; indeed, several studies discuss how to perform this stress classification and linearization. The present paper discusses how to perform these verifications for a generic geometry found in many plants, from petrochemical to nuclear, and emphasizes some of these issues. (author)
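
    The membrane/bending split described above follows the standard through-thickness integrals, sigma_m = (1/t) * integral of sigma(x) dx and sigma_b = (6/t^2) * integral of sigma(x) * (t/2 - x) dx, over a stress classification line of thickness t. A minimal sketch is given below; the wall thickness and the synthetic stress profile are assumed values, standing in for results extracted from a finite element model.

```python
# Through-thickness stress linearization along a stress classification line
# (SCL): membrane average, equivalent linear bending, and surface peaks.
import numpy as np
from scipy.integrate import trapezoid

t = 0.02                                   # wall thickness [m] (assumed)
x = np.linspace(0.0, t, 51)                # positions across the SCL
sigma = 120e6 - 9e9 * x + 4e11 * x**2      # synthetic stress profile [Pa]

sigma_m = trapezoid(sigma, x) / t                           # membrane
sigma_b = 6.0 / t**2 * trapezoid(sigma * (t / 2 - x), x)    # bending

# Peak stress at each surface: what remains above the linearized
# distribution sigma_m +/- sigma_b.
peak_inner = sigma[0] - (sigma_m + sigma_b)
peak_outer = sigma[-1] - (sigma_m - sigma_b)

print(f"membrane {sigma_m/1e6:.1f} MPa, bending {sigma_b/1e6:.1f} MPa")
print(f"peaks: inner {peak_inner/1e6:.1f} MPa, outer {peak_outer/1e6:.1f} MPa")
```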

  19. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types, and the existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987), and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  20. A Study on the Job Stress Assessment in Korean Nuclear Power Plants based on KOSS

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seong Hwan; Lee, Yong Hee; Lee, Jung Woon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Sung, Sook Hee; Jung, Kwang Hee; Jung, Yeon Sub [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2010-05-15

    Job stress is a harmful physical and emotional response that occurs when there is a poor match between job demands and the capabilities, resources, or needs of the worker. Stress-related disorders encompass a broad array of conditions, including psychological disorders (e.g., depression, anxiety, post-traumatic stress disorder), other types of emotional strain (e.g., dissatisfaction, fatigue, tension), maladaptive behaviors (e.g., aggression, substance abuse), and cognitive impairment (e.g., concentration and memory problems). In turn, these conditions may lead to poor work performance or even injury. Job stress is also associated with various biological reactions that may ultimately compromise health, such as cardiovascular disease or, in extreme cases, death. In Korea, organizational job stress factors have been investigated for the procedure-based jobs in nuclear power plants. In particular, the Korean Occupational Stress Scale (KOSS) was developed; it comprises eight subscales, derived through factor analysis and a validation process, to measure stress at work and to identify ways of preventing stressors. From this point of view, the RHRI (Radiation Health Research Institute) of KHNP (Korea Hydro and Nuclear Power) assessed how well-suited employees were to their jobs during their health examination in 2009. In this study, the present condition of the employees' stress levels is investigated to find a way to manage their stressors.
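
    As a hedged illustration of KOSS-style scoring, the sketch below converts raw 1-4 Likert responses into the 0-100 subscale scores commonly reported for the instrument. The per-subscale item counts follow the published 43-item KOSS, but both they and the example responses should be treated as assumptions here, not data from this study.

```python
# Hedged sketch of KOSS subscale scoring: items are rated on a 1-4 Likert
# scale, and each subscale is rescaled to 0-100. Item counts are assumed
# from the published 43-item KOSS, not taken from this study.
SUBSCALE_ITEMS = {
    "physical environment": 3, "job demand": 8, "insufficient job control": 5,
    "interpersonal conflict": 4, "job insecurity": 6,
    "organizational system": 7, "lack of reward": 6, "occupational climate": 4,
}

def koss_subscale_score(responses):
    """Convert raw 1-4 Likert responses of one subscale to a 0-100 score."""
    n = len(responses)
    return (sum(responses) - n) / (3 * n) * 100.0

def koss_total(subscale_scores):
    """Total score as the mean of the eight converted subscale scores."""
    return sum(subscale_scores) / len(subscale_scores)

# Example: one worker's (made-up) responses for the "job demand" subscale.
demand = [3, 2, 4, 3, 3, 2, 3, 4]
assert len(demand) == SUBSCALE_ITEMS["job demand"]
print(f"job demand score: {koss_subscale_score(demand):.1f} / 100")
```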