WorldWideScience

Sample records for empirical mode decomposition

  1. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    Science.gov (United States)

    2009-01-01

    Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
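
    For readers who want to see the mechanics behind the decomposition discussed throughout this listing, the following is a minimal, hedged one-dimensional sifting sketch in Python. It uses cubic-spline envelopes and a simple normalised-change stopping rule; the helper names (_envelope_mean, sift, emd) and the tolerance values are illustrative, not taken from any of the cited works.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelextrema

        def _envelope_mean(x):
            """Mean of the upper and lower cubic-spline envelopes of x."""
            t = np.arange(len(x))
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 3 or len(minima) < 3:
                return None  # too few extrema: x is a residue, not an IMF
            upper = CubicSpline(maxima, x[maxima])(t)
            lower = CubicSpline(minima, x[minima])(t)
            return 0.5 * (upper + lower)

        def sift(x, sd_tol=0.2, max_iter=50):
            """Extract one intrinsic mode function (IMF) by iterative sifting."""
            h = np.asarray(x, float).copy()
            for _ in range(max_iter):
                m = _envelope_mean(h)
                if m is None:
                    return None
                h_new = h - m
                # simple normalised-change stopping rule
                change = np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12)
                h = h_new
                if change < sd_tol:
                    break
            return h

        def emd(x, max_imfs=10):
            """Decompose x into a list of IMFs followed by the residue."""
            imfs, residue = [], np.asarray(x, float).copy()
            for _ in range(max_imfs):
                imf = sift(residue)
                if imf is None:
                    break
                imfs.append(imf)
                residue = residue - imf
            return imfs + [residue]

    Boundary treatment, which several records below identify as a known weakness of EMD, is deliberately left out of this sketch.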

  2. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied to the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of the IMFs are efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators in identifying differences in standing posture between groups.
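
    The circle-area and rotation-frequency indicators described above can be approximated from a single real-valued IMF via the Hilbert transform (the univariate route). The sketch below is one plausible reading of those indicators, using the mean squared radius as a proxy for the enclosed area and the mean phase derivative as the rotation frequency; the function name and parameters are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        def rotation_indicators(imf, fs):
            """Postural-stability indicators from one real-valued IMF.

            The analytic IMF traces a roughly circular orbit in the complex plane;
            the mean squared radius (a proxy for the enclosed area) and the average
            rotation frequency of that orbit are returned.
            """
            z = hilbert(imf)                                         # analytic IMF
            phase = np.unwrap(np.angle(z))
            rot_freq = np.mean(np.diff(phase)) * fs / (2 * np.pi)    # Hz
            area = np.pi * np.mean(np.abs(z) ** 2)                   # pi * <r^2>
            return area, rot_freq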

  3. Palm vein recognition based on directional empirical mode decomposition

    Science.gov (United States)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to coarse scale. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method proceeds in three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  4. Improved Empirical Mode Decomposition Algorithm of Processing Complex Signal for IoT Application

    OpenAIRE

    Yang, Xianzhao; Cheng, Gengguo; Liu, Huikang

    2015-01-01

    The Hilbert-Huang transform (HHT) is widely used in signal analysis. However, because the maxima and minima of a signal cannot be reliably estimated at the two ends of the record, traditional HHT tends to produce boundary errors in the empirical mode decomposition (EMD) process. To overcome this deficiency, this paper proposes an enhanced empirical mode decomposition algorithm for processing complex signals. Our work mainly focuses on two aspects. On one hand, we develop a technique to obt...

  5. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    Science.gov (United States)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem in the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.
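
    The complementary-ensemble idea that the modified method above builds on can be sketched as follows: white noise and its negative are added in pairs, each noisy copy is decomposed, and same-indexed IMFs are averaged over the ensemble. An emd() routine is assumed (for example the sifting sketch given earlier in this listing); the ensemble size and noise level are illustrative, not the values used in the paper.

        import numpy as np

        def ceemd(x, emd, ensemble_size=50, noise_std=0.2, seed=0):
            """Complementary-ensemble sketch: average IMFs over +/- noise pairs."""
            x = np.asarray(x, float)
            rng = np.random.default_rng(seed)
            all_runs = []
            for _ in range(ensemble_size):
                w = rng.normal(0.0, noise_std * np.std(x), len(x))
                for noisy in (x + w, x - w):          # complementary noise pair
                    all_runs.append(emd(noisy))
            n_modes = min(len(run) for run in all_runs)   # align to a common mode count
            return [np.mean([run[k] for run in all_runs], axis=0)
                    for k in range(n_modes)]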

  6. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    The wind power industry plays an important role in promoting the development of the low-carbon economy and the energy transformation of the world. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing stable wind power integration and to guaranteeing the safe operation of the power system. In this paper, combining Empirical Mode Decomposition (EMD), the Radial Basis Function neural network (RBF) and the Least Squares Support Vector Machine (LS-SVM), an improved wind speed prediction model (EMD-RBF-LS-SVM) is proposed. The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken the random fluctuation to a certain extent and significantly improve the short-term accuracy of wind speed prediction. In summary, this research will reduce the impact of wind power instability on the power grid, help ensure the balance of power supply and demand, reduce the operating costs of grid-connected systems, and enhance the market competitiveness of wind power.

  7. Noise reduction in digital speckle pattern interferometry using bidimensional empirical mode decomposition

    International Nuclear Information System (INIS)

    Bernini, Maria Belen; Federico, Alejandro; Kaufmann, Guillermo H.

    2008-01-01

    We propose a bidimensional empirical mode decomposition (BEMD) method to reduce speckle noise in digital speckle pattern interferometry (DSPI) fringes. The BEMD method is based on a sifting process that decomposes the DSPI fringes into a finite set of subimages represented by high- and low-frequency oscillations, which are named modes. The sifting process assigns the high-frequency information to the first modes, so that it is possible to discriminate speckle noise from fringe information, which is contained in the remaining modes. The proposed method is a fully data-driven technique; therefore, neither fixed basis functions nor operator intervention is required. The performance of the BEMD method in denoising DSPI fringes is analyzed using computer-simulated data, and the results are also compared with those obtained by means of a previously developed one-dimensional empirical mode decomposition approach. An application of the proposed BEMD method to denoise experimental fringes is also presented.

  8. Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition

    OpenAIRE

    Wang, Fu-Tai; Chan, Hsiao-Lung; Wang, Chun-Li; Jian, Hung-Ming; Lin, Sheng-Hsiung

    2015-01-01

    Impedance plethysmography provides a way to measure respiratory activity by sensing the change of thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. The empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclos...

  9. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of radiographic femur bone images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using radial basis function (multiquadric) and hierarchical B-spline interpolation techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations in femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  10. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding the input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs of multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales across multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.

  11. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition.

    Science.gov (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng

    2017-10-02

    Fringe-based optical measurement techniques require reliable fringe analysis methods, among which empirical mode decomposition (EMD) stands out due to its ability to analyze complex signals and the merit of being data-driven. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), which leaves the decomposed intrinsic mode functions (IMFs) with equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise exist in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; the properties of the resulting BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated, with the latter providing feedback to the former, helping to stop the decomposition automatically and making the algorithm simpler and more reliable. A series of experiments shows that the proposed method is accurate, efficient and robust to various fringe patterns, even those of poor quality, rendering it a potential tool for practical use.

  12. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    Science.gov (United States)

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD-based algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with conventionally applied low-pass filtering confirmed the effectiveness of the technique in tissue artifact removal.
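
    A rough stand-in for the mutual-information and power criteria described above is sketched below: IMFs are kept only if their histogram-based mutual information with the raw signal and their share of the total power both exceed thresholds. The thresholds, bin count and function names are illustrative assumptions, not the values used in the paper.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def select_respiratory_imfs(x, imfs, mi_frac=0.1, power_frac=0.05, bins=32):
            """Keep IMFs sharing enough information and power with the raw signal."""
            def discretise(s):
                edges = np.histogram_bin_edges(s, bins=bins)
                return np.digitize(s, edges[1:-1])

            x_d = discretise(x)
            self_info = mutual_info_score(x_d, x_d)     # entropy of x, used for scaling
            total_power = np.sum(np.asarray(x) ** 2)
            kept = []
            for imf in imfs:
                mi = mutual_info_score(x_d, discretise(imf))
                power = np.sum(imf ** 2)
                if mi > mi_frac * self_info and power > power_frac * total_power:
                    kept.append(imf)
            return np.sum(kept, axis=0) if kept else np.zeros_like(x)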

  13. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    Science.gov (United States)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, for example by enhancing, adding or estimating the noise, so as to improve the signal-to-noise ratio (SNR) and extract the fault signatures. Among these methods, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and limited capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case and is particularly effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and the appropriate singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increasing trend of normalized singular entropy. Furthermore, the noise estimation strategy, i.e. how to select between the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm the capability of noise estimation. Finally, the method is applied to detect the local wear fault

  14. Minimizing the trend effect on detrended cross-correlation analysis with empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhao Xiaojun; Shang Pengjian; Zhao Chuang; Wang Jing; Tao Rui

    2012-01-01

    Highlights: ► Investigate the effects of linear, exponential and periodic trends on DCCA. ► Apply empirical mode decomposition to extract the trend term. ► Strong and monotonic trends are successfully eliminated. ► The cross-correlation exponent shows persistent behavior without crossover. - Abstract: Detrended cross-correlation analysis (DCCA) is a scaling method commonly used to estimate long-range power-law cross-correlation in non-stationary signals. However, the susceptibility of DCCA to trends makes the scaling results difficult to analyze due to spurious crossovers. We artificially generate long-range cross-correlated signals and systematically investigate the effects of linear, exponential and periodic trends. To address the crossovers caused by trends, we apply the empirical mode decomposition method, which decomposes the underlying signals into several intrinsic mode functions (IMFs) and a residual trend. After removal of the residual term, strong and monotonic trends such as linear and exponential trends are successfully eliminated, but the periodic trend cannot be separated out according to the IMF criterion; it can instead be eliminated by the Fourier transform. As a special case of DCCA, detrended fluctuation analysis presents similar results.
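
    The detrending step described above, followed by the DCCA fluctuation function, can be sketched as follows. The emd() routine is assumed to return IMFs with the residue last (as in the sifting sketch earlier in this listing); local linear detrending inside each box is the standard DCCA choice, and the function names are illustrative.

        import numpy as np

        def emd_detrend(x, emd):
            """Subtract the EMD residue (the monotonic trend term) from a series."""
            return np.asarray(x, float) - emd(x)[-1]   # residue assumed returned last

        def dcca_fluctuation(x, y, scale):
            """Detrended cross-covariance fluctuation F(scale) for two series."""
            X = np.cumsum(x - np.mean(x))
            Y = np.cumsum(y - np.mean(y))
            f2 = []
            for b in range(len(X) // scale):
                idx = np.arange(b * scale, (b + 1) * scale)
                px = np.polyval(np.polyfit(idx, X[idx], 1), idx)   # local linear trends
                py = np.polyval(np.polyfit(idx, Y[idx], 1), idx)
                f2.append(np.mean((X[idx] - px) * (Y[idx] - py)))
            return np.sqrt(np.abs(np.mean(f2)))

    Plotting dcca_fluctuation against scale on log-log axes gives the cross-correlation exponent; applying emd_detrend to both series first removes the monotonic trend responsible for spurious crossovers.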

  15. Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.

    Science.gov (United States)

    Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun

    2009-05-01

    Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.

  16. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of empirical mode decomposition (EMD) is extended to color images. In addition, this paper deals with low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure that is proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD- and DWT-based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.

  17. Investigating complex patterns of blocked intestinal artery blood pressure signals by empirical mode decomposition and linguistic analysis

    International Nuclear Information System (INIS)

    Yeh, J-R; Lin, T-Y; Shieh, J-S; Chen, Y; Huang, N E; Wu, Z; Peng, C-K

    2008-01-01

    In this investigation, surgical operations blocking the intestinal artery were conducted on pigs to simulate the condition of acute mesenteric arterial occlusion. The empirical mode decomposition method and an algorithm of linguistic analysis were applied to analyze the blood pressure signals in this simulated situation. We assumed that there was information hidden in the high-frequency part of the blood pressure signal when an intestinal artery is blocked. The empirical mode decomposition method (EMD) was applied to decompose the intrinsic mode functions (IMFs) from a complex time series. However, end effects and the phenomenon of intermittency damage the consistency of each IMF. Thus, we proposed the complementary ensemble empirical mode decomposition method (CEEMD) to solve the problems of end effects and intermittency. The main wave of the blood pressure signals can be reconstructed from the main components, identified by Monte Carlo verification, and removed from the original signal to derive a riding wave. Furthermore, the concept of linguistic analysis was applied to design a blocking index that characterizes the pattern of the riding wave of blood pressure using measurements of dissimilarity. The blocking index works well to identify the situation in which the sampled time series of the blood pressure signal was recorded. Here, these two quite different algorithms are successfully integrated, and the existence of information hidden in the high-frequency part of the blood pressure signal has been proven.

  18. Investigating complex patterns of blocked intestinal artery blood pressure signals by empirical mode decomposition and linguistic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, J-R; Lin, T-Y; Shieh, J-S [Department of Mechanical Engineering, Yuan Ze University, 135 Far-East Road, Chung-Li, Taoyuan, Taiwan (China); Chen, Y [Far Eastern Memorial Hospital, Taiwan (China); Huang, N E [Research Center for Adaptive Data Analysis, National Central University, Taiwan (China); Wu, Z [Center for Ocean-Land-Atmosphere Studies (United States); Peng, C-K [Beth Israel Deaconess Medical Center, Harvard Medical School (United States)], E-mail: s939205@mail.yzu.edu.tw

    2008-02-15

    In this investigation, surgical operations blocking the intestinal artery were conducted on pigs to simulate the condition of acute mesenteric arterial occlusion. The empirical mode decomposition method and an algorithm of linguistic analysis were applied to analyze the blood pressure signals in this simulated situation. We assumed that there was information hidden in the high-frequency part of the blood pressure signal when an intestinal artery is blocked. The empirical mode decomposition method (EMD) was applied to decompose the intrinsic mode functions (IMFs) from a complex time series. However, end effects and the phenomenon of intermittency damage the consistency of each IMF. Thus, we proposed the complementary ensemble empirical mode decomposition method (CEEMD) to solve the problems of end effects and intermittency. The main wave of the blood pressure signals can be reconstructed from the main components, identified by Monte Carlo verification, and removed from the original signal to derive a riding wave. Furthermore, the concept of linguistic analysis was applied to design a blocking index that characterizes the pattern of the riding wave of blood pressure using measurements of dissimilarity. The blocking index works well to identify the situation in which the sampled time series of the blood pressure signal was recorded. Here, these two quite different algorithms are successfully integrated, and the existence of information hidden in the high-frequency part of the blood pressure signal has been proven.

  19. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    OpenAIRE

    Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi

    2017-01-01

    Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet meth...

  20. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmark models. Empirical results show that the proposed model produces better forecasts than the benchmark models.
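
    The two evaluation criteria used above have standard definitions, reproduced here for reference; the function names are illustrative.

        import numpy as np

        def rmse(actual, forecast):
            """Root mean square error."""
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return np.sqrt(np.mean((actual - forecast) ** 2))

        def mape(actual, forecast):
            """Mean absolute percentage error, in percent."""
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    Note that MAPE is undefined when an actual value is zero, which does not arise for monthly arrival counts.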

  1. Analysis of respiratory mechanomyographic signals by means of the empirical mode decomposition

    International Nuclear Information System (INIS)

    Torres, A; Jane, R; Fiz, J A; Laciar, E; Galdiz, J B; Gea, J; Morera, J

    2007-01-01

    The study of the mechanomyographic (MMG) signals of respiratory muscles is a promising technique for evaluating respiratory muscle effort. A critical point in MMG studies is the selection of the cut-off frequency used to separate the low-frequency (LF) component (basically due to gross movement of the muscle or of the body) and the high-frequency (HF) component (related to the vibration of the muscle fibres during contraction). In this study, we propose to use the Empirical Mode Decomposition method to analyze the Intrinsic Mode Functions of MMG signals of the diaphragm muscle, acquired by means of a capacitive accelerometer applied on the costal wall. The method was tested on an animal model, with two incremental respiratory protocols performed by two non-anesthetized mongrel dogs. The proposed EMD-based method seems to be a useful tool for eliminating the low-frequency component of MMG signals. The correlation coefficients obtained between respiratory and MMG parameters were higher than those obtained with a wavelet multiresolution decomposition method used in a previous work.

  2. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    Science.gov (United States)

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

    In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation to retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.

  3. Empirical mode decomposition and k-nearest embedding vectors for timely analyses of antibiotic resistance trends.

    Science.gov (United States)

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Our objective is to improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method that combines trend extraction and machine learning models for enhanced biosurveillance systems. We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. The algorithms that decompose the resistance time series and filter out high-frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use cases of antibiotic resistance trend extraction, in which empirical mode decomposition was applied to highlight the specificities of the resistance trends. The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends.
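
    The delay-coordinate embedding plus k-nearest-neighbor forecasting step described above can be sketched as follows for a single extracted trend component. The embedding dimension, lag and neighbor count are illustrative defaults rather than the study's settings, and the function name is hypothetical.

        import numpy as np

        def knn_embed_forecast(series, dim=3, lag=1, k=5):
            """One-step forecast via delay-coordinate embedding and k nearest neighbours."""
            x = np.asarray(series, float)
            span = (dim - 1) * lag
            # embed every past point and record the value that followed it
            vectors = np.array([x[t - span:t + 1:lag] for t in range(span, len(x) - 1)])
            targets = x[span + 1:]
            query = x[len(x) - 1 - span::lag]            # the most recent embedded state
            dist = np.linalg.norm(vectors - query, axis=1)
            nearest = np.argsort(dist)[:k]
            return np.mean(targets[nearest])             # average next value of neighbours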

  4. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints on improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity across different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner–Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.

  5. Application of empirical mode decomposition method for characterization of random vibration signals

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    2016-07-01

    Characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for the characterization of measured random vibration signals, which rests on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component, the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component, and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.

  6. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    Science.gov (United States)

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD through decomposing principal components instead of original grid-wise time series to speed up computation of MEEMD. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders; and (ii) speed-up the MEEMD algorithm by one to two orders. © 2016 The Authors.

  7. Noise-assisted data processing with empirical mode decomposition in biomedical signals.

    Science.gov (United States)

    Karagiannis, Alexandros; Constantinou, Philip

    2011-01-01

    In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) on biomedical signals, especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented for the identification of IMFs with high-level noise components and their exclusion from denoising procedures. Simulation campaign results reveal that a decrease in processing time is achieved with the introduction of a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings on real ECG signals.

  8. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    Science.gov (United States)

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. First, data on outpatient visits from January 2005 to December 2013 are retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  9. Correlation of Respiratory Signals and Electrocardiogram Signals via Empirical Mode Decomposition

    KAUST Repository

    El Fiky, Ahmed Osama

    2011-05-24

    Recently, electrocardiogram (ECG) signals have been broadly used as an essential diagnostic tool in different clinical applications, as they carry a reliable representation not only of cardiac activities but also of other associated biological processes, like respiration. However, the process of recording and collecting them usually suffers from the presence of undesired noise, which in turn affects the reliability of such representations. Therefore, de-noising ECG signals has become an active research field for signal processing experts, to ensure a better and clearer representation of the different cardiac activities. Given the nonlinear and non-stationary properties of ECGs, it is not a simple task to cancel the undesired noise terms without affecting their biological physics. In this study, we are interested in correlating the ECG signals with respiratory parameters, specifically the lung volume and lung pressure. We have focused on de-noising ECG signals by means of signal decomposition using an algorithm called the Empirical Mode Decomposition (EMD), where the original ECG signals are decomposed into a set of intrinsic mode functions (IMFs). Then, we have provided criteria based on which some of these IMFs are used to reconstruct a de-noised ECG version. Finally, we have utilized the de-noised ECGs as well as the IMFs to study the correlation with lung volume and lung pressure. These correlation studies have shown a clear resemblance, especially between the oscillations of the ECGs and the lung pressures.

  10. A Novel Empirical Mode Decomposition With Support Vector Regression for Wind Speed Forecasting.

    Science.gov (United States)

    Ren, Ye; Suganthan, Ponnuthurai Nagaratnam; Srikanth, Narasimalu

    2016-08-01

    Wind energy is a clean and abundant renewable energy source. Accurate wind speed forecasting is essential for power dispatch planning, unit commitment decisions, maintenance scheduling, and regulation. However, wind is intermittent and wind speed is difficult to predict. This brief proposes a novel wind speed forecasting method that integrates empirical mode decomposition (EMD) and support vector regression (SVR). The EMD is used to decompose the wind speed time series into several intrinsic mode functions (IMFs) and a residue. Subsequently, a vector combining one historical data point from each IMF and the residue is generated to train the SVR. The proposed EMD-SVR model is evaluated with a wind speed data set. The proposed EMD-SVR model outperforms several recently reported methods with respect to accuracy or computational complexity.
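
    One plausible realisation of the feature construction described above (one lagged sample per IMF plus the residue, feeding a support vector regressor) is sketched below; the emd() routine is assumed, and the SVR hyper-parameters are scikit-learn defaults rather than those of the brief.

        import numpy as np
        from sklearn.svm import SVR

        def emd_svr_forecast(x, emd):
            """EMD-SVR sketch: one lagged sample per IMF (plus residue) as features."""
            x = np.asarray(x, float)
            modes = np.vstack(emd(x))          # rows: IMFs and residue; columns: time
            features = modes[:, :-1].T         # mode values at time t ...
            targets = x[1:]                    # ... predict the series at time t + 1
            model = SVR(kernel="rbf").fit(features, targets)
            latest = modes[:, -1].reshape(1, -1)
            return model.predict(latest)[0]    # one-step-ahead forecast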

  11. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    Directory of Open Access Journals (Sweden)

    Xiwen Qin

    2017-01-01

    Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the same process for comparison with EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.

  12. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    Science.gov (United States)

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was applied to the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of the EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
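
    The energy-threshold and minimum-duration decision described above might look roughly like the following, applied to one IMF of the EEG. The window length, relative threshold and minimum duration are illustrative values, not those validated on the Freiburg records, and the function name is hypothetical.

        import numpy as np

        def detect_seizures(imf, fs, win_sec=1.0, rel_thresh=5.0, min_dur_sec=6.0):
            """Flag periods whose IMF energy stays above a threshold long enough."""
            win = int(win_sec * fs)
            n_win = len(imf) // win
            energy = np.array([np.sum(imf[i * win:(i + 1) * win] ** 2)
                               for i in range(n_win)])
            above = energy > rel_thresh * np.median(energy)
            events, start = [], None
            for i, flag in enumerate(np.append(above, False)):  # trailing False closes runs
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if (i - start) * win_sec >= min_dur_sec:
                        events.append((start * win_sec, i * win_sec))
                    start = None
            return events    # list of (onset_s, offset_s) in seconds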

  13. Health monitoring of pipeline girth weld using empirical mode decomposition

    Science.gov (United States)

    Rezaei, Davood; Taheri, Farid

    2010-05-01

    In the present paper the Hilbert-Huang transform (HHT), as a time-series analysis technique, has been combined with a local diagnostic approach in an effort to identify flaws in pipeline girth welds. This method is based on monitoring the free vibration signals of the pipe in its healthy and flawed states, and processing the signals through the HHT and its associated signal decomposition technique, known as empirical mode decomposition (EMD). The EMD method decomposes the vibration signals into a collection of intrinsic mode functions (IMFs). The deviations in structural integrity, measured from a healthy-state baseline, are subsequently evaluated by two damage-sensitive parameters. The first is a damage index, referred to as the EM-EDI, which is established based on an energy comparison of the first or second IMF of the vibration signals before and after the occurrence of damage. The second parameter is the lag in instantaneous phase, a quantity derived from the HHT. In the developed methodologies, the pipe's free vibration is monitored by piezoceramic sensors and a laser Doppler vibrometer. The effectiveness of the proposed techniques is demonstrated through a set of numerical and experimental studies on a steel pipe with a mid-span girth weld, for both pressurized and nonpressurized conditions. To simulate a crack, a narrow notch is cut on one side of the girth weld. Several damage scenarios, including notches of different depths and at various locations on the pipe, are investigated. Results from both numerical and experimental studies reveal that in all damage cases the sensor located in the notch vicinity could successfully detect the notch and qualitatively predict its severity. The effect of internal pressure on the damage identification method is also examined. Overall, the results are encouraging and demonstrate the potential of the proposed approaches as inexpensive systems for structural health monitoring purposes.
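
    One plausible form of an energy-based damage index measured from a healthy-state baseline, in the spirit of the EM-EDI described above, is sketched below; the exact definition used in the paper may differ, and the function name is hypothetical.

        import numpy as np

        def em_edi(imf_healthy, imf_damaged):
            """Relative change in IMF energy with respect to the healthy baseline."""
            e_healthy = np.sum(np.asarray(imf_healthy, float) ** 2)
            e_damaged = np.sum(np.asarray(imf_damaged, float) ** 2)
            return abs(e_damaged - e_healthy) / e_healthy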

  14. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    Science.gov (United States)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly oncogenic. PAH pollutants can be detected using the fluorescence spectroscopy method. However, the instrument produces noise in the experiment, and weak fluorescence signals can be affected by it, so we propose a way to denoise the spectra and improve the detection effect. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, the noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.

  15. Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition.

    Science.gov (United States)

    Wang, Fu-Tai; Chan, Hsiao-Lung; Wang, Chun-Li; Jian, Hung-Ming; Lin, Sheng-Hsiung

    2015-07-07

    Impedance plethysmography provides a way to measure respiratory activity by sensing the change of thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. The empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclose nonstationary components as well as stationary components and, similarly, capture respiratory episodes from thoracic impedance. However, upper-body movements usually produce motion artifacts that are not easily removed by digital filtering. Moreover, large motion artifacts prevent the EMD from decomposing respiratory components. In this paper, motion artifacts are detected and replaced by data mirrored from the prior and posterior segments before EMD processing. A novel intrinsic respiratory reconstruction index that considers both global and local properties of IMFs is proposed to define respiration-related IMFs for respiration reconstruction and instantaneous respiratory estimation. Based on experiments involving a series of static and dynamic physical activities, our results showed that the proposed method had higher cross-correlations between respiratory frequencies estimated from thoracic impedance and those from oronasal airflow, based on a small window size, compared to the Fourier transform-based method.

  16. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although the canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and temporally local multivariate synchronization index (TMSI). The results suggest that the MEMD-CCA achieved significantly higher accuracy in contrast to standard CCA and TMSI. It gave the improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
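
    The CCA scoring step of the approach above can be sketched as follows, applied to multi-channel EEG (for example MEMD-reconstructed sub-band signals) against sine-cosine references; the candidate stimulus frequency with the highest score is taken as the detected target. The harmonic count and function name are illustrative assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def ssvep_score(eeg, freq, fs, n_harmonics=2):
            """Canonical correlation between multi-channel EEG and a reference set.

            eeg: (n_samples, n_channels) array; freq: candidate stimulus frequency in Hz.
            """
            t = np.arange(eeg.shape[0]) / fs
            ref = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                                   for h in range(n_harmonics)
                                   for f in (np.sin, np.cos)])
            u, v = CCA(n_components=1).fit_transform(eeg, ref)
            return np.corrcoef(u[:, 0], v[:, 0])[0, 1]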

  17. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    Science.gov (United States)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with the help of additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the proposed method consists of the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these empirical modes, and reconstruction of the initial EEG signal. We show the efficiency of the method on the example of filtering eye-movement artifacts from a human EEG signal.
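
    The algorithm outlined above (decompose, identify artifact modes with the help of an auxiliary channel, remove them, reconstruct) might be sketched as follows, using plain correlation with the auxiliary signal as the detection criterion; the emd() routine, the threshold and the criterion itself are assumptions for illustration.

        import numpy as np

        def remove_artifact_modes(eeg, aux, emd, corr_thresh=0.5):
            """Reconstruct EEG without modes that track an auxiliary artifact channel."""
            modes = emd(eeg)
            kept = [m for m in modes
                    if abs(np.corrcoef(m, aux)[0, 1]) < corr_thresh]  # drop artifact modes
            return np.sum(kept, axis=0) if kept else np.zeros_like(eeg)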

  18. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    Science.gov (United States)

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.

  19. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    Science.gov (United States)

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.

  20. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  1. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
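
    The NIMF construction and similarity-based mode selection described above can be sketched as follows; plain correlation stands in here for the modified Hausdorff distance, and both the threshold and the emd() routine are illustrative assumptions.

        import numpy as np

        def nimf_filter(x, emd, similarity_thresh=0.95):
            """Remove IMFs whose NIMF stays close to the first NIMF (noise-dominated)."""
            x = np.asarray(x, float)
            imfs = emd(x)
            nimfs = [x - imf for imf in imfs]        # i-th NIMF = signal minus i-th IMF
            noise_idx = [i for i, nimf in enumerate(nimfs)
                         if np.corrcoef(nimfs[0], nimf)[0, 1] > similarity_thresh]
            return x - np.sum([imfs[i] for i in noise_idx], axis=0)

    Because the first NIMF is always perfectly similar to itself, the first IMF (typically noise-dominated) is always removed, and further modes are removed only if their NIMFs remain close to it.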

  2. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    Science.gov (United States)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

    In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We consider noise and physiological artifacts in the EEG as specific oscillatory patterns that cause problems during EEG analysis and can be detected with the help of additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the method consists of the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these modes, and reconstruction of the initial EEG signal. We test the method by filtering eye-movement artifacts from experimental human EEG signals and show its high efficiency.

  3. Instantaneous Respiratory Estimation from Thoracic Impedance by Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Fu-Tai Wang

    2015-07-01

    Impedance plethysmography provides a way to measure respiratory activity by sensing the change of thoracic impedance caused by inspiration and expiration. This measurement imposes little pressure on the body and uses the human body as the sensor, thereby reducing the need for adjustments as body position changes and making it suitable for long-term or ambulatory monitoring. The empirical mode decomposition (EMD) can decompose a signal into several intrinsic mode functions (IMFs) that disclose nonstationary components as well as stationary components and, similarly, capture respiratory episodes from thoracic impedance. However, upper-body movements usually produce motion artifacts that are not easily removed by digital filtering. Moreover, large motion artifacts prevent the EMD from decomposing respiratory components. In this paper, motion artifacts are detected and replaced by data mirrored from the segments before and after the artifact prior to EMD processing. A novel intrinsic respiratory reconstruction index that considers both global and local properties of IMFs is proposed to define respiration-related IMFs for respiration reconstruction and instantaneous respiratory estimation. Based on experiments comprising a series of static and dynamic physical activities, our results showed that the proposed method yields higher cross correlations between respiratory frequencies estimated from thoracic impedance and those from oronasal airflow, for small window sizes, than the Fourier transform-based method.
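
    A small sketch of the mirror-replacement step described above: the corrupted segment is rebuilt from time-reversed copies of the samples immediately before and after it. The artifact indices are assumed to be known and boundary handling is omitted.

```python
# Sketch: replace a motion-artifact segment by data mirrored from both sides.
import numpy as np


def mirror_replace(x, start, stop):
    """Replace x[start:stop] with data mirrored from the prior and posterior samples."""
    y = x.copy()
    n = stop - start
    half = n // 2
    y[start:start + half] = x[start - half:start][::-1]      # mirror prior data
    y[start + half:stop] = x[stop:stop + (n - half)][::-1]   # mirror posterior data
    return y
```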

  4. Empirical mode decomposition of the ECG signal for noise removal

    Science.gov (United States)

    Khan, Jesmin; Bhuiyan, Sharif; Murphy, Gregory; Alam, Mohammad

    2011-04-01

    Electrocardiography is a diagnostic procedure for the detection and diagnosis of heart abnormalities. The electrocardiogram (ECG) signal contains important information that is utilized by physicians for the diagnosis and analysis of heart diseases. A good-quality ECG signal therefore plays a vital role in the interpretation and identification of pathological, anatomical and physiological aspects of the whole cardiac muscle. However, ECG signals are corrupted by noise which severely limits the utility of the recorded ECG signal for medical evaluation. The most common noise present in the ECG signal is high-frequency noise caused by the forces acting on the electrodes. In this paper, we propose a new ECG denoising method based on the empirical mode decomposition (EMD). The proposed method is able to enhance the ECG signal by removing the noise with minimum signal distortion. Simulation is done on the MIT-BIH database to verify the efficacy of the proposed algorithm. Experiments show that the presented method offers very good results in removing noise from the ECG signal.
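
    Illustrative sketch only, not the authors' exact rule: since the high-frequency electrode noise concentrates in the first IMFs, a partial reconstruction that omits them already gives a basic EMD denoiser (PyEMD assumed).

```python
# Sketch: basic EMD denoising by dropping the finest-scale IMF(s).
from PyEMD import EMD  # assumed dependency: pip install EMD-signal


def emd_denoise(ecg, n_drop=1):
    imfs = EMD().emd(ecg)             # finest-scale (noisiest) IMF comes first
    return imfs[n_drop:].sum(axis=0)  # partial reconstruction without the noisy modes
```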

  5. Empirical mode decomposition and Hilbert transforms for analysis of oil-film interferograms

    International Nuclear Information System (INIS)

    Chauhan, Kapil; Ng, Henry C H; Marusic, Ivan

    2010-01-01

    Oil-film interferometry is rapidly becoming the preferred method for direct measurement of wall shear stress in studies of wall-bounded turbulent flows. Although widely accepted as the most accurate technique, it does have inherent measurement uncertainties, one of which is associated with determining the fringe spacing; this is the focus of this paper. Conventional analysis methods involve a certain level of user input and thus some subjectivity. In this paper, we consider empirical mode decomposition (EMD) and the Hilbert transform as an alternative tool for analyzing oil-film interferograms. In contrast to the commonly used Fourier-based techniques, this new method is less subjective and, as it is based on the Hilbert transform, is superior for treating amplitude- and frequency-modulated data. This makes it particularly robust to wide differences in the quality of interferograms.
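
    A sketch of the EMD-plus-Hilbert step: once EMD has isolated the fringe-carrying IMF, its analytic signal yields an instantaneous spatial frequency, whose reciprocal is the local fringe spacing (scipy assumed; the IMF is taken as given).

```python
# Sketch: local fringe spacing from the instantaneous frequency of one IMF.
import numpy as np
from scipy.signal import hilbert


def fringe_spacing(imf, dx):
    """Estimate the local fringe spacing of an IMF sampled every dx."""
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.gradient(phase, dx) / (2 * np.pi)  # cycles per unit length
    return 1.0 / np.abs(inst_freq)                    # spacing = 1 / frequency
```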

  6. Empirical Mode Decomposition of the atmospheric wave field

    Directory of Open Access Journals (Sweden)

    A. J. McDonald

    2007-03-01

    This study examines the utility of the Empirical Mode Decomposition (EMD) time-series analysis technique to separate the horizontal wind field observed by the Scott Base MF radar (78° S, 167° E) into its constituent parts made up of the mean wind, gravity waves, tides, planetary waves and instrumental noise. Analysis suggests that EMD effectively separates the wind field into a set of Intrinsic Mode Functions (IMFs) which can be related to atmospheric waves with different temporal scales. The Intrinsic Mode Functions resulting from application of the EMD technique to Monte-Carlo simulations of white- and red-noise processes are compared to those obtained from the measurements and are shown to be significantly different statistically. Thus, application of the EMD technique to the MF radar horizontal wind data can be used to show that these data contain information on internal gravity waves, tides and planetary wave motions.

    Examination also suggests that the EMD technique has the ability to highlight amplitude and frequency modulations in these signals. Closer examination of one of these regions of amplitude modulation, associated with dominant periods close to 12 h, suggests a wave-wave interaction between the semi-diurnal tide and a planetary wave. Application of the Hilbert transform to the IMFs forms a Hilbert-Huang spectrum which provides a way of viewing the data in a manner similar to the analysis from a continuous wavelet transform. However, the fact that the basis function of EMD is data-driven and does not need to be selected a priori is a major advantage. In addition, the skeleton diagrams, produced from the results of the Hilbert-Huang spectrum, provide a method of presentation which allows quantitative information on the instantaneous period and amplitude squared to be displayed as a function of time. Thus, it provides a novel way to view frequency- and amplitude-modulated wave phenomena and potentially non

  7. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed here for joint multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used to predict each individual subband of the time series data, and all the predicted subband signals are then summed to obtain the overall prediction. The ARMA model works better for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of the signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
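
    A simplified sketch of the subband-prediction idea: univariate EMD from PyEMD stands in for MEMD, statsmodels supplies the ARMA/ARIMA model, and the model order is illustrative.

```python
# Sketch: forecast each IMF with an ARMA model, then sum the subband forecasts.
import numpy as np
from PyEMD import EMD                              # assumed: pip install EMD-signal
from statsmodels.tsa.arima.model import ARIMA      # assumed: statsmodels >= 0.12


def emd_arma_forecast(series, steps=5, order=(2, 0, 1)):
    imfs = EMD().emd(np.asarray(series, dtype=float))
    forecast = np.zeros(steps)
    for imf in imfs:                               # one ARMA model per subband
        fit = ARIMA(imf, order=order).fit()
        forecast += np.asarray(fit.forecast(steps=steps))
    return forecast                                # sum of the subband forecasts
```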

  8. Investigation of Kelvin wave periods during Hai-Tang typhoon using Empirical Mode Decomposition

    Science.gov (United States)

    Kishore, P.; Jayalakshmi, J.; Lin, Pay-Liam; Velicogna, Isabella; Sutterley, Tyler C.; Ciracì, Enrico; Mohajerani, Yara; Kumar, S. Balaji

    2017-11-01

    Equatorial Kelvin waves (KWs) are fundamental components of the tropical climate system. In this study, we investigate KWs during the Hai-Tang typhoon of 2005 using Empirical Mode Decomposition (EMD) of regional precipitation and of the zonal and meridional winds. For the analysis, we use daily precipitation datasets from the Global Precipitation Climatology Project (GPCP) and wind datasets from the European Centre for Medium-Range Weather Forecasts (ECMWF) Interim Re-analysis (ERA-Interim). As an additional measurement, we use in-situ precipitation datasets from rain gauges over the Taiwan region. The maximum accumulated precipitation was approximately 2400 mm during the period July 17-21, 2005 over the southwestern region of Taiwan. Spectral analysis of the wind speed at 950 hPa reveals prevailing Kelvin wave periods of ∼3 days, ∼4-6 days, and ∼6-10 days in the 2nd, 3rd, and 4th intrinsic mode functions (IMFs), respectively. From our analysis of the precipitation datasets, we found that the Kelvin waves oscillated with periods between ∼8 and 20 days.

  9. Empirical Mode Decomposition on the sphere: application to the spatial scales of surface temperature variations

    Directory of Open Access Journals (Sweden)

    N. Fauchereau

    2008-06-01

    Empirical Mode Decomposition (EMD) is applied here in two dimensions over the sphere to demonstrate its potential as a data-adaptive method of separating the different scales of spatial variability in a geophysical (climatological/meteorological) field. After a brief description of the basics of EMD in one and then two dimensions, the principles of its application on the sphere are explained, in particular via the use of a zonal equal-area partitioning. EMD is first applied to an artificial dataset, demonstrating its capability in extracting the different (known) scales embedded in the field. The decomposition is then applied to a global mean surface temperature dataset, and we show qualitatively that it extracts successively larger scales of temperature variations related, for example, to topographic effects and to large-scale solar radiation forcing. We propose that EMD can be used as a global data-adaptive filter, which will be useful in analysing geophysical phenomena that arise as the result of forcings at multiple spatial scales.

  10. Hour-Ahead Wind Speed and Power Forecasting Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Ying-Yi Hong

    2013-11-01

    Operation of wind power generation in a large farm is quite challenging in a smart grid owing to uncertain weather conditions. Consequently, operators must accurately forecast wind speed/power in the dispatch center to carry out unit commitment, real power scheduling and economic dispatch. This work presents a novel method based on the integration of empirical mode decomposition (EMD) with artificial neural networks (ANN) to forecast the short-term (1 h ahead) wind speed/power. First, significant parameters for training the ANN are identified using correlation coefficients; these significant parameters serve as inputs of the ANN. Owing to the volatile and intermittent wind speed/power, the historical time series of wind speed/power is decomposed into several intrinsic mode functions (IMFs) and a residual function through EMD. Each IMF becomes less volatile and therefore increases the accuracy of the neural network. The final forecasting results are achieved by aggregating all individual forecasting results from all IMFs and their corresponding residual functions. Real data related to the wind speed and wind power measured at a wind-turbine generator in Taiwan are used for simulation. Wind speed forecasting and wind power forecasting are studied for the four seasons. Comparative studies between the proposed method and traditional methods (i.e., an artificial neural network without EMD, autoregressive integrated moving average (ARIMA), and the persistence method) are also presented.
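
    A rough sketch of the EMD-plus-ANN scheme under stated assumptions: PyEMD for the decomposition, scikit-learn's MLPRegressor as the neural network, and lagged IMF values as the only inputs (the paper additionally selects inputs by correlation coefficients).

```python
# Sketch: one small neural network per IMF; the per-IMF forecasts are aggregated.
import numpy as np
from PyEMD import EMD                              # assumed: pip install EMD-signal
from sklearn.neural_network import MLPRegressor


def lagged(x, n_lags):
    """Build a lagged design matrix and one-step-ahead targets from series x."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]


def emd_ann_forecast(wind, n_lags=6):
    imfs = EMD().emd(np.asarray(wind, dtype=float))
    one_step = 0.0
    for imf in imfs:                               # train one ANN per IMF
        X, y = lagged(imf, n_lags)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
        one_step += model.predict(imf[-n_lags:].reshape(1, -1))[0]
    return one_step                                # aggregated 1-step-ahead forecast
```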

  11. A new approach for crude oil price analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shou-Yang; Lai, K.K.

    2008-01-01

    The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most of them fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent intrinsic modes with concrete interpretations, based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on a fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or other market activities, the effect of shocks from significant events, and a long-term trend. Finally, the EEMD is shown to be a vital technique for crude oil price analysis. (author)
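
    A sketch of the fine-to-coarse grouping described above, assuming PyEMD's EEMD and scipy; the t-test on partial-sum means follows the usual fine-to-coarse recipe, and which component is treated as the trend depends on how the chosen EMD implementation reports its residue.

```python
# Sketch: split EEMD modes into fluctuations, a slowly varying part and a trend.
import numpy as np
from scipy import stats
from PyEMD import EEMD  # assumed dependency: pip install EMD-signal


def fine_to_coarse(price, alpha=0.05):
    imfs = EEMD().eemd(np.asarray(price, dtype=float))  # high frequency first
    partial = np.cumsum(imfs, axis=0)
    # the first partial sum whose mean departs significantly from zero marks
    # the boundary between the fluctuating and the slowly varying parts
    k = len(imfs) - 1
    for i in range(1, len(imfs)):
        if stats.ttest_1samp(partial[i], 0.0).pvalue < alpha:
            k = i
            break
    fluctuation = imfs[:k].sum(axis=0)
    slow_varying = imfs[k:-1].sum(axis=0)
    trend = imfs[-1]                                    # last component as trend
    return fluctuation, slow_varying, trend
```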

  12. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    International Nuclear Information System (INIS)

    Han, G.; Lin, B.; Xu, Z.

    2017-01-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal becomes a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes the recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising, but not perfect, method for processing nonlinear and non-stationary signals such as the ECG signal. EMD combined with other algorithms is a good solution for improving the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, the future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  13. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    Science.gov (United States)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal becomes a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes the recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising, but not perfect, method for processing nonlinear and non-stationary signals such as the ECG signal. EMD combined with other algorithms is a good solution for improving the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, the future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  14. The use of the empirical mode decomposition for the identification of mean field aligned reference frames

    Directory of Open Access Journals (Sweden)

    Mauro Regi

    2017-01-01

    Satellite magnetic field data are usually referred to a geocentric coordinate reference frame. Conversely, the magnetohydrodynamic wave modes in a magnetized plasma depend on the ambient magnetic field, and it is then useful to rotate the magnetic field measurements into the mean field aligned (MFA) coordinate system. This reference frame is useful for studying ultra low frequency magnetic field variations along the direction of the mean field and perpendicular to it. In order to identify the mean magnetic field, the classical moving average (MAVG) approach is usually adopted but, under particular conditions, this procedure induces undesired features, such as spectral alteration in the rotated components. We discuss these aspects and promote an alternative, more efficient method for mean field aligned projection, based on the empirical mode decomposition (EMD).

  15. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    International Nuclear Information System (INIS)

    Wang Wen-Bo; Zhang Xiao-Dong; Chang Yuchan; Wang Xiang-Li; Wang Zhao; Chen Xi; Zheng Lei

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is, first, to decompose the chaotic signal and construct multidimensional input vectors on the basis of EMD and its translation invariance. Second, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signal. Finally, all IMFs are combined to form the denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results show that the method proposed in this paper is effective for denoising chaotic signals. Moreover, it can effectively correct the center point in the phase space, which makes the reconstructed trajectory approach the real track of the chaotic attractor. (paper)
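
    A very condensed sketch of the EMD-plus-ICA idea, assuming PyEMD and scikit-learn; the circulate-translating construction and the component-selection rule of the paper are replaced here by a simple lag-one autocorrelation test.

```python
# Sketch: ICA over the IMF matrix, zeroing noise-like independent components.
import numpy as np
from PyEMD import EMD                        # assumed: pip install EMD-signal
from sklearn.decomposition import FastICA


def emd_ica_denoise(x, ac_threshold=0.9):
    imfs = EMD().emd(x)                      # rows: IMFs of the noisy signal
    ica = FastICA(n_components=imfs.shape[0], random_state=0)
    sources = ica.fit_transform(imfs.T)      # columns: independent components
    for j in range(sources.shape[1]):
        s = sources[:, j]
        # rough noise test: noise-like components have low lag-one autocorrelation
        if np.corrcoef(s[:-1], s[1:])[0, 1] < ac_threshold:
            sources[:, j] = 0.0
    cleaned_imfs = ica.inverse_transform(sources).T
    return cleaned_imfs.sum(axis=0)
```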

  16. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.

  17. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    Science.gov (United States)

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD) dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus, no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability

  18. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  19. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    Science.gov (United States)

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals in ways that might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subjects has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using the ECG signals from the Mahnob-HCI database for affect recognition, the features were the statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier with 10-fold cross validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variations in the ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  20. A Frequency-Weighted Energy Operator and complementary ensemble empirical mode decomposition for bearing fault detection

    Science.gov (United States)

    Imaouchen, Yacine; Kedadouche, Mourad; Alkama, Rezak; Thomas, Marc

    2017-01-01

    Signal processing techniques for non-stationary and noisy signals have recently attracted considerable attention. Among them is the empirical mode decomposition (EMD), an adaptive and efficient method for decomposing signals from high to low frequencies into intrinsic mode functions (IMFs). Ensemble EMD (EEMD) was proposed to overcome the mode mixing problem of the EMD. In the present paper, the Complementary EEMD (CEEMD) is used for bearing fault detection. As a noise-improved method, the CEEMD not only overcomes the mode mixing, but also eliminates the residual added white noise persisting in the IMFs and enhances the computational efficiency of the EEMD method. Afterward, a selection method is developed to choose the relevant IMFs containing information about defects. Subsequently, a signal is reconstructed from the sum of the relevant IMFs and a Frequency-Weighted Energy Operator is tailored to extract both the amplitude and frequency modulations from the selected IMFs. This operator outperforms the conventional energy operator and enveloping methods, especially in the presence of strong noise and multiple vibration interferences. Furthermore, simulation and experimental results showed that the proposed method improves performance in detecting bearing faults. The method also has high computational efficiency and is able to detect a fault at an early stage of degradation.
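
    For reference, the classical Teager-Kaiser energy operator is the building block behind the frequency-weighted operator used above; the paper's exact frequency weighting is not reproduced in this sketch.

```python
# Sketch: discrete Teager-Kaiser energy operator on the interior samples.
import numpy as np


def teager_kaiser(x):
    """Psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```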

  1. Temporal associations between weather and headache: analysis by empirical mode decomposition.

    Directory of Open Access Journals (Sweden)

    Albert C Yang

    BACKGROUND: Patients frequently report that weather changes trigger headache or worsen existing headache symptoms. Recently, the method of empirical mode decomposition (EMD) has been used to delineate temporal relationships in certain diseases, and we applied this technique to identify intrinsic weather components associated with headache incidence data derived from a large-scale epidemiological survey of headache in the Greater Taipei area. METHODOLOGY/PRINCIPAL FINDINGS: The study sample consisted of 52 randomly selected headache patients. The weather time-series parameters were detrended by the EMD method into a set of embedded oscillatory components, i.e. intrinsic mode functions (IMFs). Multiple linear regression models with forward stepwise methods were used to analyze the temporal associations between weather and headaches. We found no associations between the raw time series of weather variables and headache incidence. For the decomposed intrinsic weather IMFs, temperature, sunshine duration, humidity, pressure, and maximal wind speed were associated with headache incidence during the cold period, whereas only maximal wind speed was associated during the warm period. In analyses examining all significant weather variables, IMFs derived from temperature and sunshine duration data accounted for up to 33.3% of the variance in headache incidence during the cold period. The association of headache incidence and weather IMFs in the cold period coincided with the arrival of cold fronts. CONCLUSIONS/SIGNIFICANCE: Using EMD analysis, we found a significant association between headache and intrinsic weather components, which was not detected by direct comparisons of raw weather data. Contributing weather parameters may vary in different geographic regions and different seasons.

  2. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that, when combined, the cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying the modal data of the structures.
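
    A sketch of the damping-identification step only: once VMD has isolated a single modal response in free decay, the logarithm of its Hilbert envelope is fitted with a straight line whose slope gives the damping ratio (numpy/scipy assumed; the natural frequency is taken as known).

```python
# Sketch: modal damping ratio from the decaying envelope of one modal response.
import numpy as np
from scipy.signal import hilbert


def damping_ratio(modal_response, fs, natural_freq_hz):
    envelope = np.abs(hilbert(modal_response))
    t = np.arange(len(modal_response)) / fs
    slope, _ = np.polyfit(t, np.log(envelope), 1)   # log-envelope decays linearly
    # envelope ~ exp(-zeta * omega_n * t)  =>  zeta = -slope / omega_n
    return -slope / (2 * np.pi * natural_freq_hz)
```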

  3. Noise Reduction, Atmospheric Pressure Admittance Estimation and Long-Period Component Extraction in Time-Varying Gravity Signals Using Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Linsong Wang

    2015-01-01

    Time-varying gravity signals, with their nonlinear, non-stationary and multi-scale characteristics, record the physical responses of various geodynamic processes and consist of a blend of signals with various periods and amplitudes, corresponding to numerous phenomena. Superconducting gravimeter (SG) records are processed in this study using a multi-scale analytical method and corrected for known effects to reduce noise, in order to study geodynamic phenomena through their gravimetric signatures. Continuous SG (GWR-C032) gravity and barometric data are decomposed into a series of intrinsic mode functions (IMFs) using the ensemble empirical mode decomposition (EEMD) method, which is proposed to alleviate some unresolved issues of the empirical mode decomposition (EMD), namely the mode mixing problem and the end effect. Further analysis of the variously scaled signals is based on a dyadic filter bank of the IMFs. The results indicate that removing the high-frequency IMFs can reduce the natural and man-made noise in the data, which is caused by electronic device noise, Earth background noise and the residual effects of pre-processing. The atmospheric admittances based on frequency changes are estimated from the gravity and atmospheric pressure IMFs in various frequency bands. These time- and frequency-dependent admittance values can be used effectively to improve the atmospheric correction. Using the EEMD method as a filter, the long-period IMFs are extracted from SG time-varying gravity signals spanning 7 years. The resulting gravity residuals are well correlated with the gravity effect caused by polar motion after correcting for atmospheric effects.

  4. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting this sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, the empirical mode decomposition (EMD)-based method, and an enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
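
    A sketch of the relative-entropy comparison used to separate jitter from intentional motion modes: each decomposed mode of the global motion vector is histogrammed and the Kullback-Leibler divergence between the distributions is computed (scipy assumed; the binning is illustrative).

```python
# Sketch: relative entropy (KL divergence) between two decomposed motion modes.
import numpy as np
from scipy.stats import entropy


def relative_entropy(mode_a, mode_b, bins=32):
    lo = min(mode_a.min(), mode_b.min())
    hi = max(mode_a.max(), mode_b.max())
    p, _ = np.histogram(mode_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(mode_b, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                        # avoid empty bins in the ratio
    return entropy(p + eps, q + eps)   # KL(p || q)
```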

  5. Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates

    Science.gov (United States)

    Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida

    2015-02-01

    This paper studies the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: the empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA). The MYR was pegged to the USD during the Asian financial crisis, so the exchange rate was fixed at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen in this paper are the post-July 2005 data, from August 2005 to July 2010. A comparative study using the root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random walk benchmark model.

  6. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    Science.gov (United States)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guidelines indicate, ambient air pollution exposes world populations to the threat of fatal diseases (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The nature of NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.

  7. Identification of sudden stiffness changes in the acceleration response of a bridge to moving loads using ensemble empirical mode decomposition

    Science.gov (United States)

    Aied, H.; González, A.; Cantero, D.

    2016-01-01

    The growth of heavy traffic together with aggressive environmental loads poses a threat to the safety of an aging bridge stock. Often, damage is only detected via visual inspection at a point when repair costs can be quite significant. Ideally, bridge managers would want to identify a stiffness change as soon as possible, i.e., as it is occurring, to plan for prompt measures before reaching a prohibitive cost. Recent developments in signal processing techniques such as wavelet analysis and empirical mode decomposition (EMD) have aimed to address this need by identifying a stiffness change from a localised feature in the structural response to traffic. However, the effectiveness of these techniques is limited by the roughness of the road profile, the vehicle speed and the noise level. In this paper, ensemble empirical mode decomposition (EEMD) is applied for the first time to the acceleration response of a bridge model to a moving load with the purpose of capturing sudden stiffness changes. EEMD is more adaptive and appears to be better suited to non-linear signals than wavelets, and it reduces the mode mixing problem present in EMD. EEMD is tested in a variety of theoretical 3D vehicle-bridge interaction scenarios. Stiffness changes are successfully identified, even for small affected regions, relatively poor profiles, high vehicle speeds and significant noise. The latter is due to the ability of EEMD to separate high frequency components associated with sudden stiffness changes from other frequency components associated with the vehicle-bridge interaction system.

  8. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    While recording lung sound (LS) signals from the chest wall of a subject, heart sound (HS) signals always interfere with them. This obscures the features of the lung sound signals and creates confusion about any pathological states of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components; some of these components contain larger proportions of interfering signals, such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.

  9. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    Science.gov (United States)

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of the ultrasonic signals and application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and the data length of a signal on the EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in 50 mm thick coarse grain austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. This methodology was further employed for successful imaging of defects in a B-scan.

  10. The application of empirical mode decomposition for the enhancement of cardiotocograph signals

    International Nuclear Information System (INIS)

    Krupa, B N; Mohd Ali, M A; Zahedi, E

    2009-01-01

    The cardiotocograph (CTG) is widely used in everyday clinical practice for fetal surveillance, recording the fetal heart rate (FHR) and uterine activity (UA). These two biosignals can be used for antepartum and intrapartum fetal monitoring and are, in fact, nonlinear and non-stationary. CTG recordings are often corrupted by artifacts such as missing beats in the FHR and high-frequency noise in the FHR and UA signals. In this paper, an empirical mode decomposition (EMD) method is applied to CTG signals. A recursive algorithm is first utilized to eliminate missing beats. High-frequency noise is reduced using EMD followed by the partial reconstruction (PAR) method, where the noise order is identified by a statistical method. The signal enhancement obtained by the proposed method is validated by comparing the resulting traces with the output obtained by applying classical signal processing methods, such as Butterworth low-pass filtering, linear interpolation and a moving average filter, to 12 CTG signals. Three obstetricians evaluated all 12 sets of traces and rated the proposed method, on average, 3.8 out of 5 on a scale of 1 (lowest) to 5 (highest).

  11. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)

    2017-01-01

    Wide-area measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm, whereas the higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.

  12. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    Science.gov (United States)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that, when applied to a broadband signal such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high-frequency portion of the results could provide the turbulent values for subgrid-scale models of the noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  13. Completed Ensemble Empirical Mode Decomposition: a Robust Signal Processing Tool to Identify Sequence Strata

    Science.gov (United States)

    Purba, H.; Musu, J. T.; Diria, S. A.; Permono, W.; Sadjati, O.; Sopandi, I.; Ruzi, F.

    2018-03-01

    Well logging data provide a wealth of geological information, and their trends resemble nonlinear, non-stationary signals. As well log data are recorded over long intervals, external factors can interfere with or degrade the signal resolution. A sensitive signal analysis is required to improve the accuracy of log interpretation, which is important for determining sequence stratigraphy. Complete Ensemble Empirical Mode Decomposition (CEEMD) is a nonlinear, non-stationary signal analysis method which decomposes a complex signal into a series of intrinsic mode functions (IMFs). Gamma ray and spontaneous potential well log parameters were decomposed into IMF-1 through IMF-10, and the combinations and correlations of these IMFs allow their physical meaning to be identified. This identifies the stratigraphy and cycle sequence and provides an effective signal treatment method for locating sequence interfaces. The method was applied to the BRK-30 and BRK-13 well logging data. The results show that the combined patterns of IMF-5, IMF-6, and IMF-7 represent short-term and middle-term sedimentation, while IMF-9 and IMF-10 represent long-term sedimentation, describing distal front and delta front facies, and inter-distributary mouth bar facies, respectively. Thus, CEEMD can clearly determine the interfaces between different sedimentary layers and provides better identification of stratigraphic base-level cycles.

  14. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shaped wave. PA signals of complicated biological tissue can be considered as the combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, comprising deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectral consistency. With our proposed method, the reconstructed PA images can yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.

  15. A Study of Nonstationary Wind Effects on a Full-Scale Large Cooling Tower Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    X. X. Cheng

    2017-01-01

    Wind effects on structures obtained from field measurements are often found to be nonstationary, but related research shared by the wind-engineering community is still limited. In this paper, empirical mode decomposition (EMD) is applied to nonstationary wind pressure time-history samples measured on an actual 167-meter-high large cooling tower. It is found that the residue and some intrinsic mode functions (IMFs) of low frequency produced by EMD are responsible for the samples’ nonstationarity. Replacing the residue by the constant mean and subtracting the low-frequency IMFs can turn the nonstationary samples into stationary ones. A further step is taken to compare the loading characteristics extracted from the original nonstationary samples with those extracted from the processed stationary samples. Results indicate that nonstationarity effects on wind loads are notable in most cases. The passive wind tunnel simulation technique based on the assumption of stationarity is also examined, and it is found that the technique is essentially conservative for use.
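
    A sketch of the stationarization step described above, assuming PyEMD; how many of the slowest IMFs count as "low frequency" is case-specific, and conventions on whether the residue is returned as the last component differ between EMD implementations.

```python
# Sketch: subtract low-frequency IMFs and replace the residue by the sample mean.
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal


def stationarize(pressure, n_low=2):
    imfs = EMD().emd(pressure)
    residue = pressure - imfs.sum(axis=0)     # whatever the decomposition left over
    low_freq = imfs[-n_low:].sum(axis=0)      # slowest IMFs, assumed nonstationary
    return pressure - residue - low_freq + pressure.mean()
```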

  16. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    Science.gov (United States)

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    Recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method is proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals are decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components are then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components are distinguished and rejected. Finally, the clean EEG signals are reconstructed by applying the inverse transforms of ICA and MEMD. The results on simulated and real data suggest that the proposed method can successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieves a considerable improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.

  17. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results confirm the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.

  18. A Novel Multiscale Ensemble Carbon Price Prediction Model Integrating Empirical Mode Decomposition, Genetic Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bangzhu Zhu

    2012-02-01

    Due to the movement and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and to accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast the carbon price. Firstly, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, the IMFs and residue are composed into a high frequency component, a low frequency component and a trend component, which have similar frequency characteristics, simple components and strong regularity, using the fine-to-coarse reconstruction algorithm. Finally, these three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecasting results are obtained as the sum of the three component forecasts. For verification and testing, two main carbon futures prices with different maturities on the European Climate Exchange (ECX) are used to test the effectiveness of the proposed multiscale ensemble forecasting model. The empirical results demonstrate that the proposed multiscale ensemble forecasting model can outperform the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing, as well as the ensemble ARIMA model with EMD preprocessing.

  19. Effect of tidal triggering on seismicity in Taiwan revealed by the empirical mode decomposition method

    Directory of Open Access Journals (Sweden)

    H.-J. Chen

    2012-07-01

    The effect of tidal triggering on earthquake occurrence has been controversial for many years. This study considered earthquakes that occurred near Taiwan between 1973 and 2008. Because earthquake data are nonlinear and non-stationary, we applied the empirical mode decomposition (EMD) method to analyze the temporal variations in the number of daily earthquakes in order to investigate the effect of tidal triggering. We compared the results obtained from the non-declustered catalog with those from two kinds of declustered catalogs and discussed the aftershock effect on the EMD-based analysis. We also investigated stacking the data based on in-phase phenomena of theoretical Earth tides, with statistical significance tests. Our results show that the effects of tidal triggering, particularly the lunar tidal effect, can be extracted from the raw seismicity data using the approach proposed here. Our results suggest that the lunar tidal force is likely a factor in the triggering of earthquakes.

  20. Ultra-High-Speed Travelling Wave Protection of Transmission Line Using Polarity Comparison Principle Based on Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    The traditional polarity-comparison-based travelling wave protection, which uses the initial wave information, is affected by the initial fault angle, bus structure, and external faults, and the relationship between the magnitude and polarity of the travelling wave is ignored. The resulting risk of failure to trip or maloperation limits the further application of this protection principle. Therefore, this paper presents an ultra-high-speed travelling wave protection using an integral-based polarity comparison principle. After empirical mode decomposition of the original travelling wave, the first-order intrinsic mode function is used as the protection object. Based on the relationship between the magnitude and polarity of the travelling wave, this paper demonstrates the feasibility of using the travelling wave magnitude, which contains polarity information, as the direction criterion. The direction criterion is integrated over a period after the fault to avoid wave-head detection failure. Through PSCAD simulation of a typical 500 kV transmission system, the reliability and sensitivity of the travelling wave protection were verified under the influence of different factors.

  1. Prediction of mean monthly river discharges in Colombia through Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    A. M. Carmona

    2015-04-01

    The hydro-climatology of Colombia exhibits strong natural variability at a broad range of time scales including: inter-decadal, decadal, inter-annual, annual, intra-annual, intra-seasonal, and diurnal. Diverse applied sectors rely on quantitative predictions of river discharges for operational purposes including hydropower generation, agriculture, human health, fluvial navigation, territorial planning and management, and risk preparedness and mitigation, among others. Various methodologies based on "Predictive Analytics", an area of statistical analysis that studies the extraction of information from historical data to infer future trends and patterns, have been used to predict monthly mean river discharges. Our study couples the Empirical Mode Decomposition (EMD) with traditional methods, e.g. the Autoregressive Model of Order 1 (AR1) and Neural Networks (NN), to predict mean monthly river discharges in Colombia, South America. The EMD allows us to decompose the historical time series of river discharges into a finite number of intrinsic mode functions (IMFs) that capture the different oscillatory modes of different frequencies associated with the inherent time scales coexisting simultaneously in the signal (Huang et al. 1998; Huang and Wu 2008; Rao and Hsu 2008). Our predictive method is based on the premise that it is easier and simpler to predict each IMF separately and then add the predictions together to obtain the predicted river discharge for a given month than to predict the full signal. This method is applied to 10 series of monthly mean river discharges in Colombia, using calibration periods of more than 25 years and validation periods of about 12 years. Predictions are performed for time horizons spanning from 1 to 12 months. Our results show that predictions obtained through the traditional methods improve when the EMD is used as a preprocessing step, since errors decrease by up to 13% when the AR1 model is used, and by up to 18% when using Neural Networks is

  2. IMF-Slices for GPR Data Processing Using Variational Mode Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xuebing Zhang

    2018-03-01

    Full Text Available Using traditional time-frequency analysis methods, it is possible to delineate the time-frequency structures of ground-penetrating radar (GPR data. A series of applications based on time-frequency analysis were proposed for the GPR data processing and imaging. With respect to signal processing, GPR data are typically non-stationary, which limits the applications of these methods moving forward. Empirical mode decomposition (EMD provides alternative solutions with a fresh perspective. With EMD, GPR data are decomposed into a set of sub-components, i.e., the intrinsic mode functions (IMFs. However, the mode-mixing effect may also bring some negatives. To utilize the IMFs’ benefits, and avoid the negatives of the EMD, we introduce a new decomposition scheme termed variational mode decomposition (VMD for GPR data processing for imaging. Based on the decomposition results of the VMD, we propose a new method which we refer as “the IMF-slice”. In the proposed method, the IMFs are generated by the VMD trace by trace, and then each IMF is sorted and recorded into different profiles (i.e., the IMF-slices according to its center frequency. Using IMF-slices, the GPR data can be divided into several IMF-slices, each of which delineates a main vibration mode, and some subsurface layers and geophysical events can be identified more clearly. The effectiveness of the proposed method is tested using synthetic benchmark signals, laboratory data and the field dataset.
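    A minimal sketch of the IMF-slice construction follows. It assumes a generic `vmd(trace, K)` callable (for example, a thin wrapper around a VMD implementation such as vmdpy) that returns the band-limited modes of one trace together with their center frequencies; the trace-by-trace routing of modes into slices ordered by center frequency is the part described in the abstract.

```python
import numpy as np

def imf_slices(bscan, vmd, K=3):
    """Build K IMF-slices from a GPR B-scan of shape (traces, samples).

    'vmd' is any callable vmd(trace, K) -> (modes[K, samples], center_freqs[K]).
    Each trace is decomposed independently and its modes are routed to slices
    ordered by center frequency.
    """
    n_traces, n_samples = bscan.shape
    slices = np.zeros((K, n_traces, n_samples))
    for i, trace in enumerate(bscan):
        modes, freqs = vmd(trace, K)
        order = np.argsort(freqs)          # low -> high center frequency
        slices[:, i, :] = modes[order]
    return slices                          # slices[k] is the k-th IMF-slice profile
```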

  3. Combination of Empirical Mode Decomposition Components of HRV Signals for Discriminating Emotional States

    Directory of Open Access Journals (Sweden)

    Ateke Goshvarpour

    2016-06-01

    Full Text Available Introduction Automatic human emotion recognition is one of the most interesting topics in the field of affective computing. However, development of a reliable approach with a reasonable recognition rate is a challenging task. The main objective of the present study was to propose a robust method for discrimination of emotional responses thorough examination of heart rate variability (HRV. In the present study, considering the non-stationary and non-linear characteristics of HRV, empirical mode decomposition technique was utilized as a feature extraction approach. Materials and Methods In order to induce the emotional states, images indicating four emotional states, i.e., happiness, peacefulness, sadness, and fearfulness were presented. Simultaneously, HRV was recorded in 47 college students. The signals were decomposed into some intrinsic mode functions (IMFs. For each IMF and different IMF combinations, 17 standard and non-linear parameters were extracted. Wilcoxon test was conducted to assess the difference between IMF parameters in different emotional states. Afterwards, a probabilistic neural network was used to classify the features into emotional classes. Results Based on the findings, maximum classification rates were achieved when all IMFs were fed into the classifier. Under such circumstances, the proposed algorithm could discriminate the affective states with sensitivity, specificity, and correct classification rate of 99.01%, 100%, and 99.09%, respectively. In contrast, the lowest discrimination rates were attained by IMF1 frequency and its combinations. Conclusion The high performance of the present approach indicated that the proposed method is applicable for automatic emotion recognition.

  4. Automated diagnosis of focal liver lesions using bidirectional empirical mode decomposition features.

    Science.gov (United States)

    Acharya, U Rajendra; Koh, Joel En Wei; Hagiwara, Yuki; Tan, Jen Hong; Gertych, Arkadiusz; Vijayananthan, Anushya; Yaakup, Nur Adura; Abdullah, Basri Johan Jeet; Bin Mohd Fabell, Mohd Kamil; Yeong, Chai Hong

    2018-03-01

    The liver is the heaviest internal organ of the human body and performs many vital functions. Prolonged cirrhosis and fatty liver disease may lead to the formation of benign or malignant lesions in this organ, and an early and reliable evaluation of these conditions can improve treatment outcomes. Ultrasound imaging is a safe, non-invasive, and cost-effective way of diagnosing liver lesions. However, this technique has limited performance in determining the nature of the lesions. This study presents a computer-aided diagnosis (CAD) system to aid radiologists in an objective and more reliable interpretation of ultrasound images of liver lesions. In this work, we have employed the radon transform and bi-directional empirical mode decomposition (BEMD) to extract features from the focal liver lesions. The extracted features were then subjected to the particle swarm optimization (PSO) technique for the selection of a set of optimized features for classification. Our automated CAD system can differentiate normal, malignant, and benign liver lesions using machine learning algorithms. It was trained using 78 normal, 26 benign and 36 malignant focal lesions of the liver. The accuracy, sensitivity, and specificity of lesion classification were 92.95%, 90.80%, and 97.44%, respectively. The proposed CAD system is fully automatic as no segmentation of the region of interest (ROI) is required.

  5. Denoising traffic collision data using ensemble empirical mode decomposition (EEMD) and its application for constructing continuous risk profile (CRP).

    Science.gov (United States)

    Kim, Nam-Seog; Chung, Koohong; Ahn, Seongchae; Yu, Jeong Whon; Choi, Keechoo

    2014-10-01

    Filtering out the noise in traffic collision data is essential in reducing false positive rates (i.e., requiring safety investigation of sites where it is not needed) and can assist government agencies in better allocating limited resources. Previous studies have demonstrated that denoising traffic collision data is possible when there exists a true known high collision concentration location (HCCL) list to calibrate the parameters of a denoising method. However, such a list is often not readily available in practice. To this end, the present study introduces an innovative approach for denoising traffic collision data using the Ensemble Empirical Mode Decomposition (EEMD) method, which is widely used for analyzing nonlinear and nonstationary data. The present study describes how to transform the traffic collision data before the data can be decomposed using the EEMD method to obtain a set of Intrinsic Mode Functions (IMFs) and a residue. The attributes of the IMFs were then carefully examined to denoise the data and to construct Continuous Risk Profiles (CRPs). The findings from comparing the resulting CRPs with CRPs in which the noise was filtered out with two different empirically calibrated weighted moving window lengths are also documented, and the results and recommendations for future research are discussed.
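    As a rough illustration of what the EEMD step involves, the sketch below averages the EMD decompositions of many noise-perturbed copies of a series. It assumes a PyEMD-style `EMD` class (PyEMD also ships a ready-made `EEMD` class) and is not the authors' calibrated procedure; the noise amplitude and ensemble size are illustrative.

```python
import numpy as np
from PyEMD import EMD   # assumption: PyEMD-style EMD; a ready-made EEMD class also exists

def eemd(signal, noise_std=0.2, n_ensembles=100, max_imfs=8, seed=0):
    """Ensemble EMD: average the IMFs of many noise-perturbed copies of the signal."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    acc = np.zeros((max_imfs, signal.size))
    for _ in range(n_ensembles):
        noisy = signal + noise_std * signal.std() * rng.standard_normal(signal.size)
        imfs = EMD().emd(noisy)
        k = min(len(imfs), max_imfs)
        acc[:k] += imfs[:k]
    return acc / n_ensembles        # rows: averaged IMF1..IMFn
```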

  6. Analyzing the locomotory gaitprint of Caenorhabditis elegans on the basis of empirical mode decomposition.

    Directory of Open Access Journals (Sweden)

    Li-Chun Lin

    Full Text Available The locomotory gait analysis of the microswimmer, Caenorhabditis elegans, is a commonly adopted approach for strain recognition and examination of phenotypic defects. Gait is also a visible behavioral expression of worms under external stimuli. This study developed an adaptive data analysis method based on empirical mode decomposition (EMD to reveal the biological cues behind intricate motion. The method was used to classify the strains of worms according to their gaitprints (i.e., phenotypic traits of locomotion. First, a norm of the locomotory pattern was created from the worm of interest. The body curvature of the worm was decomposed into four intrinsic mode functions (IMFs. A radar chart showing correlations between the predefined database and measured worm was then obtained by dividing each IMF into three parts, namely, head, mid-body, and tail. A comprehensive resemblance score was estimated after k-means clustering. Simulated data that use sinusoidal waves were generated to assess the feasibility of the algorithm. Results suggested that temporal frequency is the major factor in the process. In practice, five worm strains, including wild-type N2, TJ356 (zIs356, CL2070 (dvIs70, CB0061 (dpy-5, and CL2120 (dvIs14, were investigated. The overall classification accuracy of the gaitprint analyses of all the strains reached nearly 89%. The method can also be extended to classify some motor neuron-related locomotory defects of C. elegans in the same fashion.

  7. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
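    The preconditioning idea, i.e. projecting the snapshots onto leading POD modes before forming the reduced DMD operator, can be sketched as below. A one-shot SVD is used in place of the incremental POD of the letter, so this illustrates the algebra rather than the memory-saving streaming variant; the rank r is an illustrative choice.

```python
import numpy as np

def pod_preconditioned_dmd(X, r=20):
    """DMD on POD coordinates: project snapshots onto the leading r POD modes, then do exact DMD.

    X: snapshot matrix (n_dof x n_snapshots). A streaming/incremental POD could replace
    the one-shot SVD below for datasets that do not fit in memory.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)   # POD of the first snapshot set
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    Atilde = Ur.T @ X2 @ Vr / sr                        # reduced linear operator
    eigvals, W = np.linalg.eig(Atilde)                  # DMD eigenvalues
    modes = X2 @ Vr / sr @ W                            # exact DMD modes in full space
    return eigvals, modes
```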

  8. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    Science.gov (United States)

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
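    To make the jitter-removal step concrete, here is a minimal constant-velocity Kalman filter for one cursor axis. The state model and the noise values q and r are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def kalman_smooth(positions, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter used as a jitter remover for one cursor axis."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([positions[0], 0.0])
    P = np.eye(2)
    out = []
    for z in positions:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()              # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```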

  9. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    Directory of Open Access Journals (Sweden)

    David Camarena-Martinez

    2014-01-01

    Full Text Available Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; in addition, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the implementation of the overall methodology into a field-programmable gate array (FPGA) allows online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; in addition, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.
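    The core EMD sifting loop that such a digital structure implements can be sketched in software as follows. This is a simplified reference version using SciPy cubic splines (fixed sifting count, no boundary treatment), not the FPGA architecture of the paper.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the cubic-spline upper/lower extrema envelopes."""
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                                  # residue is (near-)monotonic, stop
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - (upper + lower) / 2.0

def emd(x, max_imfs=6, n_sifts=10):
    """Minimal EMD: repeatedly sift out IMFs until the residue carries no oscillation."""
    x = np.asarray(x, dtype=float)
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        h = residue
        for _ in range(n_sifts):                     # fixed number of sifting iterations
            h_new = sift_once(h)
            if h_new is None:
                return np.array(imfs), residue
            h = h_new
        imfs.append(h)
        residue = residue - h
    return np.array(imfs), residue
```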

  10. Exercise muscle fatigue detection system implementation via wireless surface electromyography and empirical mode decomposition.

    Science.gov (United States)

    Chang, Kang-Ming; Liu, Shing-Hong; Wang, Jia-Jung; Cheng, Da-Chuan

    2013-01-01

    Surface electromyography (sEMG) is an important measurement for monitoring exercise and fitness. A wireless Bluetooth sEMG measurement system with a sampling frequency of 2 kHz is developed. Traditionally, muscle fatigue is detected from the median frequency of the sEMG power spectrum: the slope of a linear regression of the median frequency is an important muscle fatigue index, and as fatigue increases, the power spectrum of the sEMG shifts toward lower frequencies. The goal of this study is to evaluate the sensitivity of empirical mode decomposition (EMD) in quantifying the electrical manifestations of local muscle fatigue during exercise in healthy people. We also compared this method with the raw data and the discrete wavelet transform (DWT). Five male and five female volunteers participated. Each subject was asked to run on a multifunctional pedaled elliptical trainer for about 30 minutes, twice a week, for a total of six recordings per subject with the wireless EMG recording system. The results show that the sensitivity of the highest-frequency component of EMD is better than that of the highest-frequency component of DWT and of the raw data.
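    The classical fatigue index mentioned above, the regression slope of the median frequency over time, is easy to reproduce. The sketch below uses a Welch PSD and a per-window median frequency; the window length and PSD settings are chosen for illustration only.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs=2000.0):
    """Median frequency of the sEMG power spectrum (frequency splitting spectral power in half)."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

def fatigue_slope(emg, fs=2000.0, win_s=1.0):
    """Linear-regression slope of the median frequency over consecutive windows; a negative
    slope (spectrum shifting to lower frequencies) indicates increasing muscle fatigue."""
    win = int(win_s * fs)
    mdf = [median_frequency(emg[i:i + win], fs)
           for i in range(0, len(emg) - win + 1, win)]
    t = np.arange(len(mdf)) * win_s
    slope, _ = np.polyfit(t, mdf, 1)
    return slope, np.array(mdf)
```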

  11. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    Science.gov (United States)

    Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; in addition, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed-forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the implementation of the overall methodology into a field-programmable gate array (FPGA) allows online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; in addition, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281

  12. Fluorescence Intrinsic Characterization of Excitation-Emission Matrix Using Multi-Dimensional Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Tzu-Chien Hsiao

    2013-11-01

    Full Text Available Excitation-emission matrix (EEM fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes.

  13. Identifying the oil price-macroeconomy relationship. An empirical mode decomposition analysis of US data

    International Nuclear Information System (INIS)

    Oladosu, Gbadebo

    2009-01-01

    This paper employs the empirical mode decomposition (EMD) method to filter cyclical components of US quarterly gross domestic product (GDP) and quarterly average oil price (West Texas Intermediate - WTI). The method is adaptive and applicable to non-linear and non-stationary data. A correlation analysis of the resulting components is performed and examined for insights into the relationship between oil and the economy. Several components of this relationship are identified. However, the principal one is that the medium-run component of the oil price has a negative relationship with the main cyclical component of the GDP. In addition, weak correlations suggesting a lagging, demand-driven component and a long-run component of the relationship were also identified. Comparisons of these findings with significant oil supply disruption and recession dates were supportive. The study identifies a number of lessons applicable to recent oil market events, including the eventuality of persistent oil price and economic decline following a long oil price run-up. In addition, it was found that oil market related exogenous events are associated with short- to medium-run price implications regardless of whether they lead to actual supply losses. (author)

  14. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component obtains a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect information.

  15. Detection of the ice assertion on aircraft using empirical mode decomposition enhanced by multi-objective optimization

    Science.gov (United States)

    Bagherzadeh, Seyed Amin; Asadi, Davood

    2017-05-01

    In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in the icing condition, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified by some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in the icing condition in order to detect the ice assertion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.

  16. Detrending with Empirical Mode Decomposition (DEMD): Theory, Evaluation, and Application

    Science.gov (United States)

    Bolch, Michael Adam

    Land-surface heterogeneity (LSH) at different scales has significant influence on atmospheric boundary layer (ABL) buoyant and shear turbulence generation and transfers of water, carbon and heat. The extent of proliferation of this influence into larger-scale circulations and atmospheric structures is a topic continually investigated in experimental and numerical studies, in many cases with the hopes of improving land-atmosphere parameterizations for modeling purposes. The blending height is a potential metric for the vertical propagation of LSH effects into the ABL, and has been the subject of study for several decades. Proper assessment of the efficacy of blending height theory invites the combination of observations throughout ABLs above different LSH scales with model simulations of the observed ABL and LSH conditions. The central goal of this project is to develop an apt and thoroughly scrutinized method for procuring ABL observations that are accurately detrended and justifiably relevant for such a study, referred to here as Detrending with Empirical Mode Decomposition (DEMD). The Duke University helicopter observation platform (HOP) provides ABL data [wind (u, v, and w), temperature (T), moisture (q), and carbon dioxide (CO2)] at a wide range of altitudes, especially in the lower ABL, where LSH effects are most prominent, and where other aircraft-based platforms cannot fly. Also, lower airspeeds translate to higher resolution of the scalars and fluxes needed to evaluate blending height theory. To confirm noninterference of the main rotor downwash with the HOP sensors, and also to identify optimal airspeeds, analytical, numerical, and observational studies are presented. Analytical analysis clears the main rotor downwash from the HOP nose at airspeeds above 10 m s-1. Numerical models find an acceptable range from 20-40 m s-1, due to a growing region of compressed air preceding the HOP nose. The first observational study finds no impact of different HOP airspeeds on

  17. Discrimination between Newly Formed and Aged Thrombi Using Empirical Mode Decomposition of Ultrasound B-Scan Image

    Directory of Open Access Journals (Sweden)

    Jui Fang

    2015-01-01

    Full Text Available Ultrasound imaging is a first-line diagnostic method for screening the thrombus. During thrombus aging, the proportion of red blood cells (RBCs in the thrombus decreases and therefore the signal intensity of B-scan can be used to detect the thrombus age. To avoid the effect of system gain on the measurements, this study proposed using the empirical mode decomposition (EMD of ultrasound image as a strategy to classify newly formed and aged thrombi. Porcine blood samples were used for the in vitro induction of fresh and aged thrombi (at hematocrits of 40%. Each thrombus was imaged using an ultrasound scanner at different gains (15, 20, and 30 dB. Then, EMD of ultrasound signals was performed to obtain the first and second intrinsic mode functions (IMFs, which were further used to calculate the IMF-based echogenicity ratio (IER. The results showed that the performance of using signal amplitude of B-scan to reflect the thrombus age depends on gain. However, the IER is less affected by the gain in discriminating between fresh and aged thrombi. In the future, ultrasound B-scan combined with the EMD may be used to identify the thrombus age for the establishment of thrombolytic treatment planning.
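    The abstract does not spell out how the IMF-based echogenicity ratio is formed, so the sketch below shows one plausible construction, the ratio of mean Hilbert-envelope amplitudes of the first two IMFs, which has the gain-cancelling property the study relies on; treat the exact definition as an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def imf_echogenicity_ratio(imf1, imf2):
    """A plausible IER: ratio of mean Hilbert-envelope amplitudes of the first two IMFs.

    Because any common system-gain factor multiplies both IMFs, it cancels in the ratio,
    which is the property the abstract exploits to reduce gain dependence.
    """
    e1 = np.abs(hilbert(imf1)).mean()
    e2 = np.abs(hilbert(imf2)).mean()
    return e1 / e2
```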

  18. Single-Trial Classification of Bistable Perception by Integrating Empirical Mode Decomposition, Clustering, and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Hualou Liang

    2008-04-01

    Full Text Available We propose an empirical mode decomposition (EMD- based method to extract features from the multichannel recordings of local field potential (LFP, collected from the middle temporal (MT visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM perception. The feature extraction approach consists of three stages. First, we employ EMD to decompose nonstationary single-trial time series into narrowband components called intrinsic mode functions (IMFs with time scales dependent on the data. Second, we adopt unsupervised K-means clustering to group the IMFs and residues into several clusters across all trials and channels. Third, we use the supervised common spatial patterns (CSP approach to design spatial filters for the clustered spatiotemporal signals. We exploit the support vector machine (SVM classifier on the extracted features to decode the reported perception on a single-trial basis. We demonstrate that the CSP feature of the cluster in the gamma frequency band outperforms the features in other frequency bands and leads to the best decoding performance. We also show that the EMD-based feature extraction can be useful for evoked potential estimation. Our proposed feature extraction approach may have potential for many applications involving nonstationary multivariable time series such as brain-computer interfaces (BCI.

  19. Removal of artifacts in knee joint vibroarthrographic signals using ensemble empirical mode decomposition and detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Wu, Yunfeng; Yang, Shanshan; Zheng, Fang; Cai, Suxian; Lu, Meng; Wu, Meihong

    2014-01-01

    High-resolution knee joint vibroarthrographic (VAG) signals can help physicians accurately evaluate the pathological condition of a degenerative knee joint, in order to prevent unnecessary exploratory surgery. Artifact cancellation is vital to preserve the quality of VAG signals prior to further computer-aided analysis. This paper describes a novel method that effectively utilizes ensemble empirical mode decomposition (EEMD) and detrended fluctuation analysis (DFA) algorithms for the removal of baseline wander and white noise in VAG signal processing. The EEMD method first successively decomposes the raw VAG signal into a set of intrinsic mode functions (IMFs) with fast and low oscillations, until the monotonic baseline wander remains in the last residue. Then, the DFA algorithm is applied to compute the fractal scaling index parameter for each IMF, in order to identify the anti-correlation and the long-range correlation components. Next, the DFA algorithm can be used to identify the anti-correlated and the long-range correlated IMFs, which assists in reconstructing the artifact-reduced VAG signals. Our experimental results showed that the combination of EEMD and DFA algorithms was able to provide averaged signal-to-noise ratio (SNR) values of 20.52 dB (standard deviation: 1.14 dB) and 20.87 dB (standard deviation: 1.89 dB) for 45 normal signals in healthy subjects and 20 pathological signals in symptomatic patients, respectively. The combination of EEMD and DFA algorithms can ameliorate the quality of VAG signals with great SNR improvements over the raw signal, and the results were also superior to those achieved by wavelet matching pursuit decomposition and time-delay neural filter. (paper)
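    A compact version of the DFA step used to separate anti-correlated (noise-like) IMFs from long-range correlated ones is sketched below; the scale list and first-order detrending are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis scaling exponent alpha of a 1-D signal.

    alpha < 0.5 suggests an anti-correlated (noise-like) IMF, alpha > 0.5 a long-range
    correlated one; the artifact-reduced signal is rebuilt from the correlated IMFs.
    """
    y = np.cumsum(x - np.mean(x))                      # integrated profile
    fluct = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha
```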

  20. Gyroscope-Driven Mouse Pointer with an EMOTIV® EEG Headset and Data Analysis Based on Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Carlos Reyes-Garcia

    2013-08-01

    Full Text Available This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.

  1. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    Science.gov (United States)

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    Respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach of RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, the estimation of the respiratory muscle effort through RIP signal was proposed. A complementary ensemble empirical mode decomposition method was used, to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment to collect subjects' RIP signal under thoracic breathing (TB) and abdominal breathing (AB) was conducted. The experimental results for both the TB and AB indicate that the proposed method can be used to loosely estimate the activities of thoracic muscles, abdominal muscles, and diaphragm.

  2. Improved Prediction of Preterm Delivery Using Empirical Mode Decomposition Analysis of Uterine Electromyography Signals.

    Directory of Open Access Journals (Sweden)

    Peng Ren

    Full Text Available Preterm delivery increases the risk of infant mortality and morbidity, and therefore developing reliable methods for predicting its likelihood is of great importance. Previous work using uterine electromyography (EMG) recordings has shown that they may provide a promising and objective way for predicting risk of preterm delivery. However, to date, attempts at utilizing computational approaches to achieve sufficient predictive confidence, in terms of area under the curve (AUC) values, have not reached the high discrimination accuracy that a clinical application requires. In our study, we propose a new analytical approach for assessing the risk of preterm delivery using EMG recordings which firstly employs Empirical Mode Decomposition (EMD) to obtain their Intrinsic Mode Functions (IMFs). Next, the entropy values of both the instantaneous amplitude and the instantaneous frequency of the first ten IMF components are computed in order to derive ratios of these two distinct components as features. Discrimination accuracy of this approach compared to those proposed previously was then calculated using six different representative classifiers. Finally, three different electrode positions were analyzed for their prediction accuracy of preterm delivery in order to establish which uterine EMG recording location provided the optimal signal data. Overall, our results show a clear improvement in prediction accuracy of preterm delivery risk compared with previous approaches, achieving an impressive maximum AUC value of 0.986 when using signals from an electrode positioned below the navel. In sum, this provides a promising new method for analyzing uterine EMG signals to permit accurate clinical assessment of preterm delivery risk.
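    The feature construction described here, entropies of the instantaneous amplitude and instantaneous frequency of each IMF combined as a ratio, can be sketched as follows. The histogram-based entropy estimator and bin count are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
from scipy.signal import hilbert

def shannon_entropy(values, bins=64):
    """Shannon entropy of a histogram-estimated distribution."""
    p, _ = np.histogram(values, bins=bins, density=True)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def imf_entropy_ratio(imf, fs):
    """Ratio of instantaneous-amplitude entropy to instantaneous-frequency entropy of one IMF."""
    analytic = hilbert(imf)
    amp = np.abs(analytic)                              # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)       # instantaneous frequency
    return shannon_entropy(amp) / shannon_entropy(inst_freq)
```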

  3. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of power generation plan, power grid dispatching, power grid operation and power supply reliability of power system. Therefore, it is of great significance to construct a suitable model to realize the accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm, is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN algorithm to decompose the daily peak load sequence into multiple sub sequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM is adopted to forecast the sub sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN can realize noise reduction for non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing the population dynamic evolution operator and the nonlinear convergence factor to enhance the global search ability and avoid falling into the local optimum, which can better optimize the parameters of the SVM algorithm for improving the forecasting accuracy of daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm, MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm, GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm, SVM (Support Vector

  4. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD and EWT have also been conducted to evaluate the performance. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately without modulation even when the signal frequencies are relatively close.

  5. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    Science.gov (United States)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to first decompose the chaotic signals and construct multidimensional input vectors, based on EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adaptive denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, the IMFs are recombined into the denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated with Gaussian noise of different levels and on the observed monthly sunspot series. The results show that the proposed method is effective in denoising chaotic signals. Moreover, it can effectively correct the center point in phase space, bringing it closer to the real track of the chaotic attractor.

  6. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  7. Combination of canonical correlation analysis and empirical mode decomposition applied to denoising the labor electrohysterogram.

    Science.gov (United States)

    Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2011-09-01

    The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest, rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA, and then the largest part of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared CCA_EMD with the corresponding bipolar signals to demonstrate that the processing does not degrade the resulting signals. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.

  8. Rolling Element Bearing Performance Degradation Assessment Using Variational Mode Decomposition and Gath-Geva Clustering Time Series Segmentation

    Directory of Open Access Journals (Sweden)

    Yaolong Li

    2017-01-01

    Full Text Available By focusing on the issue of rolling element bearing (REB performance degradation assessment (PDA, a solution based on variational mode decomposition (VMD and Gath-Geva clustering time series segmentation (GGCTSS has been proposed. VMD is a new decomposition method. Since it is different from the recursive decomposition method, for example, empirical mode decomposition (EMD, local mean decomposition (LMD, and local characteristic-scale decomposition (LCD, VMD needs a priori parameters. In this paper, we will propose a method to optimize the parameters in VMD, namely, the number of decomposition modes and moderate bandwidth constraint, based on genetic algorithm. Executing VMD with the acquired parameters, the BLIMFs are obtained. By taking the envelope of the BLIMFs, the sensitive BLIMFs are selected. And then we take the amplitude of the defect frequency (ADF as a degradative feature. To get the performance degradation assessment, we are going to use the method called Gath-Geva clustering time series segmentation. Afterwards, the method is carried out by two pieces of run-to-failure data. The results indicate that the extracted feature could depict the process of degradation precisely.

  9. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  10. Automatic screening of obstructive sleep apnea from the ECG based on empirical mode decomposition and wavelet analysis

    International Nuclear Information System (INIS)

    Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T

    2010-01-01

    This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep time based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as the heart rate variability (HRV) and the QRS peak area, contain information suitable for making a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet have been included in this analysis, subdivided into a training set and a testing set. We investigated the possibility of using the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with the ones obtained through the well-established wavelet analysis (WA). By these decomposition techniques, several features have been extracted from the ECG signal and complemented with a series of standard HRV time domain measures. The best performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or nonapneic with different best-subset sizes, obtaining an accuracy up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from standard HRV time domain, EMD and WA together in order to investigate if the two decomposition techniques could provide complementary features. The obtained accuracy was 89%, similar to that achieved using only wavelet analysis as the feature extractor; however, some complementary features in EMD and WA are evident.

  11. Temporal structure of neuronal population oscillations with empirical mode decomposition

    International Nuclear Information System (INIS)

    Li Xiaoli

    2006-01-01

    Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of brain disorders. This Letter presents a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) is obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo; the results show that the neuronal oscillations exhibit different time-frequency structures during the pre-ictal, seizure-onset, and ictal periods of the epileptic EEG in different frequency bands. This new method provides a useful view of the temporal structure of neuronal oscillations.

  12. Empirical Mode Decomposition of Geophysical Well-log Data of Bombay Offshore Basin, Mumbai, India

    Science.gov (United States)

    Siddharth Gairola, Gaurav; Chandrasekhar, Enamundram

    2016-04-01

    Geophysical well-log data manifest the nonlinear behaviour of the respective physical properties of the heterogeneous subsurface layers as a function of depth. Therefore, nonlinear data analysis techniques must be implemented to quantify the degree of heterogeneity in the subsurface lithologies. One such nonlinear, data-adaptive technique is the empirical mode decomposition (EMD) technique, which decomposes the data into oscillatory signals of different wavelengths called intrinsic mode functions (IMFs). In the present study, EMD has been applied to the gamma-ray and neutron porosity logs of two different wells, Well B and Well C, located in the western offshore basin of India, to perform heterogeneity analysis and compare the results with those obtained from multifractal studies of the same data sets. By establishing a relationship between the IMF number (m) and the mean wavelength associated with each IMF (I_m), a heterogeneity index (ρ) associated with subsurface layers can be determined using the relation I_m = kρ^m, where k is a constant. The ρ values bear an inverse relation with the heterogeneity of the subsurface: smaller ρ values designate higher heterogeneity and vice versa. The ρ values estimated for different limestone payzones identified in the wells clearly show that Well C has a higher degree of heterogeneity than Well B. This correlates well with the estimated Vshale values for the limestone reservoir zone, showing higher shale content in Well C than in Well B. The ρ values determined for different payzones of both wells will be used to quantify the degree of heterogeneity in different wells. The multifractal behaviour of each IMF of both logs of both wells will be compared with one another and discussed on the lines of their heterogeneity indices.
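    The relation I_m = kρ^m is a straight line in (m, log I_m), so ρ can be recovered from the slope of a log-linear fit. The sketch below estimates each IMF's mean wavelength from its zero-crossing count, which is one simple proxy; the exact wavelength estimator used in the study may differ.

```python
import numpy as np

def mean_wavelength(imf, dz):
    """Mean wavelength of an IMF from its zero-crossing count and depth sampling interval dz."""
    crossings = np.sum(np.signbit(imf[:-1]) != np.signbit(imf[1:]))
    return 2.0 * len(imf) * dz / max(crossings, 1)     # two crossings per full cycle

def heterogeneity_index(imfs, dz):
    """Fit I_m = k * rho**m in log space; smaller rho indicates a more heterogeneous interval."""
    m = np.arange(1, len(imfs) + 1)
    I = np.array([mean_wavelength(imf, dz) for imf in imfs])
    slope, _ = np.polyfit(m, np.log(I), 1)
    return np.exp(slope)                                # rho
```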

  13. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, to overcome the disadvantages of traditional detectors, namely their large size, contact measurement, and low identification rate. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. End-point continuation based on previously stored data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlations of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and the PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
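    A compact sketch of the IMF screening and feature extraction described above is given below. Thresholding the correlations at their mean value is a plausible reading of the "average correlation coefficient" criterion rather than the paper's exact rule, and the feature layout is illustrative.

```python
import numpy as np

def select_imfs(imfs):
    """Keep IMFs whose correlation with the first IMF exceeds the average correlation coefficient."""
    corr = np.array([abs(np.corrcoef(imfs[0], imf)[0, 1]) for imf in imfs])
    return [imf for imf, c in zip(imfs, corr) if c >= corr.mean()]

def energy_std_features(imfs):
    """Energy and standard deviation of each retained IMF, concatenated as the feature vector."""
    return np.concatenate([[np.sum(imf ** 2), np.std(imf)] for imf in imfs])

# Usage sketch: features = energy_std_features(select_imfs(imfs)), then feed to a PNN classifier.
```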

  14. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity problems or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between the ECG and BP on the cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the properties of arterial stiffness and systemic resistance for a pig's cardiovascular system via ECG and blood pressure (BP). The experiment produced a sequence of continuous blood pressure changes, rising from a steady condition to high blood pressure by clamping the artery and returning by relaxing it. The hypothesis was that arterial stiffness and systemic resistance should vary with the blood pressure as the artery is clamped and relaxed. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on the cardiac oscillation. The two assessment results demonstrate the merits of EEMD for signal analysis.

  15. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Xike Zhang

    2018-05-01

    Full Text Available Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other models.

  16. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.

  17. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
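
    A minimal sketch of the ICA + EEMD idea described above, assuming scikit-learn, scipy and the PyEMD package are available: unmix the channels, keep only IMFs whose mean instantaneous frequency stays inside an assumed gastric slow-wave band, and remix. The band limits and all names are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import hilbert
from PyEMD import EEMD   # assumed API: EEMD().eemd(signal)

def mean_inst_freq(x, fs):
    """Mean instantaneous frequency (Hz) of a component via the Hilbert transform."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.mean(np.diff(phase)) * fs / (2.0 * np.pi)

def denoise_egg(data, fs, band=(0.03, 0.15)):
    """data: (n_samples, n_channels) EGG. Returns an artifact-reduced copy."""
    ica = FastICA(n_components=data.shape[1], random_state=0)
    sources = ica.fit_transform(data)              # (n_samples, n_components)
    cleaned = np.zeros_like(sources)
    for k in range(sources.shape[1]):
        imfs = EEMD().eemd(sources[:, k])
        keep = [imf for imf in imfs
                if band[0] <= mean_inst_freq(imf, fs) <= band[1]]
        if keep:
            cleaned[:, k] = np.sum(keep, axis=0)
    # Remix the cleaned sources back into sensor space.
    return cleaned @ ica.mixing_.T + ica.mean_
```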

  18. Assessment of autonomic nervous system by using empirical mode decomposition-based reflection wave analysis during non-stationary conditions

    International Nuclear Information System (INIS)

    Chang, C C; Hsiao, T C; Kao, S C; Hsu, H Y

    2014-01-01

    Arterial blood pressure (ABP) is an important indicator of cardiovascular circulation and presents various intrinsic regulations. It has been found that the intrinsic characteristics of blood vessels can be assessed quantitatively by ABP analysis (called reflection wave analysis (RWA)), but conventional RWA is insufficient for assessment during non-stationary conditions, such as the Valsalva maneuver. Recently, a novel adaptive method called empirical mode decomposition (EMD) was proposed for non-stationary data analysis. This study proposed a RWA algorithm based on EMD (EMD-RWA). A total of 51 subjects participated in this study, including 39 healthy subjects and 12 patients with autonomic nervous system (ANS) dysfunction. The results showed that EMD-RWA provided a reliable estimation of reflection time in baseline and head-up tilt (HUT). Moreover, the estimated reflection time is able to assess the ANS function non-invasively, both in normal, healthy subjects and in the patients with ANS dysfunction. EMD-RWA provides a new approach for reflection time estimation in non-stationary conditions, and also helps with non-invasive ANS assessment. (paper)

  19. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient.

    Science.gov (United States)

    Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing

    2017-12-26

    As the sound signal of ships obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of ships, research into denoising algorithms and their applications is of great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and different numbers of VMD decompositions. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.
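
    A sketch of one decompose-screen-reconstruct pass of the VMD plus correlation-coefficient scheme described above, applied twice for the secondary (2VMD) variant. The vmdpy package and its call signature are assumptions, and K, alpha and the CC threshold are illustrative values rather than the paper's settings.

```python
import numpy as np
from vmdpy import VMD   # assumed: u, u_hat, omega = VMD(f, alpha, tau, K, DC, init, tol)

def vmd_cc_denoise(signal, K=8, alpha=2000, cc_threshold=0.2):
    """Decompose with VMD, drop IMFs weakly correlated with the signal, reconstruct."""
    u, _, _ = VMD(signal, alpha, 0.0, K, 0, 1, 1e-7)       # u: (K, n_samples)
    keep = []
    for imf in u:
        cc = np.corrcoef(imf, signal[: imf.shape[0]])[0, 1]
        if abs(cc) >= cc_threshold:
            keep.append(imf)
    return np.sum(keep, axis=0) if keep else np.asarray(signal, dtype=float).copy()

def secondary_vmd_denoise(signal, **kwargs):
    """Two successive decompose-screen-reconstruct passes (2VMD-CC)."""
    once = vmd_cc_denoise(np.asarray(signal, dtype=float), **kwargs)
    return vmd_cc_denoise(once, **kwargs)
```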

  20. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Yuxing Li

    2017-12-01

    Full Text Available As the sound signal of ships obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of ships, research into denoising algorithms and their applications is of great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and different numbers of VMD decompositions. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.

  1. Analysis of the Nonlinear Trends and Non-Stationary Oscillations of Regional Precipitation in Xinjiang, Northwestern China, Using Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2016-03-01

    Full Text Available Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960–2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual scale (quasi-2 and quasi-6 years) and inter-decadal scale (quasi-12 and quasi-23 years). Moreover, the 2–3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements.

  2. Analysis of the Nonlinear Trends and Non-Stationary Oscillations of Regional Precipitation in Xinjiang, Northwestern China, Using Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Guo, Bin; Chen, Zhongsheng; Guo, Jinyun; Liu, Feng; Chen, Chuanfa; Liu, Kangli

    2016-03-21

    Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960-2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east and the entire Xinjiang. Changes in precipitation in Xinjiang exhibited significant inter-annual scale (quasi-2 and quasi-6 years) and inter-decadal scale (quasi-12 and quasi-23 years). Moreover, the 2-3-year quasi-periodic fluctuation was dominant in regional precipitation and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found that there were distinctive spatial differences in variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements.
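
    A small sketch of the scale analysis described above, assuming the PyEMD package: decompose a precipitation series with EEMD and report an approximate quasi-period for each IMF from its zero-crossing count. The demo series is random placeholder data, not the Xinjiang records.

```python
import numpy as np
from PyEMD import EEMD   # assumed API: EEMD().eemd(signal)

def imf_quasi_periods(series, dt_years=1.0):
    """Return (IMF index, approximate mean period in years) for each EEMD component."""
    imfs = EEMD().eemd(np.asarray(series, dtype=float))
    periods = []
    for k, imf in enumerate(imfs, start=1):
        crossings = np.sum(np.signbit(imf[:-1]) != np.signbit(imf[1:]))
        period = 2.0 * len(imf) * dt_years / crossings if crossings else np.inf
        periods.append((k, period))
    return periods

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.normal(size=53) + np.sin(2 * np.pi * np.arange(53) / 11.0)  # annual data
    for k, p in imf_quasi_periods(demo):
        print(f"IMF{k}: quasi-period ~ {p:.1f} years")
```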

  3. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis of the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. Also, a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, using the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting designed to analyze the performance guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in a noisy environment enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. EEG artifacts reduction by multivariate empirical mode decomposition and multiscale entropy for monitoring depth of anaesthesia during surgery.

    Science.gov (United States)

    Liu, Quan; Chen, Yi-Feng; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2017-08-01

    Electroencephalography (EEG) has been widely utilized to measure the depth of anaesthesia (DOA) during operation. However, the EEG signals are usually contaminated by artifacts which have a consequence on the measured DOA accuracy. In this study, an effective and useful filtering algorithm based on multivariate empirical mode decomposition and multiscale entropy (MSE) is proposed to measure DOA. The mean entropy of the MSE is used as an index to find artifact-free intrinsic mode functions. The effect of different levels of artifacts on the performance of the proposed filtering is analysed using simulated data. Furthermore, 21 patients' EEG signals are collected and analysed using sample entropy to calculate the complexity for monitoring DOA. The correlation coefficients of entropy and bispectral index (BIS) results show 0.14 ± 0.30 and 0.63 ± 0.09 before and after filtering, respectively. An artificial neural network (ANN) model is used for range mapping in order to correlate the measurements with BIS. The ANN method results show a strong correlation coefficient (0.75 ± 0.08). The results in this paper verify that entropy values and BIS have a strong correlation for the purpose of DOA monitoring and the proposed filtering method can effectively filter artifacts from EEG signals. The proposed method performs better than the commonly used wavelet denoising method. This study provides a fully adaptive and automated filter for EEG to measure DOA more accurately and thus reduce risk related to maintenance of anaesthetic agents.
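
    A compact O(N²) sample entropy routine of the kind used above to rate the regularity of individual intrinsic mode functions; the artifact-screening threshold is only indicative, and the multivariate EMD front end is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev distance
            count += np.sum(d <= r) - 1                             # exclude self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def artifact_free(imfs, low=0.2, high=1.5):
    """Keep IMFs whose sample entropy falls inside an assumed 'physiological' band."""
    return [imf for imf in imfs if low <= sample_entropy(imf) <= high]
```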

  5. Recognizing of stereotypic patterns in epileptic EEG using empirical modes and wavelets

    Science.gov (United States)

    Grubov, V. V.; Sitnikova, E.; Pavlov, A. N.; Koronovskii, A. A.; Hramov, A. E.

    2017-11-01

    Epileptic activity in the form of spike-wave discharges (SWD) appears in the electroencephalogram (EEG) during absence seizures. This paper evaluates two approaches for detecting stereotypic rhythmic activities in EEG, i.e., the continuous wavelet transform (CWT) and the empirical mode decomposition (EMD). The CWT is a well-known method of time-frequency analysis of EEG, whereas EMD is a relatively novel approach for extracting a signal's waveforms. A new method for pattern recognition based on a combination of CWT and EMD is proposed. It was found that this combined approach resulted in a sensitivity of 86.5% and a specificity of 92.9% for sleep spindles and 97.6% and 93.2% for SWD, respectively. Considering the strong within- and between-subjects variability of sleep spindles, the obtained efficiency in their detection was high in comparison with other methods based on CWT. It is concluded that the combination of a wavelet-based approach and empirical modes increases the quality of automatic detection of stereotypic patterns in rat EEG.

  6. A Hybrid Forecasting Model Based on Empirical Mode Decomposition and the Cuckoo Search Algorithm: A Case Study for Power Load

    Directory of Open Access Journals (Sweden)

    Jiani Heng

    2016-01-01

    Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated in forecasting that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach, comprising Empirical Mode Decomposition, CSA (Cuckoo Search Algorithm), and WNN (Wavelet Neural Network), is proposed. This approach constructs a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load in New South Wales of Australia is used as a case study in this paper. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load and can be an effective tool in planning and dispatch for smart grids.

  7. Extraction Method of Driver’s Mental Component Based on Empirical Mode Decomposition and Approximate Entropy Statistic Characteristic in Vehicle Running State

    Directory of Open Access Journals (Sweden)

    Shuan-Feng Zhao

    2017-01-01

    Full Text Available In driver fatigue monitoring technology, the essence is to capture and analyze driver behavior information, such as eye, face, heart, and EEG activity during driving. However, ECG and EEG monitoring are limited by the electrodes that must be installed and are not commercially available. The most common fatigue detection method is the analysis of driver behavior, that is, to determine whether the driver is tired by recording and analyzing the behavior characteristics of the steering wheel and brake. The driver usually adjusts his or her actions based on the observed road conditions. Obviously, the road path information is directly contained in the vehicle driving state; if the driver's driving behavior is to be judged from vehicle driving state information, the first task is to remove the road information from the vehicle driving state data. Therefore, this paper proposes an effective intrinsic mode function selection method based on the approximate entropy of empirical mode decomposition, considering the characteristics of the frequency distribution of road and vehicle information and the unsteady and nonlinear characteristics of the driver closed-loop driving system in vehicle driving state data. The objective is to extract the effective component of the driving behavior information and to weaken the road information component. Finally, the effectiveness of the proposed method is verified by simulated driving experiments.

  8. Dynamic mode decomposition for plasma diagnostics and validation

    Science.gov (United States)

    Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.

    2018-05-01

    We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
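
    A numpy-only sketch of the exact DMD algorithm referred to above: fit a best-fit linear propagator between successive snapshots via a truncated SVD and return its eigenvalues, spatial modes and amplitudes. The rank r is an illustrative choice, not the value used for the HIT-SI data.

```python
import numpy as np

def dmd(snapshots, r=3):
    """snapshots: (n_space, n_time). Returns (eigenvalues, modes, amplitudes)."""
    X = snapshots[:, :-1]
    Y = snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r].conj().T
    # Low-rank representation of the propagator A such that Y ~ A X.
    A_tilde = U_r.conj().T @ Y @ V_r / s_r
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V_r / s_r) @ W                  # exact DMD modes
    amps = np.linalg.lstsq(modes, snapshots[:, 0].astype(complex), rcond=None)[0]
    return eigvals, modes, amps
```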

  9. Empirical mode decomposition and long-range correlation analysis of sunspot time series

    International Nuclear Information System (INIS)

    Zhou, Yu; Leung, Yee

    2010-01-01

    Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However many cycles of solar activities can be reflected by the sunspot time series. The 11-year cycle is perhaps the most famous cycle of the sunspot activity. These cycles pose problems for the investigation of the scaling behavior of sunspot time series. Using different methods to handle the 11-year cycle generally creates totally different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem in the present paper, empirical mode decomposition (EMD), a data-driven adaptive method, is applied to first extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. And on removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28. The three cycle-removing methods—Fourier truncation, adaptive detrending and the

  10. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach to monitoring tasks, such as applications for the detection of abrupt changes.

  11. Quasi-bivariate variational mode decomposition as a tool of scale analysis in wall-bounded turbulence

    Science.gov (United States)

    Wang, Wenkang; Pan, Chong; Wang, Jinjun

    2018-01-01

    The identification and separation of multi-scale coherent structures is a critical task for the study of scale interaction in wall-bounded turbulence. Here, we propose a quasi-bivariate variational mode decomposition (QB-VMD) method to extract structures with various scales from an instantaneous two-dimensional (2D) velocity field which has only one primary dimension. This method is developed from the one-dimensional VMD algorithm proposed by Dragomiretskiy and Zosso (IEEE Trans Signal Process 62:531-544, 2014) to cope with a quasi-2D scenario. It poses the feature of length-scale bandwidth constraint along the decomposed dimension, together with the central frequency re-balancing along the non-decomposed dimension. The feasibility of this method is tested on both a synthetic flow field and a turbulent boundary layer at moderate Reynolds number (Re_τ = 3458) measured by 2D particle image velocimetry (PIV). Some other popular scale separation tools, including pseudo-bi-dimensional empirical mode decomposition (PB-EMD), bi-dimensional EMD (B-EMD) and proper orthogonal decomposition (POD), are also tested for comparison. Among all these methods, QB-VMD shows advantages in both scale characterization and energy recovery. More importantly, the mode mixing problem, which degrades the performance of EMD-based methods, is avoided or minimized in QB-VMD. Finally, QB-VMD analysis of the wall-parallel plane in the log layer (at y/δ = 0.12) of the studied turbulent boundary layer shows the coexistence of large- or very large-scale motions (LSMs or VLSMs) and inner-scaled structures, which can be fully decomposed in both physical and spectral domains.

  12. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    Full Text Available For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and economy. Currently, China has become the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance, and can provide a scientific basis for China to formulate a reasonable energy production plan and energy-saving and emissions-reduction-related policies to boost sustainable development. For forecasting the energy consumption in China accurately, considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. The prediction accuracy of energy consumption is influenced by various factors. In this article, first considering population, GDP (Gross Domestic Product), industrial structure (the proportion of the second industry added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports and other influencing factors of energy consumption, the main driving factors of energy consumption are screened as the model input according to the sorting of grey relational degrees to realize feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm) model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an

  13. Non-linear multivariate and multiscale monitoring and signal denoising strategy using Kernel Principal Component Analysis combined with Ensemble Empirical Mode Decomposition method

    Science.gov (United States)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2011-10-01

    The article presents a novel non-linear multivariate and multiscale statistical process monitoring and signal denoising method which combines the strengths of the Kernel Principal Component Analysis (KPCA) non-linear multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD) to handle multiscale system dynamics. The proposed method, which enables us to cope with complex, even severely non-linear, systems with a wide dynamic range, was named the EEMD-based multiscale KPCA (EEMD-MSKPCA). The method is quite general in nature and could be used in different areas for various tasks even without any really deep understanding of the nature of the system under consideration. Its efficiency was first demonstrated by an illustrative example, after which the applicability for the task of bearing fault detection, diagnosis and signal denoising was tested on simulated as well as actual vibration and acoustic emission (AE) signals measured on a purpose-built large-size low-speed bearing test stand. The positive results obtained indicate that the proposed EEMD-MSKPCA method provides a promising tool for tackling non-linear multiscale data which present a convolved picture of many events occupying different regions in the time-frequency plane.

  14. Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors

    Science.gov (United States)

    Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea

    2018-03-01

    In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.

  15. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other pervious methods in the literature. - Abstract: Large-scale integration of wind energy into electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for the short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained from aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.
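
    A short sketch of the secondary-decomposition pre-processing described in the highlights and abstract above, assuming the PyEMD and pywt packages: EMD the series, then split the highest-frequency IMF (IMF1) with a wavelet packet decomposition. The wavelet and depth are illustrative, and the ELM/CSO forecasting stage is not reproduced.

```python
import numpy as np
import pywt
from PyEMD import EMD   # assumed API: EMD().emd(signal)

def secondary_decompose(series, wavelet="db4", maxlevel=2):
    """EMD the series, then wavelet-packet split IMF1; return all sub-series to forecast."""
    imfs = EMD().emd(np.asarray(series, dtype=float))
    wp = pywt.WaveletPacket(data=imfs[0], wavelet=wavelet, maxlevel=maxlevel)
    leaves = [node.data for node in wp.get_level(maxlevel, order="natural")]
    # Each leaf of IMF1 plus every remaining IMF would be forecast separately and
    # the individual forecasts aggregated afterwards.
    return leaves + [imf for imf in imfs[1:]]
```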

  16. Identification of relationships between climate indices and long-term precipitation in South Korea using ensemble empirical mode decomposition

    Science.gov (United States)

    Kim, Taereem; Shin, Ju-Young; Kim, Sunghun; Heo, Jun-Haeng

    2018-02-01

    Climate indices characterize climate systems and may identify important indicators for long-term precipitation, which are driven by climate interactions in atmosphere-ocean circulation. In this study, we investigated the climate indices that are effective indicators of long-term precipitation in South Korea, and examined their relationships based on statistical methods. Monthly total precipitation was collected from a total of 60 meteorological stations, and the series were decomposed by ensemble empirical mode decomposition (EEMD) to identify the inherent oscillating patterns or cycles. Cross-correlation analysis and stepwise variable selection were employed to select the significant climate indices at each station. The climate indices that affect the monthly precipitation in South Korea were identified based on the selection frequencies of the selected indices at all stations. The NINO12 indices with four- and ten-month lags and the AMO index with no lag were identified as indicators of monthly precipitation in South Korea. Moreover, they indicate meaningful physical information (e.g. periodic oscillations and long-term trend) inherent in the monthly precipitation. The NINO12 indices with four- and ten-month lags were strong indicators representing periodic oscillations in monthly precipitation. In addition, the long-term trend of the monthly precipitation could be explained by the AMO index. A multiple linear regression model was constructed to investigate the influences of the identified climate indices on the prediction of monthly precipitation. The three identified climate indices successfully explained the monthly precipitation in the winter dry season. Compared to the monthly precipitation in coastal areas, the monthly precipitation in inland areas showed a stronger correlation with the identified climate indices.
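
    A minimal sketch of the lag screening mentioned above: correlate a candidate climate index with a station's monthly precipitation at lags of 0-12 months and keep the lag with the strongest absolute correlation. Variable names are illustrative, and the stepwise variable selection step is not reproduced.

```python
import numpy as np

def best_lag(index_series, precip_series, max_lag=12):
    """Return (lag in months, correlation) where the index leads precipitation by `lag`."""
    x = np.asarray(index_series, dtype=float)
    y = np.asarray(precip_series, dtype=float)
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        xi = x[: len(x) - lag] if lag else x      # index values `lag` months earlier
        yi = y[lag:]
        cc = np.corrcoef(xi, yi)[0, 1]
        if abs(cc) > abs(best[1]):
            best = (lag, cc)
    return best
```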

  17. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering.

    Science.gov (United States)

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-05-31

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF, i.e., selecting a short window length may lead to good preservation of signal amplitude but bad random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short window TFPF is employed for the low-frequency component of IMFs, and long window TFPF is employed for the high-frequency component of IMFs, and the noise component of IMFs is wiped off directly; at last, the final signal is obtained after reconstruction. Rotation and temperature experiments are carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with the traditional wavelet, Kalman filter and fixed window length TFPF methods.

  18. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    Directory of Open Access Journals (Sweden)

    Chong Shen

    2016-05-01

    Full Text Available The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF, i.e., selecting a short window length may lead to good preservation of signal amplitude but bad random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short window TFPF is employed for the low-frequency component of IMFs, and long window TFPF is employed for the high-frequency component of IMFs, and the noise component of IMFs is wiped off directly; at last, the final signal is obtained after reconstruction. Rotation and temperature experiments are carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with the traditional wavelet, Kalman filter and fixed window length TFPF methods.

  19. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    International Nuclear Information System (INIS)

    Kowal, Grzegorz; Lazarian, A.

    2010-01-01

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.

  20. Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients

    Directory of Open Access Journals (Sweden)

    Vlasta Bari

    2014-09-01

    Full Text Available Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
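
    A minimal sketch of the EMD-based low-pass step described above, assuming the PyEMD package: estimate the fastest intrinsic mode function of a beat-to-beat series (e.g. QT intervals) and subtract it before computing sample entropy.

```python
import numpy as np
from PyEMD import EMD   # assumed API: EMD().emd(signal)

def emd_lowpass(series, n_fast=1):
    """Remove the n_fast fastest IMFs from a beat-to-beat variability series."""
    series = np.asarray(series, dtype=float)
    imfs = EMD().emd(series)
    if len(imfs) <= n_fast:                   # decomposition too shallow to filter
        return series.copy()
    return series - np.sum(imfs[:n_fast], axis=0)
```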

  1. Proper Orthogonal Decomposition and Dynamic Mode Decomposition in the Right Ventricle after Repair of Tetralogy of Fallot

    Science.gov (United States)

    Mikhail, Amanda; Kadem, Lyes; di Labbio, Giuseppe

    2017-11-01

    Tetralogy of Fallot accounts for 5% of all cyanotic congenital heart defects, making it the most predominant today. Approximately 1660 cases per year are seen in the United States alone. Once repaired at a very young age, symptoms such as pulmonary valve regurgitation seem to arise two to three decades after the initial operation. Currently, not much is understood about the blood flow in the right ventricle of the heart when regurgitation is present. In this study, the interaction between the diastolic interventricular flow and the regurgitating pulmonary valve are investigated. This experimental work aims to simulate and characterize this detrimental flow in a right heart simulator using time-resolved particle image velocimetry. Seven severities of regurgitation were simulated. Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) revealed intricate coherent flow structures. With regurgitation severity, the modal energies from POD are more distributed among the modes while DMD reveals more unstable modes. This study can contribute to the further investigation of the detrimental effects of right ventricle regurgitation.

  2. Application of Multivariate Empirical Mode Decomposition and Sample Entropy in EEG Signals via Artificial Neural Networks for Interpreting Depth of Anesthesia

    Directory of Open Access Journals (Sweden)

    Jiann-Shing Shieh

    2013-08-01

    Full Text Available EEG (Electroencephalography) signals can express the human awareness activities and consequently they can indicate the depth of anesthesia. On the other hand, the Bispectral-index (BIS) is often used as an indicator to assess the depth of anesthesia. This study is aimed at using an advanced signal processing method to analyze EEG signals and compare them with existing BIS indexes from a commercial product (i.e., the IntelliVue MP60 BIS module). The multivariate empirical mode decomposition (MEMD) algorithm is utilized to filter the EEG signals. A combination of two MEMD components (IMF2 + IMF3) is used to express the raw EEG. Then, the sample entropy algorithm is used to calculate the complexity of the patients' EEG signal. Furthermore, linear regression and artificial neural network (ANN) methods were used to model the sample entropy using the BIS index as the gold standard. ANN can produce a better target value than linear regression. The correlation coefficient is 0.790 ± 0.069 and the MAE is 8.448 ± 1.887. In conclusion, the area under the receiver operating characteristic (ROC) curve (AUC) of the sample entropy value using ANN and MEMD is 0.969 ± 0.028 while the AUC of the sample entropy value without the filter is 0.733 ± 0.123. It means the MEMD method can filter out noise of the brain waves, so that the sample entropy of EEG can be closely related to the depth of anesthesia. Therefore, the resulting index can be adopted as a reference for the physician, in order to reduce the risk of surgery.

  3. Phase space interrogation of the empirical response modes for seismically excited structures

    Science.gov (United States)

    Paul, Bibhas; George, Riya C.; Mishra, Sudib K.

    2017-07-01

    Conventional Phase Space Interrogation (PSI) for structural damage assessment relies on exciting the structure with a low-dimensional chaotic waveform, thereby significantly limiting its applicability to large structures. The PSI technique is presently extended to structures subjected to seismic excitations. The high dimensionality of the phase space for seismic response(s) is overcome by the Empirical Mode Decomposition (EMD), decomposing the responses into a number of intrinsic low dimensional oscillatory modes, referred to as Intrinsic Mode Functions (IMFs). Along with their low dimensionality, a few IMFs retain sufficient information of the system dynamics to reflect the damage-induced changes. The mutually conflicting nature of low-dimensionality and the sufficiency of dynamic information is taken care of by the optimal choice of the IMF(s), which is shown to be the third/fourth IMFs. The optimal IMF(s) are employed for the reconstruction of the phase space attractor following Takens' embedding theorem. The widely referenced Changes in Phase Space Topology (CPST) feature is then employed on these phase portrait(s) to derive the damage sensitive feature, referred to as the CPST of the IMFs (CPST-IMF). The legitimacy of the CPST-IMF is established as a damage sensitive feature by assessing its variation with a number of damage scenarios benchmarked in the IASC-ASCE building. The damage localization capability, remarkable tolerance to noise contamination and the robustness under different seismic excitations of the feature are demonstrated.
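
    A minimal sketch of the reconstruction step described above, assuming the PyEMD package: decompose a structural response, pick one low-order IMF (the third or fourth, per the text) and embed it in delay coordinates following Takens' theorem. The delay and embedding dimension are placeholders; in practice they would be chosen, e.g., by mutual information and false nearest neighbours.

```python
import numpy as np
from PyEMD import EMD   # assumed API: EMD().emd(signal)

def delay_embed(x, dim=3, tau=10):
    """Return the (n_points, dim) delay-coordinate trajectory of a scalar series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau: k * tau + n] for k in range(dim)])

def imf_phase_portrait(response, imf_index=2, dim=3, tau=10):
    """Decompose a structural response and embed the selected IMF (0-based index)."""
    imfs = EMD().emd(np.asarray(response, dtype=float))
    return delay_embed(imfs[imf_index], dim=dim, tau=tau)
```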

  4. A non-destructive surface burn detection method for ferrous metals based on acoustic emission and ensemble empirical mode decomposition: from laser simulation to grinding process

    International Nuclear Information System (INIS)

    Yang, Zhensheng; Wu, Haixi; Yu, Zhonghua; Huang, Youfang

    2014-01-01

    Grinding is usually done in the final finishing of a component. As a result, the surface quality of finished products, e.g., surface roughness, hardness and residual stress, are affected by the grinding procedure. However, the lack of methods for monitoring of grinding makes it difficult to control the quality of the process. This paper focuses on the monitoring approaches for the surface burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) was proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band ranging from 150 to 400 kHz was believed to be related to surface burn formation in the laser irradiation process. The burn-sensitive frequency band was further used to instruct feature extraction during the grinding process based on EEMD. Linear classification results evidenced a distinct margin between samples with and without surface burn. This work provides a practical means for grinding burn detection. (paper)

  5. Analysis of microvascular perfusion with multi-dimensional complete ensemble empirical mode decomposition with adaptive noise algorithm: Processing of laser speckle contrast images recorded in healthy subjects, at rest and during acetylcholine stimulation.

    Science.gov (United States)

    Humeau-Heurtier, Anne; Marche, Pauline; Dubois, Severine; Mahe, Guillaume

    2015-01-01

    Laser speckle contrast imaging (LSCI) is a full-field imaging modality to monitor microvascular blood flow. It is able to give images with high temporal and spatial resolutions. However, when the skin is studied, the interpretation of the bidimensional data may be difficult. This is why an averaging of the perfusion values in regions of interest is often performed and the result is followed in time, reducing the data to monodimensional time series. In order to avoid such a procedure (that leads to a loss of the spatial resolution), we propose to extract patterns from LSCI data and to compare these patterns for two physiological states in healthy subjects: at rest and at the peak of the acetylcholine-induced perfusion response. For this purpose, the recent multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) algorithm is applied to LSCI data. The results show that the intrinsic mode functions and residue given by MCEEMDAN show different patterns for the two physiological states. The images, as bidimensional data, can therefore be processed to reveal microvascular perfusion patterns, hidden in the images themselves. This work is therefore a feasibility study before analyzing data in patients with microvascular dysfunctions.

  6. Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

    Directory of Open Access Journals (Sweden)

    Guang Liu

    2016-12-01

    Full Text Available Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly areas. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and they provide special information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective method with which to obtain information on ground objects. However, most relevant research has been based on two types of remote sensing image. This paper proposes a new fusion scheme, which combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform can enhance spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been applied widely in the analysis of nonlinear signals and to the nonstable signal of SAR. The fusion of SAR imagery with fused panchromatic and multispectral imagery using BEMD is based on the frequency information of the images. It was established that the proposed fusion scheme is an effective remote sensing image interpretation method, and that the value of entropy and the spatial frequency of the fused images were improved in comparison with other techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, information entropy of the fusion image based on BEMD improves about 0.13–0.38. Compared with the other three methods it improves about 0.06–0.12. The average gradient of BEMD is 4%–6% greater than for other methods. BEMD maintains spatial frequency 3.2–4.0 higher than

  7. Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.

    Science.gov (United States)

    Ashley, Laura; Armitage, Gerry

    2010-12-01

    To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.

  8. Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition

    Science.gov (United States)

    Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato

    2018-05-01

    We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While the CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, the uncertainty of frequency appears and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate physical properties of the normal modes.

  9. Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques

    Science.gov (United States)

    2018-04-30

    The program goal is the analysis of sea ice dynamical behavior using Koopman Mode Decomposition techniques (monthly progress report; Defense Technical Information Center).

  10. Instantaneous 3D EEG Signal Analysis Based on Empirical Mode Decomposition and the Hilbert–Huang Transform Applied to Depth of Anaesthesia

    Directory of Open Access Journals (Sweden)

    Mu-Tzu Shih

    2015-02-01

    Full Text Available Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which anaesthesia is administered during surgery. We can monitor the DoA by observing the patient's electroencephalography (EEG) signals during the surgical procedure. Typically, high-frequency EEG signals indicate the patient is conscious, while low-frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient's EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs). The Hilbert spectral analysis method is then used to obtain instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed by using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective compared to the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
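
    A minimal sketch of the EMD + Hilbert step described above, assuming the PyEMD and scipy packages: decompose an EEG epoch into IMFs and obtain each IMF's instantaneous amplitude and frequency from its analytic signal.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed API: EMD().emd(signal)

def hilbert_huang(signal, fs):
    """Return a list of (instantaneous amplitude, instantaneous frequency) per IMF."""
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    results = []
    for imf in imfs:
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # one sample shorter than imf
        results.append((amplitude, inst_freq))
    return results
```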

  11. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.

    2014-01-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must

  12. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for the multiscale decomposition of clear sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model, NN). The choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built on a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE, the error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid Model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid Model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.

  13. Using dynamic mode decomposition for real-time background/foreground separation in video

    Science.gov (United States)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven; Fu, Xing; Pendergrass, Seth

    2017-06-06

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
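
    To make the DMD-based separation idea above concrete, here is a minimal sketch of generic exact DMD, not the implementation referenced in the record. Assumptions of this example: a snapshot matrix with one flattened video frame per column, a rank truncation r, unit frame spacing, and a frequency tolerance for deciding which modes count as background. Modes whose continuous-time frequencies are near zero approximate the static background; the residual is taken as the sparse foreground.

    # Sketch: exact DMD split of a video into background (near-zero frequency modes)
    # and foreground (residual). X has one flattened frame per column (assumed layout).
    import numpy as np

    def dmd_background_foreground(X, r=10, dt=1.0, freq_tol=1e-2):
        X1, X2 = X[:, :-1], X[:, 1:]                     # time-shifted snapshot pairs
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]            # rank-r truncation
        Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
        evals, W = np.linalg.eig(Atilde)
        Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes
        omega = np.log(evals) / dt                       # continuous-time eigenvalues
        b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0] # mode amplitudes at t = 0
        t = np.arange(X.shape[1]) * dt
        bg_idx = np.abs(omega) < freq_tol                # near-zero frequency => background
        time_dynamics = b[bg_idx, None] * np.exp(np.outer(omega[bg_idx], t))
        background = (Phi[:, bg_idx] @ time_dynamics).real
        foreground = X - background                      # sparse residual
        return background, foreground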

  14. The complexity of standing postural control in older adults: a modified detrended fluctuation analysis based upon the empirical mode decomposition algorithm.

    Directory of Open Access Journals (Sweden)

    Junhong Zhou

    Full Text Available Human aging into senescence diminishes the capacity of the postural control system to adapt to the stressors of everyday life. Diminished adaptive capacity may be reflected by a loss of the fractal-like, multiscale complexity within the dynamics of standing postural sway (i.e., center-of-pressure, COP). We therefore studied the relationship between COP complexity and adaptive capacity in 22 older and 22 younger healthy adults. COP magnitude dynamics were assessed from raw data during quiet standing with eyes open and closed, and complexity was quantified with a new technique termed empirical mode decomposition embedded detrended fluctuation analysis (EMD-DFA). Adaptive capacity of the postural control system was assessed with the sharpened Romberg test. As compared to traditional DFA, EMD-DFA more accurately identified trends in COP data with intrinsic scales and produced short- and long-term scaling exponents (i.e., α(Short), α(Long)) with greater reliability. The fractal-like properties of COP fluctuations were time-scale dependent and highly complex (i.e., α(Short) values were close to one) over relatively short time scales. As compared to younger adults, older adults demonstrated lower short-term COP complexity (i.e., greater α(Short) values) in both visual conditions (p>0.001). Closing the eyes decreased short-term COP complexity, yet this decrease was greater in older compared to younger adults (p<0.001). In older adults, those with higher short-term COP complexity exhibited better adaptive capacity as quantified by Romberg test performance (r(2) = 0.38, p<0.001). These results indicate that an age-related loss of COP magnitude-series complexity may reflect a clinically important reduction in postural control system functionality and may serve as a new biomarker.

  15. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    Science.gov (United States)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves prediction accuracy and the hit rates of directional prediction. The proposed model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in terms of prediction accuracy and hit rates of directional prediction.
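
    The decompose-predict-aggregate pattern described in this record can be sketched as follows; this is a generic illustration, not the authors' pipeline. The CEEMD step is stood in for by a user-supplied `decompose` function (a hypothetical placeholder that must return components summing to the original series), GWO tuning is omitted, and scikit-learn's SVR with default hyperparameters is used for each component; the lag count is also an assumption of this example.

    # Sketch: generic "decomposition and ensemble" one-step-ahead forecaster.
    # `decompose` is a hypothetical placeholder for CEEMD (or any EMD variant);
    # it must return a list of component series that sum to the original signal.
    import numpy as np
    from sklearn.svm import SVR

    def make_lagged(series, n_lags=6):
        # Build (lagged features, next value) pairs from a 1-D numpy array.
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        y = series[n_lags:]
        return X, y

    def decompose_ensemble_forecast(series, decompose, n_lags=6):
        preds = []
        for component in decompose(series):          # one model per IMF / residual
            X, y = make_lagged(component, n_lags)
            model = SVR().fit(X[:-1], y[:-1])        # hold out the last sample
            preds.append(model.predict(X[-1:])[0])   # one-step-ahead component forecast
        return float(np.sum(preds))                  # aggregate component forecasts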

  16. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.

  18. Mode decomposition for a synchronous state and its applications

    International Nuclear Information System (INIS)

    Xiong Xiaohua; Wang Junwei; Zhang Yanbin; Zhou Tianshou

    2007-01-01

    Synchronization of coupled dynamical systems, including periodic and chaotic systems, is investigated both analytically and numerically. A novel method, mode decomposition, for treating the stability of a synchronous state is proposed based on the Floquet theory. A rigorous criterion is then derived, which can be applied to arbitrary coupled systems. Two typical numerical examples, coupled Van der Pol systems (the case of coupled periodic oscillators) and coupled Lorenz systems (the case of coupled chaotic systems), are used to demonstrate the theoretical analysis.

  19. Multifractal features of EUA and CER futures markets by using multifractal detrended fluctuation analysis based on empirical model decomposition

    International Nuclear Information System (INIS)

    Cao, Guangxi; Xu, Wei

    2016-01-01

    Based on daily price data of carbon emission rights in the futures markets for Certified Emission Reduction (CER) and European Union Allowances (EUA), we analyze the multiscale characteristics of the markets by using empirical mode decomposition (EMD) and EMD-based multifractal detrended fluctuation analysis (MFDFA). The complexity of the daily returns of the CER and EUA futures markets changes with multiple time scales and shows multilayered features. The two markets also exhibit clear multifractal characteristics and long-range correlation. We employ shuffle and surrogate approaches to analyze the origins of multifractality. The long-range correlations and fat-tail distributions contribute significantly to multifractality. Furthermore, we analyze the influence of high returns on multifractality by using a threshold method. The multifractality of the two futures markets is related to the presence of high values of returns in the price series.

  20. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.

  1. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry I.

    2017-12-08

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  4. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    Science.gov (United States)

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  5. Choice of Foreign Market Entry Mode - Cognitions from Empirical and Theoretical Studies

    OpenAIRE

    Zhao, Xuemin; Decker, Reinhold

    2004-01-01

    This paper critically analyzes five basic theories of the market entry mode decision with respect to their strengths and weaknesses and the results of corresponding empirical studies. Starting from contradictions both in the theories and in the empirical studies dealing with the entry mode choice problem, we motivate a significant need for further research in this important area of international marketing. Furthermore, we provide implications for managers in practice and outline emerging trends in market entr...

  6. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    Science.gov (United States)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to disassemble the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.

  7. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as the test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models owing to its higher forecast accuracy.

  8. Displacement prediction of Baijiabao landslide based on empirical mode decomposition and long short-term memory neural network in Three Gorges area, China

    Science.gov (United States)

    Xu, Shiluo; Niu, Ruiqing

    2018-02-01

    Every year, landslides pose huge threats to thousands of people in China, especially those in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most of the landslide displacement prediction models that have been proposed are static models. However, landslides are dynamic systems. In this paper, the total accumulative displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the total periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information. These factors include the cumulative precipitation during the previous month, the cumulative precipitation during a two-month period, the reservoir level during the current month, the change in the reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. When using one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) value of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield values of 10.686 mm and 8.237 mm, respectively. Meanwhile, the Elman network (Elman) yields an RMSE value of 6.579 mm. In addition, when using multi-step-ahead prediction, LSTM obtains an RMSE value of 8.648 mm, while SVR, BP and the Elman network obtain RMSE values of 13.418 mm, 13.014 mm, and 13.370 mm. The predicted results indicate that, to some extent, the dynamic model (LSTM) achieves results that are more accurate than those of the static models (i.e., SVR and BP). LSTM even

  9. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    Science.gov (United States)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Estimated mass changes from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the filtered GRACE spherical harmonic (SH) results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduce very clear high frequency components in the spatial domain. By removing these high frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward modeled) SH results.

  10. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. First, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Second, fuzzy c-means is used to cluster the decomposed components, and a deep belief network (DBN) is then used to predict them. Finally, the reconstructed ...

  11. Linear dynamical modes as new variables for data-driven ENSO forecast

    Science.gov (United States)

    Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen

    2018-05-01

    A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.

  12. Phase synchronization in a two-mode solid state laser: Periodic modulations with the second relaxation oscillation frequency of the laser output

    International Nuclear Information System (INIS)

    Hsu, Tzu-Fang; Jao, Kuan-Hsuan; Hung, Yao-Chen

    2014-01-01

    Phase synchronization (PS) in a periodically pump-modulated two-mode solid state laser is investigated. Although PS in the laser system has been demonstrated in response to a periodic modulation with the main relaxation oscillation (RO) frequency of the free-running laser, little is known about the case of modulation with minor RO frequencies. In this Letter, the empirical mode decomposition (EMD) method is utilized to decompose the laser time series into a set of orthogonal modes and to examine the intrinsic PS near the frequency of the second RO. The degree of PS is quantified by means of a histogram of phase differences and the analysis of Shannon entropy. - Highlights: • We study the intrinsic phase synchronization in a periodically pump-modulated two-mode solid state laser. • The empirical mode decomposition method is utilized to define the intrinsic phase synchronization. • The degree of phase synchronization is quantified by a proposed synchronization coefficient

  13. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    Science.gov (United States)

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
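
    As a hedged illustration of the first processing step implied above (the amplitude envelope on which EMD is subsequently run), the sketch below derives a smoothed amplitude envelope from a speech waveform using the Hilbert envelope followed by low-pass filtering; the cutoff frequency, filter order, and synthetic test signal are assumptions of this example, and the EMD stage itself is not shown.

    # Sketch: smoothed amplitude envelope of a speech signal, as a starting point
    # for envelope-based rhythm analysis. The 10 Hz cutoff and filter order are
    # assumptions of this example, not values from the study.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def amplitude_envelope(speech, fs, cutoff_hz=10.0, order=4):
        raw_env = np.abs(hilbert(speech))              # Hilbert (analytic) envelope
        b, a = butter(order, cutoff_hz / (fs / 2.0))   # low-pass Butterworth filter
        return filtfilt(b, a, raw_env)                 # zero-phase smoothing

    # Example with a synthetic "syllable-like" amplitude-modulated tone:
    fs = 16000
    t = np.arange(0, 2.0, 1.0 / fs)
    speech = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
    env = amplitude_envelope(speech, fs)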

  14. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Abdeldjalil Aïssa-El-Bey

    2007-03-01

    Full Text Available This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source) using vector clustering. For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.

  16. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    Science.gov (United States)

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal present a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the means of the extracted features were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV

  17. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular, the slowly-decaying eigenmodes corresponding to largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.

  18. Causality analysis of leading singular value decomposition modes identifies rotor as the dominant driving normal mode in fibrillation

    Science.gov (United States)

    Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer

    2018-01-01

    Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. Wiener-Granger causality on modes of the rotor domains confirms their

  19. Acoustics flow analysis in circular duct using sound intensity and dynamic mode decomposition

    International Nuclear Information System (INIS)

    Weyna, S

    2014-01-01

    Sound intensity generation in a hard-walled duct with acoustic flow (no mean flow) is treated experimentally and shown graphically. In this paper, numerous visualization methods illustrating the vortex flow (2D, 3D) graphically explain the diffraction and scattering phenomena occurring inside the duct and around the open-end area. Sound intensity investigation in an annular duct gives a physical picture of the sound waves in any duct mode. In the paper, modal energy analysis is discussed with particular reference to acoustic orthogonal decomposition (AOD). Images of the sound intensity fields below and above the 'cut-off' frequency region are used to compare the acoustic modes which might resonate in the duct. The experimental results also show the effects of axial and swirling flow. However, the acoustic field is extremely complicated, because pressures in non-propagating (cut-off) modes cooperate with the particle velocities in propagating modes, and vice versa. Measurements in a cylindrical duct also demonstrate the cut-off phenomenon and the effect of reflection from the open end. The aim of the experimental study was to obtain information on low Mach number flows in ducts in order to improve physical understanding and validate theoretical CFD and CAA models that may still be improved.

  20. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
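
    For orientation, the two operators described above can be written out as follows; the notation (core tensor G, factor matrices A, B, C with columns a_r, b_r, c_r, third-order case) is standard tensor-decomposition notation assumed for this illustration, not copied from the report.

    % Tucker operator: n-mode products of a core tensor G with factor matrices
    % A, B, C (third-order case shown; notation assumed for illustration).
    \[
      \mathcal{X} \;=\; \llbracket \mathcal{G}\,; A, B, C \rrbracket
      \;=\; \mathcal{G} \times_1 A \times_2 B \times_3 C .
    \]
    % Kruskal operator: sum of outer products of corresponding columns of the
    % factor matrices, i.e., the PARAFAC/CANDECOMP form with R components.
    \[
      \mathcal{X} \;=\; \llbracket A, B, C \rrbracket
      \;=\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r .
    \]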

  1. OBSERVATIONS OF SAUSAGE MODES IN MAGNETIC PORES

    International Nuclear Information System (INIS)

    Morton, R. J.; Erdelyi, R.; Jess, D. B.; Mathioudakis, M.

    2011-01-01

    We present here evidence for the observation of magnetohydrodynamic (MHD) sausage modes in magnetic pores in the solar photosphere. Further evidence for the omnipresent nature of acoustic global modes is also found. The empirical mode decomposition method of wave analysis is used to identify the oscillations detected through a 4170 Å 'blue continuum' filter observed with the Rapid Oscillations in the Solar Atmosphere (ROSA) instrument. Out-of-phase periodic behavior in pore size and intensity is used as an indicator of the presence of magnetoacoustic sausage oscillations. Multiple signatures of the magnetoacoustic sausage mode are found in a number of pores. The periods range from as short as 30 s up to 450 s. A number of the magnetoacoustic sausage mode oscillations found have periods of 3 and 5 minutes, similar to the acoustic global modes of the solar interior. It is proposed that these global oscillations could be the driver of the sausage-type magnetoacoustic MHD wave modes in pores.

  2. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis nowadays, the exploitation and utilization of new clean energy are gaining more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on the power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid

  3. Dynamic mode decomposition of turbulent cavity flows for self-sustained oscillations

    International Nuclear Information System (INIS)

    Seena, Abu; Sung, Hyung Jin

    2011-01-01

    Highlights: ► DMD modes were extracted from two cavity flow data sets at Re_D = 12,000 and 3000. ► At Re_D = 3000, the frequencies of boundary layer and shear layer structures coincide. ► Boundary layer structures exceed shear layer structures in size. ► At Re_D = 12,000, the structures showed coherence leading to self-sustained oscillations. ► Hydrodynamic resonance occurs if coherence exists in both wavenumber and frequency. - Abstract: Self-sustained oscillations in a cavity arise due to the unsteady separation of boundary layers at the leading edge. The dynamic mode decomposition method was employed to analyze the self-sustained oscillations. Two cavity flow data sets, with or without self-sustained oscillations and possessing thin or thick incoming boundary layers (Re_D = 12,000 and 3000), were analyzed. The ratios between the cavity depth and the momentum thickness (D/θ) were 40 and 4.5, respectively, and the cavity aspect ratio was L/D = 2. The dynamic modes extracted from the thick boundary layer case indicated that the upcoming boundary layer structures and the shear layer structures along the cavity lip line coexisted with coincident frequency space but different wavenumber space, whereas the structures with a thin boundary layer showed complete coherence among the modes to produce self-sustained oscillations. This result suggests that the hydrodynamic resonances that gave rise to the self-sustained oscillations occurred if the upcoming boundary layer structures and the shear layer structures coincided, not only in frequencies, but also in wavenumbers. The influences of the cavity dimensions and incoming momentum thickness on the self-sustained oscillations were examined.

  4. Ensemble empirical model decomposition and neuro-fuzzy conjunction model for middle and long-term runoff forecast

    Science.gov (United States)

    Tan, Q.

    2017-12-01

    Forecasting runoff over longer periods, such as months and years, is one of the important tasks for hydrologists and water resource managers seeking to maximize the potential of limited water. However, due to the nonlinear and nonstationary character of natural runoff, it is hard to forecast middle and long-term runoff with satisfactory accuracy. It has been proven that forecast performance can be improved by using signal decomposition techniques to produce cleaner signals as model inputs. In this study, a new conjunction model (EEMD-neuro-fuzzy) with adaptive ability is proposed. Ensemble empirical mode decomposition (EEMD) is used to decompose the runoff time series into several components, which have different frequencies and are cleaner than the original time series. A neuro-fuzzy model is then developed for each component. The final forecast results can be obtained by summing the outputs of all neuro-fuzzy models. Unlike a conventional forecast model, the decomposition and forecast models in this study are adjusted adaptively whenever new runoff information is added. The proposed models are applied to forecast the monthly runoff at Yichang station, located on the Yangtze River of China. The results show that the adaptive forecast model we propose outperforms the conventional forecast model; the Nash-Sutcliffe efficiency coefficient can reach 0.9392. Due to its ability to process nonstationary data, the forecast accuracy, especially in the flood season, is improved significantly.
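
    The core EEMD idea referenced above (decompose many noise-perturbed copies of the signal and average the resulting IMFs) can be sketched as below; `emd` is a hypothetical placeholder for any EMD routine returning a fixed number of IMFs, and the ensemble size and noise amplitude are assumptions of this example, not values from the study.

    # Sketch: ensemble EMD by averaging IMFs over noise-perturbed realizations.
    # `emd(signal, max_imfs)` is a hypothetical stand-in for an EMD routine that
    # returns an array of shape (max_imfs, len(signal)).
    import numpy as np

    def eemd(signal, emd, n_ensemble=100, noise_std=0.2, max_imfs=8, seed=0):
        rng = np.random.default_rng(seed)
        scale = noise_std * np.std(signal)           # noise level relative to signal spread
        acc = np.zeros((max_imfs, len(signal)))
        for _ in range(n_ensemble):
            noisy = signal + rng.normal(0.0, scale, size=len(signal))
            acc += emd(noisy, max_imfs)              # decompose each noisy copy
        return acc / n_ensemble                      # averaged IMFs cancel the added noise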

  5. Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

    Directory of Open Access Journals (Sweden)

    Christofer Toumazou

    2013-07-01

    Full Text Available A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), which is derived from Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), the Wavelet Transform (WT), the Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated high noise reduction compared with the performance of these other filters.

  6. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow learning convergence and liability to fall into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Squares Support Vector Machine (LS-SVM) is presented. First of all, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Function (IMF) components, from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector fed to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
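
    A minimal sketch of this feature pipeline follows, with two stand-ins that are assumptions of the example: the IMF matrix for each signal is assumed to be given (e.g., by an EMD routine applied to one pressure record), and scikit-learn's standard SVC replaces the least-squares SVM formulation used in the record.

    # Sketch: singular values of an IMF feature matrix as inputs to an SVM classifier.
    # Each training signal is represented by the singular values of its IMF matrix
    # (rows = IMFs, columns = samples); SVC stands in for the LS-SVM of the record.
    import numpy as np
    from sklearn.svm import SVC

    def singular_value_features(imf_matrix, n_keep=5):
        s = np.linalg.svd(imf_matrix, compute_uv=False)   # singular values, descending
        return s[:n_keep]

    def train_flow_regime_classifier(imf_matrices, labels, n_keep=5):
        X = np.array([singular_value_features(m, n_keep) for m in imf_matrices])
        return SVC(kernel="rbf").fit(X, labels)           # returns a fitted classifier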

  7. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  8. Intramolecular energy transfer and mode-specific effects in unimolecular reactions of 1,2-difluoroethane

    Science.gov (United States)

    Raff, Lionel M.

    1989-06-01

    The unimolecular decomposition reactions of 1,2-difluoroethane upon mode-specific excitation to a total internal energy of 7.5 eV are investigated using classical trajectory methods and a previously formulated empirical potential-energy surface. The decomposition channels for 1,2-difluoroethane are, in order of importance, four-center HF elimination, C-C bond rupture, and hydrogen-atom dissociation. This order is found to be independent of the particular vibrational mode excited. Neither fluorine-atom nor F2 elimination reactions are ever observed even though these dissociation channels are energetically open. For four-center HF elimination, the average fraction of the total energy partitioned into internal HF motion varies between 0.115-0.181 depending upon the particular vibrational mode initially excited. The internal energy of the fluoroethylene product lies in the range 0.716-0.776. Comparison of the present results with those previously obtained for a random distribution of the initial 1,2-difluoroethane internal energy [J. Phys. Chem. 92, 5111 (1988)], shows that numerous mode-specific effects are present in these reactions in spite of the fact that intramolecular energy transfer rates for this system are 5.88-25.5 times faster than any of the unimolecular reaction rates. Mode-specific excitation always leads to a total decomposition rate significantly larger than that obtained for a random distribution of the internal energy. Excitation of different 1,2-difluoroethane vibrational modes is found to produce as much as a 51% change in the total decomposition rate. Mode-specific effects are also seen in the product energy partitioning. The rate coefficients for decomposition into the various channels are very sensitive to the particular mode excited. A comparison of the calculated mode-specific effects with the previously determined mode-to-mode energy transfer rate coefficients [J. Chem. Phys. 89, 5680 (1988)] shows that, to some extent, the presence of mode

  9. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on the mathematical modelling of the molecular and supramolecular structures of explosives and of the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of an empirical approach.

  10. Effectiveness of Modal Decomposition for Tapping Atomic Force Microscopy Microcantilevers in Liquid Environment.

    Science.gov (United States)

    Kim, Il Kwang; Lee, Soo Il

    2016-05-01

    The modal decomposition of tapping mode atomic force microscopy microcantilevers in liquid environments was studied experimentally. Microcantilevers with different lengths and stiffnesses and two sample surfaces with different elastic moduli were used in the experiment. The response modes of the microcantilevers were extracted as proper orthogonal modes through proper orthogonal decomposition. Smooth orthogonal decomposition was used to estimate the resonance frequency directly. The effects of the tapping setpoint and the elastic modulus of the sample under test were examined in terms of their multi-mode responses with proper orthogonal modes, proper orthogonal values, smooth orthogonal modes and smooth orthogonal values. Regardless of the stiffness of the microcantilever under test, the first mode was dominant in tapping mode atomic force microscopy under normal operating conditions. However, at lower tapping setpoints, the flexible microcantilever showed modal distortion and noise near the tip when tapping on a hard sample. The stiff microcantilever had a higher mode effect on a soft sample at lower tapping setpoints. Modal decomposition for tapping mode atomic force microscopy can thus be used to estimate the characteristics of samples in liquid environments.

  11. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate the improvement brought to the Extreme Learning Machines by these mainstream signal decomposing algorithms in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective for accurate wind speed prediction.

  12. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model

    Science.gov (United States)

    Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.

    2017-01-01

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
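
    A minimal sketch of the kind of estimator described above is given below: a polychromatic Beer-Lambert forward model for the expected counts in two energy bins, and a Poisson maximum-likelihood fit of the two basis-material line integrals. The spectrum shapes and attenuation curves are invented placeholders; a real implementation would use calibrated values.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder energy grid, two bin sensitivities s_b(E), and basis attenuation
# curves mu1(E), mu2(E) for a two-material basis (all values are assumptions).
E = np.linspace(20.0, 120.0, 101)                       # keV
bins = [np.exp(-0.5 * ((E - 50.0) / 15.0) ** 2),        # low-energy bin
        np.exp(-0.5 * ((E - 90.0) / 15.0) ** 2)]        # high-energy bin
mu1 = 0.02 * (70.0 / E) ** 3 + 0.15                     # cm^-1, material 1
mu2 = 0.01 * (70.0 / E) ** 3 + 0.25                     # cm^-1, material 2

def expected_counts(A, n0=1e5):
    """Polychromatic Beer-Lambert model: expected counts per energy bin for
    basis-material line integrals A = (A1, A2) in cm."""
    att = np.exp(-mu1 * A[0] - mu2 * A[1])
    return np.array([n0 * np.trapz(s * att, E) / np.trapz(s, E) for s in bins])

def neg_log_likelihood(A, measured):
    """Poisson negative log-likelihood summed over the energy bins."""
    lam = expected_counts(A)
    return np.sum(lam - measured * np.log(lam))

# Maximum-likelihood decomposition of simulated measurements.
rng = np.random.default_rng(0)
measured = rng.poisson(expected_counts((2.0, 1.0)))
fit = minimize(neg_log_likelihood, x0=(1.0, 1.0), args=(measured,),
               method="Nelder-Mead")
print(fit.x)  # estimated (A1, A2)
```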

  13. Fast modal decomposition for optical fibers using digital holography.

    Science.gov (United States)

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
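
    The projection step can be illustrated as follows: given the complex output field retrieved by digital holography, the coefficient of each basis mode follows from an overlap integral with that mode, exploiting orthonormality. The mode profiles below are illustrative Gaussian stand-ins rather than true fiber eigenmodes.

```python
import numpy as np

# Transverse grid (placeholder resolution and extent).
x = np.linspace(-30e-6, 30e-6, 256)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

def normalize(mode):
    return mode / np.sqrt(np.sum(np.abs(mode) ** 2) * dA)

# Two illustrative orthonormal basis modes (stand-ins for the fiber eigenmodes).
w = 10e-6
lp01 = normalize(np.exp(-(X**2 + Y**2) / w**2))
lp11 = normalize(X * np.exp(-(X**2 + Y**2) / w**2))

# A synthetic "measured" complex field, as would be retrieved by digital holography.
field = 0.8 * lp01 + (0.6 * np.exp(1j * 0.7)) * lp11

# Modal decomposition: the coefficient of each mode is the overlap integral
# <mode, field>, exact when the basis modes are orthonormal.
for name, mode in [("LP01", lp01), ("LP11", lp11)]:
    c = np.sum(np.conj(mode) * field) * dA
    print(name, "power:", abs(c) ** 2, "phase:", np.angle(c))
```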

  14. Identification of dominant flow structures in rapidly rotating convection of liquid metals using Dynamic Mode Decomposition

    Science.gov (United States)

    Horn, S.; Schmid, P. J.; Aurnou, J. M.

    2016-12-01

    The Earth's metal core acts as a dynamo whose efficiency in generating and maintaining the magnetic field is essentially determined by the rotation rate and the convective motions occurring in its outer liquid part. For the description of the primary physics in the outer core the idealized system of rotating Rayleigh-Bénard convection is often invoked, with the majority of studies considering only working fluids with Prandtl numbers of Pr ≳ 1. However, liquid metals are characterized by distinctly smaller Prandtl numbers which in turn result in an inherently different type of convection. Here, we will present results from direct numerical simulations of rapidly rotating convection in a fluid with Pr ≈ 0.025 in cylindrical containers and Ekman numbers as low as 5 × 10⁻⁶. In this system, the Coriolis force is the source of two types of inertial modes, the so-called wall modes, that also exist at moderate Prandtl numbers, and cylinder-filling oscillatory modes, that are a unique feature of small Prandtl number convection. The obtained flow fields were analyzed using the Dynamic Mode Decomposition (DMD). This technique allows the extraction and identification of the structures that govern the dynamics of the system as well as their corresponding frequencies. We have investigated both the regime where the flow is purely oscillatory and the regime where wall modes and oscillatory modes co-exist. In the purely oscillatory regime, high and low frequency oscillatory modes characterize the flow. When both types of modes are present, the DMD reveals that the wall-attached modes dominate the flow dynamics. They precess with a relatively low frequency in the retrograde direction. Nonetheless, high-frequency oscillations also make a significant contribution in this case.
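
    For reference, the sketch below implements standard (exact) DMD on a synthetic snapshot sequence; the rank truncation, the data and the frequency read-out are illustrative only and are not taken from the simulations described above.

```python
import numpy as np

def dmd(snapshots, rank):
    """Standard dynamic mode decomposition of a snapshot sequence.
    Returns DMD modes, discrete-time eigenvalues, and mode amplitudes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]         # snapshot pairs x_k -> x_{k+1}
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # truncated POD basis of X
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vt[:rank].conj().T
    Atilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)  # projected linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vr @ np.linalg.inv(Sr) @ W             # exact DMD modes
    amps = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return modes, eigvals, amps

# Example: a decaying travelling wave; mode frequencies follow from the
# eigenvalues as angle(eigval) / (2*pi*dt).
dt, t, xgrid = 0.1, np.arange(0, 20, 0.1), np.linspace(0, 1, 64)
data = np.exp(-0.05 * t) * np.sin(2 * np.pi * (xgrid[:, None] - 0.3 * t))
modes, eigvals, amps = dmd(data, rank=4)
print(np.angle(eigvals) / (2 * np.pi * dt))
```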

  15. Empirical Study of Decomposition of CO2 Emission Factors in China

    Directory of Open Access Journals (Sweden)

    Yadong Ning

    2013-01-01

    Full Text Available The increase in China’s CO2 emissions has attracted worldwide attention. It is of great importance to analyze the factors behind China’s CO2 emissions in order to restrain their rapid growth. The CO2 emissions of the industrial and residential consumption sectors in China during 1980–2010 were calculated in this paper. An expanded decomposition model of CO2 emissions was set up by adopting a factor-separating method based on the basic principle of the Kaya identities. The results showed that CO2 emissions of the industrial and residential consumption sectors increase year after year, and that the scale effect of GDP is the most important factor affecting CO2 emissions of the industrial sector. Decreasing the share of secondary industry and the energy intensity is more effective than decreasing the shares of primary and tertiary industry. The emissions reduction effect of the structure factor is better than that of the efficiency factor. For the residential consumption sector, CO2 emissions increase rapidly year after year, and the economy factor (the increase in wealth or income) is the most important factor. To slow down the growth of CO2 emissions, changing the economic growth mode is an important step, and the structure factor will become a crucial factor.
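
    As an illustration of a Kaya-identity factor decomposition (one common additive variant, not necessarily the exact factor-separating scheme used in the paper), the sketch below splits a change in CO2 emissions into population, affluence, energy-intensity and carbon-intensity contributions using logarithmic-mean weighting; all figures are made up.

```python
import numpy as np

def lmdi_kaya(pop, gdp, energy, co2):
    """Additive LMDI decomposition of CO2 emissions via the Kaya identity
    C = P * (G/P) * (E/G) * (C/E), between a base year (index 0) and a
    target year (index 1). Returns the contribution of each factor; the
    contributions sum exactly to the total emissions change."""
    def L(a, b):  # logarithmic mean
        return (a - b) / (np.log(a) - np.log(b)) if a != b else a

    factors = {
        "population": pop,
        "affluence (GDP per capita)": [gdp[i] / pop[i] for i in (0, 1)],
        "energy intensity (E/GDP)": [energy[i] / gdp[i] for i in (0, 1)],
        "carbon intensity (CO2/E)": [co2[i] / energy[i] for i in (0, 1)],
    }
    weight = L(co2[1], co2[0])
    return {k: weight * np.log(v[1] / v[0]) for k, v in factors.items()}

# Illustrative (made-up) national totals for two years.
effects = lmdi_kaya(pop=[13.0, 13.4], gdp=[40.0, 60.0],
                    energy=[30.0, 36.0], co2=[70.0, 82.0])
print(effects, "sum =", sum(effects.values()), "actual change =", 82.0 - 70.0)
```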

  16. Self-decomposition components generated from ³⁵S-labeled amino acids

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takahisa; Saito, Kazumi; Kurihara, Norio (Kyoto Univ. (Japan). Radioisotope Research Center)

    1994-06-01

    We examined the fragment molecules in the gaseous components generated from ³⁵S-amino acids with high specific radioactivity. The self-decomposition mode of a molecule labeled with a β-emitter was similar to the fragmentation mode of organic compounds impacted by accelerated electrons, as in organic mass spectrometry. Degradation products of unlabeled amino acids irradiated by ⁶⁰Co γ-rays indicated that the degradation mode induced by external γ-ray irradiation differs from the self-decomposition mode of labeled compounds. (Author).

  17. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    Science.gov (United States)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of fiber Bragg grating (FBG) sensing signals and thus affect the quality of sensing detection. The recovery of a signal from observed noisy data is therefore necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove the noise from the signal. Empirical mode decomposition (EMD) is first used to decompose a signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and then the mutual information (MI) of partial modes is calculated. The MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point of FBG spectral signals more accurately than traditional algorithms. Compared with similar algorithms, the signal-to-noise ratio can be improved by more than 10 dB after processing with the proposed algorithm and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.
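
    A hedged sketch of the general idea is shown below: the IMFs come from an assumed `emd` helper, the mutual information of adjacent IMFs is estimated with a simple histogram estimator, the critical index is taken at the minimum-MI boundary (one possible heuristic, not necessarily the paper's criterion), and the signal is rebuilt from the remaining modes.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information estimate between two signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def denoise_by_mode_selection(signal, emd):
    """Decompose with an assumed `emd(signal) -> list of IMFs` helper,
    estimate the critical index from the mutual information of adjacent IMFs,
    and rebuild the signal from the modes after the critical point."""
    imfs = emd(signal)
    mi = [mutual_information(imfs[i], imfs[i + 1]) for i in range(len(imfs) - 1)]
    critical = int(np.argmin(mi)) + 1          # lowest-MI boundary (one heuristic)
    return np.sum(imfs[critical:], axis=0)     # drop noise-dominated early IMFs
```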

  18. Prediction of trypsin/molecular fragment binding affinities by free energy decomposition and empirical scores

    Science.gov (United States)

    Benson, Mark L.; Faver, John C.; Ucisik, Melek N.; Dashti, Danial S.; Zheng, Zheng; Merz, Kenneth M.

    2012-05-01

    Two families of binding affinity estimation methodologies are described which were utilized in the SAMPL3 trypsin/fragment binding affinity challenge. The first is a free energy decomposition scheme based on a thermodynamic cycle, which included separate contributions from enthalpy and entropy of binding as well as a solvent contribution. Enthalpic contributions were estimated with PM6-DH2 semiempirical quantum mechanical interaction energies, which were modified with a statistical error correction procedure. Entropic contributions were estimated with the rigid-rotor harmonic approximation, and solvent contributions to the free energy were estimated with several different methods. The second general methodology is the empirical score LISA, which contains several physics-based terms trained with the large PDBBind database of protein/ligand complexes. Here we also introduce LISA+, an updated version of LISA which, prior to scoring, classifies systems into one of four classes based on a ligand's hydrophobicity and molecular weight. Each version of the two methodologies (a total of 11 methods) was trained against a compiled set of known trypsin binders available in the Protein Data Bank to yield scaling parameters for linear regression models. Both raw and scaled scores were submitted to SAMPL3. Variants of LISA showed relatively low absolute errors but also low correlation with experiment, while the free energy decomposition methods had modest success when scaling factors were included. Nonetheless, re-scaled LISA yielded the best predictions in the challenge in terms of RMS error, and six of these models placed in the top ten best predictions by RMS error. This work highlights some of the difficulties of predicting binding affinities of small molecular fragments to protein receptors as well as the benefit of using training data.

  19. Solving theoretical and empirical conundrums in international strategy research by matching foreign entry mode choices and performance

    NARCIS (Netherlands)

    Martin, Xavier

    2013-01-01

    Several theoretical and empirical developments in the literature on foreign entry mode and performance, and on (international) strategy more generally, were influenced or prefigured by Brouthers’ (2002) JIBS Decade Award winning paper. Regarding theory, Brouthers is an archetype of the integration

  20. An Enhanced Empirical Wavelet Transform for Features Extraction from Wind Turbine Condition Monitoring Signals

    Directory of Open Access Journals (Sweden)

    Pu Shi

    2017-07-01

    Full Text Available Feature extraction from nonlinear and non-stationary (NNS wind turbine (WT condition monitoring (CM signals is challenging. Previously, much effort has been spent to develop advanced signal processing techniques for dealing with CM signals of this kind. The Empirical Wavelet Transform (EWT is one of the achievements attributed to these efforts. The EWT takes advantage of Empirical Mode Decomposition (EMD in dealing with NNS signals but is superior to the EMD in mode decomposition and robustness against noise. However, the conventional EWT meets difficulty in properly segmenting the frequency spectrum of the signal, especially when lacking pre-knowledge of the signal. The inappropriate segmentation of the signal spectrum will inevitably lower the accuracy of the EWT result and thus raise the difficulty of WT CM. To address this issue, an enhanced EWT is proposed in this paper by developing a feasible and efficient spectrum segmentation method. The effectiveness of the proposed method has been verified by using the bearing and gearbox CM data that are open to the public for the purpose of research. The experiment has shown that, after adopting the proposed method, it becomes much easier and more reliable to segment the frequency spectrum of the signal. Moreover, benefitting from the correct segmentation of the signal spectrum, the fault-related features of the CM signals are presented more explicitly in the time-frequency map of the enhanced EWT, despite the considerable noise contained in the signal and the shortage of pre-knowledge about the machine being investigated.
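
    For context, the sketch below shows a conventional EWT-style spectrum segmentation (not the enhanced segmentation proposed in the paper): dominant peaks of the Fourier magnitude spectrum define segment centres, boundaries are placed at the spectral minima between adjacent peaks, and each segment is transformed back to the time domain.

```python
import numpy as np
from scipy.signal import find_peaks

def ewt_like_segments(signal, fs, n_bands=4):
    """Split a signal into components by segmenting its Fourier spectrum
    (a conventional EWT-style segmentation, shown for illustration only)."""
    spectrum = np.fft.rfft(signal)
    mag = np.abs(spectrum)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    # Keep the n_bands largest local maxima as segment "centres".
    peaks, _ = find_peaks(mag)
    peaks = np.sort(peaks[np.argsort(mag[peaks])[::-1][:n_bands]])

    # Boundaries at the lowest spectral point between adjacent peaks.
    bounds = [0]
    for a, b in zip(peaks[:-1], peaks[1:]):
        bounds.append(a + int(np.argmin(mag[a:b])))
    bounds.append(len(mag))

    # Inverse-transform each spectral segment to get one component per band.
    comps = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = np.zeros_like(spectrum)
        seg[lo:hi] = spectrum[lo:hi]
        comps.append(np.fft.irfft(seg, n=len(signal)))
    return comps, freqs[bounds[1:-1]]   # components and boundary frequencies
```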

  1. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. Therefore, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows: Firstly, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Secondly, independent components are separated by the ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.

  2. Modeling multipulsing transition in ring cavity lasers with proper orthogonal decomposition

    International Nuclear Information System (INIS)

    Ding, Edwin; Shlizerman, Eli; Kutz, J. Nathan

    2010-01-01

    A low-dimensional model is constructed via the proper orthogonal decomposition (POD) to characterize the multipulsing phenomenon in a ring cavity laser mode locked by a saturable absorber. The onset of the multipulsing transition is characterized by an oscillatory state (created by a Hopf bifurcation) that is then itself destabilized to a double-pulse configuration (by a fold bifurcation). A four-mode POD analysis, which uses the principal components, or singular value decomposition modes, of the mode-locked laser, provides a simple analytic framework for a complete characterization of the entire transition process and its associated bifurcations. These findings are in good agreement with the full governing equation.

  3. Dual decomposition for parsing with non-projective head automata

    OpenAIRE

    Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander

    2010-01-01

    This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...

  4. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    Science.gov (United States)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety and machining quality. Hence, detecting the crack's severity accurately is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach for identifying faults by observing the non-linear behaviour of the vibration signals at various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode functional components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features of the original and reconstructed signals are computed individually in the feature extraction process. A few statistical parameters are selected in the feature selection process and classified using the SVM classifier. The obtained results show the best parameters and the appropriate kernel of the SVM classifier for detecting faults in bearings. Hence, we conclude that the VMD-plus-SVM process gives better results than applying the SVM directly, owing to the denoising and filtering of the raw vibration signals.
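
    A minimal sketch of the feature-extraction and classification stages is given below using scikit-learn; the statistical features and SVM settings are common choices assumed for illustration, and the VMD reconstruction of the segments is taken as already done.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def statistical_features(segment):
    """A few common statistical features of one vibration segment."""
    rms = np.sqrt(np.mean(segment ** 2))
    return np.array([
        np.mean(segment), np.std(segment), rms,
        kurtosis(segment), skew(segment),
        np.max(np.abs(segment)) / rms,          # crest factor
    ])

def classify(segments, labels, kernel="rbf"):
    """Train and cross-validate an SVM fault classifier on feature vectors
    (segments: list of 1-D arrays; labels: fault class per segment)."""
    X = np.vstack([statistical_features(s) for s in segments])
    clf = SVC(kernel=kernel, C=10.0, gamma="scale")
    return cross_val_score(clf, X, labels, cv=5).mean()
```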

  5. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing any unexpected accident and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called empirical wavelet transform attracts much attention from readers and engineers and its applications to bearing fault diagnosis have been reported. The main problem of empirical wavelet transform is that Fourier segments required in empirical wavelet transform are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which connotes that Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noises and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish Fourier segments required in empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover Fourier segments required in empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, some comparisons with three popular signal processing methods including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation are conducted to highlight the superiority of the proposed method.

  6. Elastic Wave-equation Reflection Traveltime Inversion Using Dynamic Warping and Wave Mode Decomposition

    KAUST Repository

    Wang, T.

    2017-05-26

    Elastic full waveform inversion (EFWI) provides high-resolution parameter estimation of the subsurface but requires a good initial guess of the true model. Traveltime inversion minimizes only traveltime misfits, which are more sensitive and linearly related to the low-wavenumber model perturbation. Therefore, building initial P- and S-wave velocity models for EFWI by using elastic wave-equation reflection traveltime inversion (WERTI) would be effective and robust, especially for the deeper part. In order to distinguish the reflection traveltimes of P- and S-waves in elastic media, we decompose the surface multicomponent data into vector P- and S-wave seismograms. We utilize dynamic image warping to extract the reflected P- or S-wave traveltimes. The P-wave velocity is first inverted using the P-wave traveltimes, followed by the S-wave velocity inversion with the S-wave traveltimes, during which the wave mode decomposition is applied to the gradient calculation. A synthetic example on the Sigsbee2A model proves the validity of our method for recovering the long-wavelength components of the model.

  7. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  8. Multilevel domain decomposition for electronic structure calculations

    International Nuclear Information System (INIS)

    Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.

    2007-01-01

    We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure

  9. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    Science.gov (United States)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms find increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, is therefore used to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, the components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed ensemble empirical mode decomposition combined with the multidimensional k-nearest neighbor model (EEMD-MKNN) has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the two-dimensional case to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has a higher forecast precision than the EMD-KNN, KNN and ARIMA methods.
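
    The two stages can be sketched as follows: a noise-assisted EEMD built on an assumed `emd` helper, and a one-step-ahead k-nearest-neighbor forecast of a (sub-)series with scikit-learn. The multidimensional two-target extension of the paper is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def eemd(signal, emd, n_ensembles=50, noise_std=0.2, rng=np.random.default_rng(0)):
    """Ensemble EMD: average the IMFs obtained from many noise-added copies of
    the signal. `emd(x) -> array of IMFs` is an assumed helper; all runs are
    truncated to the smallest number of IMFs before averaging."""
    runs = [np.asarray(emd(signal + noise_std * signal.std() *
                           rng.standard_normal(len(signal))))
            for _ in range(n_ensembles)]
    n_imfs = min(r.shape[0] for r in runs)
    return np.mean([r[:n_imfs] for r in runs], axis=0)

def knn_forecast(series, lag=5, k=3):
    """One-step-ahead forecast of a (sub-)series with k nearest neighbors."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    return model.predict(series[-lag:].reshape(1, -1))[0]
```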

  10. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    Science.gov (United States)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.

  11. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    Science.gov (United States)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
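
    The construction of the one-mode derivative network from the two-mode affiliation data can be sketched as below: given an incidence matrix of shareholdings, the projection B·Bᵀ links companies that share shareholders. Company names and weights are invented for illustration.

```python
import numpy as np

# Two-mode affiliation matrix B: rows = listed energy companies,
# columns = shareholders, entries = shareholding fraction (made-up data).
companies = ["CompanyA", "CompanyB", "CompanyC"]
shareholders = ["Fund1", "Fund2", "Fund3", "Fund4"]
B = np.array([[0.10, 0.05, 0.00, 0.02],
              [0.08, 0.00, 0.12, 0.00],
              [0.00, 0.07, 0.03, 0.01]])

# One-mode projection: companies are linked when they have shareholders in
# common, with edge weights accumulated over those common shareholders.
W = B @ B.T
np.fill_diagonal(W, 0.0)          # drop self-links

for i in range(len(companies)):
    for j in range(i + 1, len(companies)):
        if W[i, j] > 0:
            print(companies[i], "--", companies[j], "weight", round(W[i, j], 4))
```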

  12. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator.

    Science.gov (United States)

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M; Kevrekidis, Ioannis G

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
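
    For comparison with the trainable dictionary, the sketch below shows EDMD with a small fixed monomial dictionary: snapshot pairs are lifted through the dictionary and the finite-dimensional Koopman approximation is obtained by least squares. The example map and dictionary are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def dictionary(x):
    """Fixed observable dictionary: monomials up to degree 2 in two states."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def edmd(X, Y):
    """EDMD: X, Y are (n_samples, n_states) arrays of snapshot pairs x -> y.
    Returns K, the least-squares Koopman matrix on the dictionary subspace."""
    PsiX = np.vstack([dictionary(x) for x in X])
    PsiY = np.vstack([dictionary(y) for y in Y])
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)   # solves PsiX @ K ≈ PsiY
    return K

# Example: snapshot pairs from a simple nonlinear map (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.5 * X[:, 1] + 0.2 * X[:, 0] ** 2])
K = edmd(X, Y)
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])    # Koopman eigenvalue estimates
```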

  13. Spatio-Temporal Multiway Data Decomposition Using Principal Tensor Analysis on k-Modes: The R Package PTAk

    Directory of Open Access Journals (Sweden)

    Didier G. Leibovici

    2010-10-01

    Full Text Available The purpose of this paper is to describe the R package PTAk and how the spatio-temporal context can be taken into account in the analyses. Essentially, PTAk is a multiway multidimensional method to decompose a multi-entry data array, seen mathematically as a tensor of any order. This PTAk-modes method proposes a way of generalizing the SVD (singular value decomposition), as well as some other well-known methods included in the R package, such as PARAFAC or CANDECOMP and the PCAn-modes or Tucker-n model. The example datasets cover different domains with various spatio-temporal characteristics and issues: (i) medical imaging in neuropsychology with a functional MRI (magnetic resonance imaging) study, (ii) pharmaceutical research with a pharmacodynamic study with EEG (electro-encephalographic) data for a central nervous system (CNS) drug, and (iii) a geographical information system (GIS) with a climatic dataset that characterizes arid and semi-arid variations. All the methods implemented in the R package PTAk also support non-identity metrics, as well as penalizations during the optimization process. As a result of these flexibilities, together with pre-processing facilities, PTAk constitutes a framework for devising extensions of multidimensional methods such as correspondence analysis, discriminant analysis, and multidimensional scaling, also enabling spatio-temporal constraints.

  14. The Fourier decomposition method for nonlinear and non-stationary time series analysis.

    Science.gov (United States)

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-03-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and a comparison is made with empirical mode decomposition algorithms.

  15. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    It is a challenging task to learn a large Bayesian network from a small data set. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm that constructs the structure without having to learn the complete network directly. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.

  16. On practical challenges of decomposition-based hybrid forecasting algorithms for wind speed and solar irradiation

    International Nuclear Information System (INIS)

    Wang, Yamin; Wu, Lei

    2016-01-01

    This paper presents a comprehensive analysis on practical challenges of empirical mode decomposition (EMD) based algorithms on wind speed and solar irradiation forecasts that have been largely neglected in literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limit number of new data samples, may significantly differ from those used in training forecasting models. In turn, forecasting models established by original sub-series may not be suitable for newly decomposed sub-series and have to be trained more frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods could be worse than the non-decomposition based forecasting model, and are not effective in practical cases. Finally, the approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and the non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to

  17. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    Science.gov (United States)

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF₂) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulation and real signals experiment demonstrate the validity and effectiveness of the new approach.

  18. Inertial modes of rigidly rotating neutron stars in Cowling approximation

    International Nuclear Information System (INIS)

    Kastaun, Wolfgang

    2008-01-01

    In this article, we investigate inertial modes of rigidly rotating neutron stars, i.e. modes for which the Coriolis force is dominant. This is done using the assumption of a fixed spacetime (Cowling approximation). We present frequencies and eigenfunctions for a sequence of stars with a polytropic equation of state, covering a broad range of rotation rates. The modes were obtained with a nonlinear general relativistic hydrodynamic evolution code. We further show that the eigenequations for the oscillation modes can be written in a particularly simple form for the case of arbitrary fast but rigid rotation. Using these equations, we investigate some general characteristics of inertial modes, which are then compared to the numerically obtained eigenfunctions. In particular, we derive a rough analytical estimate for the frequency as a function of the number of nodes of the eigenfunction, and find that a similar empirical relation matches the numerical results with unexpected accuracy. We investigate the slow rotation limit of the eigenequations, obtaining two different sets of equations describing pressure and inertial modes. For the numerical computations we only considered axisymmetric modes, while the analytic part also covers nonaxisymmetric modes. The eigenfunctions suggest that the classification of inertial modes by the quantum numbers of the leading term of a spherical harmonic decomposition is artificial in the sense that the largest term is not strongly dominant, even in the slow rotation limit. The reason for the different structure of pressure and inertial modes is that the Coriolis force remains important in the slow rotation limit only for inertial modes. Accordingly, the scalar eigenequation we obtain in that limit is spherically symmetric for pressure modes, but not for inertial modes

  19. X-Ray Thomson Scattering Without the Chihara Decomposition

    Science.gov (United States)

    Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration

    X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000.

  20. The processing of aluminum gasarites via thermal decomposition of interstitial hydrides

    Science.gov (United States)

    Licavoli, Joseph J.

    Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to increase the application of gasarite foams in engineering design by removing several barriers to their industrial scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. It was found that significant variation was inherent to this process therefore several modifications were necessary to produce gasarites using this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology was determined to be more easily replicated if pores were stabilized by alumina additions and powders were dispersed evenly. In order to better characterize processing, high temperature and high ramp rate thermal decomposition data were gathered. It was found that the high ramp rate thermal decomposition behavior of several hydrides was more rapid than hydride kinetics at low ramp rates. This data was then used to estimate the contribution of several pore formation mechanisms to the development of pore structure. It was found that gas-metal eutectic growth can only be a viable pore formation mode if non-equilibrium conditions persist. Bubble capture cannot be a dominant pore growth mode due to high bubble terminal velocities. Direct gas evolution appears to be the most likely pore formation mode due to high gas evolution rate from the decomposing particulate and microstructural pore growth trends. The overall process was evaluated for its economic viability. It was found that thermal decomposition has potential for industrialization, but further refinements are necessary in order for the process to be viable.

  1. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    SUMMARY: Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  2. Discrimination of nuclear-explosion and lightning electromagnetic pulse

    International Nuclear Information System (INIS)

    Qi Shufeng; Li Ximei; Han Shaoqing; Niu Chao; Feng Jun; Liu Daizhi

    2012-01-01

    The discrimination of nuclear-explosion and lightning electromagnetic pulses was studied using empirical mode decomposition and the fractal analytical method. The box dimensions of the original nuclear-explosion and lightning electromagnetic pulse signals were calculated, and the box dimensions of the intrinsic mode functions (IMFs) obtained from these original signals by empirical mode decomposition were also obtained. The discrimination of nuclear explosion and lightning was studied using nearest neighbor classification. The experimental results show that the discrimination rate of the box dimension based on the first and second IMFs after empirical mode decomposition of the original signal is higher than that based on the third and fourth IMFs; the discrimination rate of the box dimension based on the original signal is higher than that based on any IMF; and the discrimination rate based on two-dimensional and three-dimensional features is higher and more stable than that based on a one-dimensional feature; in addition, the discrimination rate based on three-dimensional features is over 90%. (authors)
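
    A simple box-counting estimate of the fractal dimension of a pulse waveform's graph, of the kind used as a discrimination feature above, can be sketched as follows; the grid scales are arbitrary and the nearest-neighbor classification step is omitted.

```python
import numpy as np

def box_dimension(signal, scales=(4, 8, 16, 32, 64, 128)):
    """Box-counting dimension of the graph of a 1-D signal: cover the
    (time, amplitude) plane with square grids of decreasing box size and
    fit the slope of log(count) versus log(1/size)."""
    t = np.linspace(0.0, 1.0, len(signal))
    x = (signal - signal.min()) / (np.ptp(signal) + 1e-12)   # normalize to [0, 1]
    counts = []
    for n in scales:                       # n boxes per axis -> box size 1/n
        H, _, _ = np.histogram2d(t, x, bins=n, range=[[0, 1], [0, 1]])
        counts.append(np.count_nonzero(H))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Example: white noise has a rougher graph (dimension closer to 2) than a
# smooth sine (dimension close to 1).
rng = np.random.default_rng(0)
print(box_dimension(np.sin(2 * np.pi * 5 * np.linspace(0, 1, 4096))))
print(box_dimension(rng.standard_normal(4096)))
```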

  3. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    Science.gov (United States)

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10³–10⁶ bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with the gravesoil human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  4. Empirical dual energy calibration (EDEC) for cone-beam computed tomography

    International Nuclear Information System (INIS)

    Stenner, Philip; Berkus, Timo; Kachelriess, Marc

    2007-01-01

    Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require the precise knowledge of the detected x-ray spectra, and even if they are exactly known the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p₁ and p₂ are obtained as functions of the measured attenuation data q₁ and q₂ (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimension should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique, it inherently compensates for scatter.
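
    The calibration idea can be sketched as a least-squares polynomial fit: monomials in the measured attenuation pair (q₁, q₂) are regressed onto the known material-selective raw data of the calibration phantom, and the fitted polynomial is then applied to new dual-energy data. The polynomial order and data handling below are assumptions for illustration, not the published implementation.

```python
import numpy as np

def design_matrix(q1, q2, order=3):
    """All monomials q1^i * q2^j with i + j <= order (the decomposition polynomial)."""
    cols = [q1**i * q2**j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_decomposition(q1, q2, p_known, order=3):
    """Least-squares fit of the polynomial coefficients from calibration
    measurements with known material-selective raw data p_known."""
    A = design_matrix(q1, q2, order)
    coeffs, *_ = np.linalg.lstsq(A, p_known, rcond=None)
    return coeffs

def decompose(q1, q2, coeffs, order=3):
    """Apply the calibrated polynomial to new dual-energy raw data."""
    return design_matrix(np.atleast_1d(q1), np.atleast_1d(q2), order) @ coeffs
```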

  5. Revisiting the Granger Causality Relationship between Energy Consumption and Economic Growth in China: A Multi-Timescale Decomposition Approach

    Directory of Open Access Journals (Sweden)

    Lei Jiang

    2017-12-01

    Full Text Available The past four decades have witnessed rapid growth in the rate of energy consumption in China. A great deal of energy consumption has led to two major issues. One is energy shortages and the other is environmental pollution caused by fossil fuel combustion. Since energy saving plays a substantial role in addressing both issues, it is of vital importance to study the intrinsic characteristics of energy consumption and its relationship with economic growth. The topic of the nexus between energy consumption and economic growth has been hotly debated for years. However, conflicting conclusions have been drawn. In this paper, we provide a novel insight into the characteristics of the growth rate of energy consumption in China from a multi-timescale perspective by means of adaptive time-frequency data analysis; namely, the ensemble empirical mode decomposition method, which is suitable for the analysis of non-linear time series. Decomposition led to four intrinsic mode function (IMF components and a trend component with different periods. Then, we repeated the same procedure for the growth rate of China’s GDP and obtained four similar IMF components and a trend component. In the second stage, we performed the Granger causality test. The results demonstrated that, in the short run, there was a bidirectional causality relationship between economic growth and energy consumption, and in the long run a unidirectional relationship running from economic growth to energy consumption.
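
    The causality-testing stage can be illustrated with statsmodels, which reports, for each lag, the F-test p-value for whether the second series helps predict the first; the EEMD extraction of matching IMF components is assumed to have been done already, and random data stands in below purely for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# growth_gdp and growth_energy would be matching-period IMF components (or the
# raw growth-rate series); synthetic data is used here only as a placeholder.
rng = np.random.default_rng(0)
growth_gdp = rng.standard_normal(120)
growth_energy = 0.5 * np.roll(growth_gdp, 1) + 0.5 * rng.standard_normal(120)

# Does GDP growth Granger-cause energy-consumption growth?  The test checks
# whether the SECOND column improves prediction of the FIRST column.
data = np.column_stack([growth_energy, growth_gdp])
results = grangercausalitytests(data, maxlag=4)
for lag, res in results.items():
    print(lag, "F-test p-value:", round(res[0]["ssr_ftest"][1], 4))
```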

  6. Microbial decomposition of keratin in nature—a new hypothesis of industrial relevance

    DEFF Research Database (Denmark)

    Lange, Lene; Huang, Yuhong; Kamp Busk, Peter

    2016-01-01

    with the keratinases to loosen the molecular structure, thus giving the enzymes access to their substrate, the protein structure. With such complexity, it is relevant to compare microbial keratin decomposition with the microbial decomposition of well-studied polymers such as cellulose and chitin. Interestingly...... enzymatic and boosting factors needed for keratin breakdown have been used to formulate a hypothesis for mode of action of the LPMOs in keratin decomposition and for a model for degradation of keratin in nature. Testing such hypotheses and models still needs to be done. Even now, the hypothesis can serve...

  7. Schmidt decomposition for non-collinear biphoton angular wave functions

    International Nuclear Information System (INIS)

    Fedorov, M V

    2015-01-01

    Schmidt modes of non-collinear biphoton angular wave functions are found analytically. The experimentally realizable procedure for their separation is described. Parameters of the Schmidt decomposition are used to evaluate the degree of the biphoton's angular entanglement. (paper)
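
    Numerically, the Schmidt decomposition of a discretized biphoton angular amplitude is just its singular value decomposition, as sketched below with an illustrative double-Gaussian amplitude; the normalized squared singular values give the Schmidt coefficients, and the Schmidt number K is a standard measure of the degree of entanglement.

```python
import numpy as np

# Discretized two-photon angular amplitude psi(theta1, theta2) on a grid;
# the double-Gaussian form below is only an illustrative stand-in.
theta = np.linspace(-3, 3, 200)
T1, T2 = np.meshgrid(theta, theta, indexing="ij")
a, b = 0.5, 2.0   # widths along the correlated and anti-correlated directions
psi = np.exp(-((T1 + T2) ** 2) / (2 * a**2) - ((T1 - T2) ** 2) / (2 * b**2))

# Schmidt decomposition = singular value decomposition of the discretized amplitude.
U, s, Vt = np.linalg.svd(psi)
lam = s**2 / np.sum(s**2)             # Schmidt coefficients (probabilities)

# Schmidt number K quantifies the degree of angular entanglement (K = 1: separable).
K = 1.0 / np.sum(lam**2)
print("Schmidt number K ≈", round(K, 3))
# Columns of U and rows of Vt sampled on the grid are the Schmidt modes.
```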

  8. PROBLEMS WITH WIREDU'S EMPIRICALISM Martin Odei Ajei1 ...

    African Journals Online (AJOL)

    In his “Empiricalism: The Empirical Character of an African Philosophy”, Kwasi Wiredu sets out ... others, that an empirical metaphysical system contains both empirical ..... realms which multiple categories of existents inhabit and conduct their being in .... to a mode of reasoning that conceives categories polarized by formal.

  9. Towards automated human gait disease classification using phase space representation of intrinsic mode functions

    Science.gov (United States)

    Pratiher, Sawon; Patra, Sayantani; Pratiher, Souvik

    2017-06-01

    A novel analytical methodology for distinguishing healthy from neurological-disorder gait patterns is proposed, employing a set of oscillating components called intrinsic mode functions (IMFs). These IMFs are generated by empirical mode decomposition of the gait time series, and the Hilbert-transformed analytic-signal representation forms the elliptical complex-plane trace of each analytic IMF. The area measure and the relative change in the centroid position of the polygon formed by the convex hull of these analytic IMFs are taken as the discriminative features. A classification accuracy of 79.31% with an ensemble-learning AdaBoost classifier validates the adequacy of the proposed methodology for a computer-aided diagnostic (CAD) system for gait pattern identification. Also, the efficacy of several potential biomarkers, such as the bandwidths of the amplitude-modulation and frequency-modulation IMFs and their mean frequencies from the Fourier-Bessel expansion of each analytic IMF, is discussed with regard to their potential for the diagnosis, identification and classification of gait patterns.
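
    A hedged sketch of the feature computation is given below: each IMF (from an assumed `emd` helper) is turned into its analytic signal with the Hilbert transform, and the area and centroid of the convex hull of its complex-plane trace are taken as features, following the methodology described above.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.spatial import ConvexHull

def phase_space_features(imf):
    """Features of one intrinsic mode function: area and centroid of the
    convex hull of its analytic-signal trace in the complex plane."""
    z = hilbert(imf)                             # analytic IMF
    pts = np.column_stack([z.real, z.imag])
    hull = ConvexHull(pts)
    centroid = pts[hull.vertices].mean(axis=0)   # mean of hull vertices (simple proxy)
    return hull.volume, centroid                 # in 2-D, .volume is the enclosed area

def gait_feature_vector(gait_series, emd):
    """Stack hull-area and centroid features over all IMFs of a gait series
    (`emd` is an assumed decomposition helper returning a list of IMFs)."""
    feats = []
    for imf in emd(gait_series):
        area, (cx, cy) = phase_space_features(imf)
        feats.extend([area, cx, cy])
    return np.array(feats)
```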

  10. On Convergence of Extended Dynamic Mode Decomposition to the Koopman Operator

    Science.gov (United States)

    Korda, Milan; Mezić, Igor

    2018-04-01

    Extended dynamic mode decomposition (EDMD) (Williams et al. in J Nonlinear Sci 25(6):1307-1346, 2015) is an algorithm that approximates the action of the Koopman operator on an N-dimensional subspace of the space of observables by sampling at M points in the state space. Assuming that the samples are drawn either independently or ergodically from some measure μ , it was shown in Klus et al. (J Comput Dyn 3(1):51-79, 2016) that, in the limit as M→ ∞, the EDMD operator K_{N,M} converges to K_N, where K_N is the L_2(μ )-orthogonal projection of the action of the Koopman operator on the finite-dimensional subspace of observables. We show that, as N → ∞, the operator K_N converges in the strong operator topology to the Koopman operator. This in particular implies convergence of the predictions of future values of a given observable over any finite time horizon, a fact important for practical applications such as forecasting, estimation and control. In addition, we show that accumulation points of the spectra of K_N correspond to the eigenvalues of the Koopman operator with the associated eigenfunctions converging weakly to an eigenfunction of the Koopman operator, provided that the weak limit of the eigenfunctions is nonzero. As a by-product, we propose an analytic version of the EDMD algorithm which, under some assumptions, allows one to construct K_N directly, without the use of sampling. Finally, under additional assumptions, we analyze convergence of K_{N,N} (i.e., M=N), proving convergence, along a subsequence, to weak eigenfunctions (or eigendistributions) related to the eigenmeasures of the Perron-Frobenius operator. No assumptions on the observables belonging to a finite-dimensional invariant subspace of the Koopman operator are required throughout.

  11. Global mode decomposition of supersonic impinging jet noise

    Science.gov (United States)

    Hildebrand, Nathaniel; Nichols, Joseph W.

    2015-11-01

    We apply global stability analysis to an ideally expanded, Mach 1.5, turbulent jet that impinges on a flat surface. The analysis extracts axisymmetric and helical instability modes, involving coherent vortices, shocks, and acoustic feedback, which we use to help explain and predict the effectiveness of microjet control. High-fidelity large eddy simulations (LES) were performed at nozzle-to-wall distances of 4 and 4.5 throat diameters with and without sixteen microjets positioned uniformly around the nozzle lip. These flow configurations conform exactly to experiments performed at Florida State University. Stability analysis about LES mean fields predicted the least stable global mode with a frequency that matched the impingement tone observed in experiments at a nozzle-to-wall distance of 4 throat diameters. The Reynolds-averaged Navier-Stokes (RANS) equations were solved at five nozzle-to-wall distances to create base flows that were used to investigate the influence of this parameter. A comparison of the eigenvalue spectra computed from the stability analysis about LES and RANS base flows resulted in good agreement. We also investigate the effect of the boundary layer state as it emerges from the nozzle using a multi-block global mode solver. Computational resources were provided by the Argonne Leadership Computing Facility.

  12. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    Science.gov (United States)

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been studied by researchers in recent decades. This problem directly affects operation of the power network and, owing to the high volatility of the signal, an accurate prediction model is required. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
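    The decompose-predict-aggregate structure of such hybrid models can be illustrated with standard components. The sketch below substitutes plain EMD for the paper's improved EMD and omits the feature selection and the intelligent tuning of the forecast engine; PyEMD and scikit-learn's SVR are assumed stand-ins, and the lag length and SVR settings are illustrative.

```python
# A simplified decompose-predict-aggregate sketch: plain EMD (not the paper's IEMD),
# no feature selection, and an untuned SVR as the forecast engine. PyEMD and
# scikit-learn are assumed; lag length and SVR settings are illustrative.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVR

def lagged(series, n_lags):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def forecast_next(pv_series, n_lags=24):
    """One-step-ahead forecast: predict each decomposed component with SVR, then sum."""
    components = EMD().emd(np.asarray(pv_series, dtype=float))
    prediction = 0.0
    for comp in components:
        X, y = lagged(comp, n_lags)
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        prediction += model.predict(comp[-n_lags:].reshape(1, -1))[0]
    return prediction

# Example on a synthetic PV-like signal with a daily cycle
t = np.arange(24 * 30)
pv = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(forecast_next(pv))
```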

  13. GPR random noise reduction using BPD and EMD

    Science.gov (United States)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) exploration is a new high-frequency technology that explores near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources and its correlation to the adjacent traces is nearly zero. This characteristic of random noise, along with the high accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise by combining basis pursuit denoising with empirical mode decomposition. Our results on both synthetic and real examples show that empirical mode decomposition in combination with basis pursuit denoising (BPD) provides, thanks to the sifting process, more satisfactory outputs than a purely time-domain implementation of the BPD method. They also demonstrate that, because of the high computational cost, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
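    One way to realize an EMD-plus-BPD scheme of this kind is to re-express the noisiest (highest-frequency) IMFs sparsely in a transform dictionary through an L1-penalized fit, i.e. the penalized (Lasso) form of basis pursuit denoising, and then rebuild the trace. The sketch below assumes PyEMD, SciPy and scikit-learn; the DCT dictionary, the number of IMFs treated as noisy, and the penalty weight are illustrative choices, not the authors' settings.

```python
# A rough sketch of an EMD-plus-BPD denoiser: the first (noisiest) IMFs are
# re-expressed sparsely in a DCT dictionary via an L1-penalized fit (the Lasso,
# i.e. the penalized form of basis pursuit denoising) and the trace is rebuilt.
# PyEMD, SciPy and scikit-learn are assumed; all parameter values are illustrative.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso
from PyEMD import EMD

def emd_bpd_denoise(trace, n_noisy_imfs=2, alpha=0.01):
    trace = np.asarray(trace, dtype=float)
    imfs = EMD().emd(trace)
    D = idct(np.eye(trace.size), norm="ortho", axis=0)    # DCT synthesis dictionary
    cleaned = []
    for k, imf in enumerate(imfs):
        if k < n_noisy_imfs:                              # high-frequency IMFs carry most random noise
            coef = Lasso(alpha=alpha, max_iter=5000).fit(D, imf).coef_
            cleaned.append(D @ coef)                      # sparse reconstruction of the IMF
        else:
            cleaned.append(imf)
    return np.sum(cleaned, axis=0)

# Example: a synthetic GPR-like reflection wavelet buried in random noise
t = np.linspace(0, 1, 512)
clean = np.exp(-((t - 0.4) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)
noisy = clean + 0.2 * np.random.default_rng(2).normal(size=t.size)
print(np.linalg.norm(noisy - clean), np.linalg.norm(emd_bpd_denoise(noisy) - clean))
```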

  14. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

    Full Text Available The two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced by the capillary wetness impregnation method at 5% mass concentration. Sample reactivity was studied by thermogravimetric analysis with a staged gaseous medium supply (argon and air) at a heating rate of 10 °C/min and intermediate isothermal soaking. The introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate the thermal decomposition of the sample. Mass-spectral analysis results reveal that the significant difference in process characteristics is connected to the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  15. Extraction of the mode shapes of a segmented ship model with a hydroelastic response

    Directory of Open Access Journals (Sweden)

    Yooil Kim

    2015-11-01

    Full Text Available The mode shapes of a segmented hull model towed in a model basin were predicted using both the Proper Orthogonal Decomposition (POD) and the cross random decrement technique. The proper orthogonal decomposition, which is also known as Karhunen-Loeve decomposition, is an emerging technology as a useful signal processing technique in structural dynamics. The technique is based on the fact that the eigenvectors of a spatial coherence matrix become the mode shapes of the system under free and randomly excited forced vibration conditions. Taking advantage of the simplicity of POD, efforts have been made to reveal the mode shapes of a vibrating flexible hull under random wave excitation. First, the segmented hull model of a 400 K ore carrier with 3 flexible connections was towed in a model basin under different sea states and the time histories of the vertical bending moment at three different locations were measured. The measured response time histories were processed using the proper orthogonal decomposition, eventually to obtain both the first and second vertical vibration modes of the flexible hull. A comparison of the obtained mode shapes with those obtained using the cross random decrement technique showed excellent correspondence between the two results.
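    The POD step itself is compact: with the measured responses stacked into a snapshot matrix, the left singular vectors give the mode shapes and the squared singular values give their energy ranking. The numpy sketch below uses a synthetic three-sensor example as a stand-in for the measured vertical-bending-moment histories.

```python
# A minimal POD sketch: with zero-mean responses stacked as a snapshot matrix
# (sensors x time), the left singular vectors are the mode shapes and the squared
# singular values give the energy ranking. The three-sensor data here are synthetic.
import numpy as np

def pod_modes(snapshots):
    """snapshots: (n_sensors, n_time) measured responses; returns mode shapes and energies."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U, energy            # columns of U are the POD mode shapes

# Example: two "vibration modes" sampled at three locations along a hull
t = np.linspace(0, 20, 4000)
shape1, shape2 = np.array([0.7, 1.0, 0.7]), np.array([-1.0, 0.0, 1.0])
X = np.outer(shape1, np.sin(2.1 * t)) + 0.3 * np.outer(shape2, np.sin(5.3 * t))
modes, energy = pod_modes(X + 0.01 * np.random.default_rng(3).normal(size=X.shape))
print(np.round(modes, 2), np.round(energy, 3))
```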

  16. Locally extracting scalar, vector and tensor modes in cosmological perturbation theory

    International Nuclear Information System (INIS)

    Clarkson, Chris; Osano, Bob

    2011-01-01

    Cosmological perturbation theory relies on the decomposition of perturbations into so-called scalar, vector and tensor modes. This decomposition is non-local and depends on unknowable boundary conditions. The non-locality is particularly important at second and higher order because perturbative modes are sourced by products of lower order modes, which must be integrated over all space in order to isolate each mode. However, given a trace-free rank-2 tensor, a locally defined scalar mode may be trivially derived by taking two divergences, which knocks out the vector and tensor degrees of freedom. A similar local differential operation will return a pure vector mode. This means that scalar and vector degrees of freedom have local descriptions. The corresponding local extraction of the tensor mode is unknown however. We give it here. The operators we define are useful for defining gauge-invariant quantities at second order. We perform much of our analysis using an index-free 'vector-calculus' approach which makes manipulating tensor equations considerably simpler. (papers)

  17. Decomposition of the swirling flow field downstream of Francis turbine runner

    International Nuclear Information System (INIS)

    Rudolf, P; Štefan, D

    2012-01-01

    Practical application of proper orthogonal decomposition (POD) is presented. Spatio-temporal behaviour of the coherent vortical structures in the draft tube of hydraulic turbine is studied for two partial load operating points. POD enables to identify the eigen modes, which compose the flow field and rank the modes according to their energy. Swirling flow fields are decomposed, which provides information about their streamwise and crosswise development and the energy transfer among modes. Presented methodology also assigns frequencies to the particular modes, which helps to identify the spectral properties of the flow with concrete mode shapes. Thus POD offers a complementary view to current time domain simulations or measurements.

  18. High-temperature Raman study of L-alanine, L-threonine and taurine crystals related to thermal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Cavaignac, A.L.O. [Centro de Ciências Sociais, Saúde e Tecnologia, Universidade Federal do Maranhão, Imperatriz, MA 65900-410 (Brazil); Lima, R.J.C., E-mail: ricardo.lima.ufma@gmail.com [Centro de Ciências Sociais, Saúde e Tecnologia, Universidade Federal do Maranhão, Imperatriz, MA 65900-410 (Brazil); Façanha Filho, P.F. [Centro de Ciências Sociais, Saúde e Tecnologia, Universidade Federal do Maranhão, Imperatriz, MA 65900-410 (Brazil); Moreno, A.J.D. [Coordenação de Ciências Naturais, Universidade Federal do Maranhão, Bacabal, MA 65700-000 (Brazil); Freire, P.T.C. [Departamento de Física, Universidade Federal do Ceará, Fortaleza, CE 60455-760 (Brazil)

    2016-03-01

    In this work high-temperature Raman spectra are used to compare temperature dependence of the lattice mode wavenumber of L-alanine, L-threonine and taurine crystals. Anharmonic effects observed are associated with intermolecular N-H···O hydrogen bond that plays an important role in thermal decomposition process of these materials. Short and strong hydrogen bonds in L-alanine crystal were associated with anharmonic effects in lattice modes leading to low thermal stability compared to taurine crystals. Connection between thermal decomposition process and anharmonic effects is furnished for the first time.

  19. High-temperature Raman study of L-alanine, L-threonine and taurine crystals related to thermal decomposition

    International Nuclear Information System (INIS)

    Cavaignac, A.L.O.; Lima, R.J.C.; Façanha Filho, P.F.; Moreno, A.J.D.; Freire, P.T.C.

    2016-01-01

    In this work high-temperature Raman spectra are used to compare temperature dependence of the lattice mode wavenumber of L-alanine, L-threonine and taurine crystals. Anharmonic effects observed are associated with intermolecular N-H···O hydrogen bond that plays an important role in thermal decomposition process of these materials. Short and strong hydrogen bonds in L-alanine crystal were associated with anharmonic effects in lattice modes leading to low thermal stability compared to taurine crystals. Connection between thermal decomposition process and anharmonic effects is furnished for the first time.

  20. Mode regularization of the supersymmetric sphaleron and kink: Zero modes and discrete gauge symmetry

    International Nuclear Information System (INIS)

    Goldhaber, Alfred Scharff; Litvintsev, Andrei; Nieuwenhuizen, Peter van

    2001-01-01

    To obtain the one-loop corrections to the mass of a kink by mode regularization, one may take one-half the result for the mass of a widely separated kink-antikink (or sphaleron) system, where the two bosonic zero modes count as two degrees of freedom, but the two fermionic zero modes as only one degree of freedom in the sums over modes. For a single kink, there is one bosonic zero mode degree of freedom, but it is necessary to average over four sets of fermionic boundary conditions in order (i) to preserve the fermionic Z_2 gauge invariance ψ→-ψ, (ii) to satisfy the basic principle of mode regularization that the boundary conditions in the trivial and the kink sector should be the same, (iii) that the energy stored at the boundaries cancels and (iv) to avoid obtaining a finite, uniformly distributed energy which would violate cluster decomposition. The average number of fermionic zero-energy degrees of freedom in the presence of the kink is then indeed 1/2. For boundary conditions leading to only one fermionic zero-energy solution, the Z_2 gauge invariance identifies two seemingly distinct 'vacua' as the same physical ground state, and the single fermionic zero-energy solution does not correspond to a degree of freedom. Other boundary conditions lead to two spatially separated ω∼0 solutions, corresponding to one (spatially delocalized) degree of freedom. This nonlocality is consistent with the principle of cluster decomposition for correlators of observables

  1. Quantitative elementary mode analysis of metabolic pathways: the example of yeast glycolysis

    Directory of Open Access Journals (Sweden)

    Kanehisa Minoru

    2006-04-01

    Full Text Available Abstract Background Elementary mode analysis of metabolic pathways has proven to be a valuable tool for assessing the properties and functions of biochemical systems. However, little comprehension of how individual elementary modes are used in real cellular states has been achieved so far. A quantitative measure of fluxes carried by individual elementary modes is of great help to identify dominant metabolic processes, and to understand how these processes are redistributed in biological cells in response to changes in environmental conditions, enzyme kinetics, or chemical concentrations. Results Selecting a valid decomposition of a flux distribution onto a set of elementary modes is not straightforward, since there is usually an infinite number of possible such decompositions. We first show that two recently introduced decompositions are very closely related and assign the same fluxes to reversible elementary modes. Then, we show how such decompositions can be used in combination with kinetic modelling to assess the effects of changes in enzyme kinetics on the usage of individual metabolic routes, and to analyse the range of attainable states in a metabolic system. This approach is illustrated by the example of yeast glycolysis. Our results indicate that only a small subset of the space of stoichiometrically feasible steady states is actually reached by the glycolysis system, even when large variation intervals are allowed for all kinetic parameters of the model. Among eight possible elementary modes, the standard glycolytic route remains dominant in all cases, and only one other elementary mode is able to gain significant flux values in steady state. Conclusion These results indicate that a combination of structural and kinetic modelling significantly constrains the range of possible behaviours of a metabolic system. All elementary modes are not equal contributors to physiological cellular states, and this approach may open a direction toward a

  2. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for the effective reservoir management. In this research, an artificial neural network (ANN) model coupled with the ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using EEMD technique for attaining deeper insight into the data characteristics. Then all IMF components and residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over ANN approach in medium and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
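    A minimal version of the EEMD-ANN scheme can be written as: decompose the runoff record with EEMD, train a small network on lagged values of each component, and sum the component forecasts. The sketch below assumes PyEMD and scikit-learn; the lag length, network size and noise settings are illustrative rather than the values used in the paper.

```python
# A compact sketch of the EEMD-ANN scheme: decompose the runoff record with EEMD,
# train a small network on lagged values of each component, and sum the forecasts.
# PyEMD and scikit-learn are assumed; lags, network size and noise width are
# illustrative choices rather than the values used in the paper.
import numpy as np
from PyEMD import EEMD
from sklearn.neural_network import MLPRegressor

def eemd_ann_forecast(runoff, n_lags=4):
    runoff = np.asarray(runoff, dtype=float)
    components = EEMD(trials=100, noise_width=0.2).eemd(runoff)
    forecast = 0.0
    for comp in components:
        X = np.column_stack([comp[i:comp.size - n_lags + i] for i in range(n_lags)])
        y = comp[n_lags:]
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
        forecast += net.predict(comp[-n_lags:].reshape(1, -1))[0]
    return forecast

# Example: one-step-ahead forecast of a synthetic annual runoff record
rng = np.random.default_rng(4)
years = np.arange(60)
runoff = 100 + 10 * np.sin(2 * np.pi * years / 11) + 5 * rng.normal(size=years.size)
print(eemd_ann_forecast(runoff))
```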

  3. Correlation of Respiratory Signals and Electrocardiogram Signals via Empirical Mode Decomposition

    KAUST Repository

    El Fiky, Ahmed Osama

    2011-01-01

    research field for signal processing experts to ensure better and clear representation of the different cardiac activities. Given the nonlinear and non-stationary properties of ECGs, it is not a simple task to cancel the undesired noise terms without

  4. Excursions through KK modes

    Energy Technology Data Exchange (ETDEWEB)

    Furuuchi, Kazuyuki [Manipal Centre for Natural Sciences, Manipal University,Manipal, Karnataka 576104 (India)

    2016-07-07

    In this article we study Kaluza-Klein (KK) dimensional reduction of massive Abelian gauge theories with charged matter fields on a circle. Since local gauge transformations change position dependence of the charged fields, the decomposition of the charged matter fields into KK modes is gauge dependent. While whole KK mass spectrum is independent of the gauge choice, the mode number depends on the gauge. The masses of the KK modes also depend on the field value of the zero-mode of the extra dimensional component of the gauge field. In particular, one of the KK modes in the KK tower of each massless 5D charged field becomes massless at particular values of the extra-dimensional component of the gauge field. When the extra-dimensional component of the gauge field is identified with the inflaton, this structure leads to recursive cosmological particle productions.

  5. Excursions through KK modes

    International Nuclear Information System (INIS)

    Furuuchi, Kazuyuki

    2016-01-01

    In this article we study Kaluza-Klein (KK) dimensional reduction of massive Abelian gauge theories with charged matter fields on a circle. Since local gauge transformations change position dependence of the charged fields, the decomposition of the charged matter fields into KK modes is gauge dependent. While whole KK mass spectrum is independent of the gauge choice, the mode number depends on the gauge. The masses of the KK modes also depend on the field value of the zero-mode of the extra dimensional component of the gauge field. In particular, one of the KK modes in the KK tower of each massless 5D charged field becomes massless at particular values of the extra-dimensional component of the gauge field. When the extra-dimensional component of the gauge field is identified with the inflaton, this structure leads to recursive cosmological particle productions.

  6. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2015-09-01

    Full Text Available In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for CTLR and DCP modes are established. The explicit expression of decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, for the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland.

  7. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.

    2014-05-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is quite needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real time tumor ablation simulations without sacrificing accuracy. In such a way, the tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
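    The forecasting step that makes DMD attractive here can be summarized in a few lines: fit a reduced linear operator to successive solver snapshots, then advance the latest snapshot with that operator instead of re-running the solver. The numpy sketch below is a generic exact-DMD construction, not the authors' meshless implementation; the rank truncation and the synthetic field are illustrative.

```python
# A bare-bones exact-DMD sketch (not the authors' meshless implementation): fit a
# reduced linear operator to successive solver snapshots, then advance the latest
# snapshot with it instead of re-running the solver. Rank and data are illustrative.
import numpy as np

def dmd_predict(snapshots, n_steps, rank=10):
    """snapshots: (n_points, n_times) solver output; returns the field n_steps later."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, int(np.sum(s > 1e-10 * s[0])))     # guard against near-zero singular values
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)      # reduced one-step operator
    state = U.T @ snapshots[:, -1]
    for _ in range(n_steps):
        state = A_tilde @ state                      # cheap time stepping
    return U @ state                                 # back to the full field

# Example: snapshots of a decaying two-mode "temperature" field on 200 points
x = np.linspace(0, 1, 200)
times = np.arange(40) * 0.1
snaps = np.array([np.exp(-t) * np.sin(np.pi * x)
                  + 0.5 * np.exp(-3 * t) * np.sin(3 * np.pi * x) for t in times]).T
exact = np.exp(-4.4) * np.sin(np.pi * x) + 0.5 * np.exp(-3 * 4.4) * np.sin(3 * np.pi * x)
print(np.max(np.abs(dmd_predict(snaps, n_steps=5) - exact)))   # small extrapolation error
```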

  8. Empirical P-L-C relations for delta Scuti stars

    International Nuclear Information System (INIS)

    Gupta, S.K.

    1978-01-01

    Separate P-L-C relations have been empirically derived by sampling the delta Scuti stars according to their pulsation modes. The results based on these relations have been compared with those estimated from the model based P-L-C relations and the other existing empirical P-L-C relations. It is found that a separate P-L-C relation for each pulsation mode provides a better correspondence with observations. (Auth.)
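    An empirical P-L-C relation of the generic form M_V = a log P + b (B-V) + c can be fitted by ordinary least squares, and doing so separately for each pulsation mode mirrors the approach advocated above. The sketch below uses synthetic data and assumed coefficients purely for illustration; the functional form is a common generic choice, not necessarily the one used in the paper.

```python
# Illustrative least-squares fit of a generic P-L-C relation, M_V = a*log10(P) + b*(B-V) + c,
# done separately for each pulsation mode as advocated above. The functional form,
# coefficients and synthetic data are assumptions for demonstration only.
import numpy as np

def fit_plc(log_period, colour, magnitude):
    A = np.column_stack([log_period, colour, np.ones_like(log_period)])
    coeffs, *_ = np.linalg.lstsq(A, magnitude, rcond=None)
    return coeffs                                  # (a, b, c)

rng = np.random.default_rng(5)
true = {"fundamental": (-3.0, 2.5, -1.0), "first overtone": (-3.3, 2.2, -1.5)}
for mode, (a, b, c) in true.items():
    logP = rng.uniform(-1.3, -0.7, 40)             # periods of roughly 0.05-0.2 d
    col = rng.uniform(0.2, 0.4, 40)
    M = a * logP + b * col + c + 0.05 * rng.normal(size=40)
    print(mode, np.round(fit_plc(logP, col, M), 2))
```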

  9. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    Directory of Open Access Journals (Sweden)

    Zaichao Ma

    2015-04-01

    Full Text Available Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analyses for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although the improved EMD, well known as the ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree. Moreover, EEMD needs to determine the amplitude of the added noise. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), integrating Phase Space Reconstruction (PSR) and Manifold Learning (ML), for modifying EEMD. We also provide the principle and detailed procedure of PSEEMD, and analyses of a simulation signal and an actual vibration signal derived from a rubbing rotor are performed. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixing features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features contaminated by a certain amount of noise.

  10. GPR Signal Denoising and Target Extraction With the CEEMD Method

    KAUST Repository

    Li, Jing

    2015-04-17

    In this letter, we apply a time and frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) method in ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components, with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode-mixing problem in the EMD method and improve the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the difference between the basic theory of EMD, EEMD, and CEEMD. Then, we compare the time and frequency analysis with Hilbert-Huang transform to test the results of different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.

  11. Spectral decomposition of nonlinear systems with memory

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.

  12. Investigation of the capability of the Compact Polarimetry mode to Reconstruct Full Polarimetry mode using RADARSAT2 data

    Directory of Open Access Journals (Sweden)

    S. Boularbah

    2012-06-01

    Full Text Available Recently, there has been growing interest in dual-pol systems that transmit one polarization and receive two polarizations. Souyris et al. proposed a DP mode called compact polarimetry (CP), which is able to reduce the complexity, cost, mass, and data rate of a SAR system while attempting to maintain many capabilities of a fully polarimetric system. This paper provides a comparison of the information content of full quad-pol data and the pseudo quad-pol data derived from compact polarimetric SAR modes. A pseudo-covariance matrix can be reconstructed following Souyris’s approach and is shown to be similar to the full polarimetric (FP) covariance matrix. Both the polarimetric signatures based on the Kennaugh matrix and the Freeman and Durden decomposition in the context of this compact polarimetry mode are explored. The Freeman and Durden decomposition is used in our study because of its direct relationship to the reflection symmetry. We illustrate our results by using the polarimetric SAR images of Algiers city in Algeria acquired by the RadarSAT2 in C-band.

  13. Dynamics Simulations and Statistical Modeling of Thermal Decomposition of 1-Ethyl-3-methylimidazolium Dicyanamide and 1-Ethyl-2,3-dimethylimidazolium Dicyanamide

    Science.gov (United States)

    2014-10-02

    reaction pathway. Vibrational frequencies and zero-point energies (ZPE) were scaled by factors of 0.955 and 0.981, respectively. The corrected ZPE were ... phases for different modes. Each molecule has ZPE in all vibrational modes. Because the decomposition time scale at typical pyrolysis temperatures would be ... vibration of EMIM+DCA− and products, including ZPE. It is obvious that the time scale of decomposition depends on the simulation temperature. In this

  14. MADCam: The multispectral active decomposition camera

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille

    2001-01-01

    A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform...... that utilised information drawn from the temporal dimension instead of the traditional spatial approach. Using the CIF format (352x288) frame rates up to 30 Hz are obtained and in VGA mode (640x480) up to 15 Hz....

  15. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode in the system. The great advantage of structural model decomposition is that (i) it allows to design residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals will need to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  16. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    Science.gov (United States)

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).

  17. Applications of tensor (multiway array) factorizations and decompositions in data mining

    DEFF Research Database (Denmark)

    Mørup, Morten

    2011-01-01

    Tensor (multiway array) factorization and decomposition has become an important tool for data mining. Fueled by the computational power of modern computers, researchers can now analyze large-scale tensorial structured data that only a few years ago would have been impossible. Tensor factorizations...... have several advantages over two-way matrix factorizations including uniqueness of the optimal solution and component identification even when most of the data is missing. Furthermore, multiway decomposition techniques explicitly exploit the multiway structure that is lost when collapsing some...... of the modes of the tensor in order to analyze the data by regular matrix factorization approaches. Multiway decomposition is being applied to new fields every year and there is no doubt that the future will bring many exciting new applications. The aim of this overview is to introduce the basic concepts

  18. Empirical Research on China’s Carbon Productivity Decomposition Model Based on Multi-Dimensional Factors

    Directory of Open Access Journals (Sweden)

    Jianchang Lu

    2015-04-01

    Full Text Available Based on the international community’s analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper, aiming to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China’s contribution to carbon productivity is analyzed along the dimensions of influencing factors, regional structure and industrial structure. The study comes to the following conclusions: (a) economic output, provincial carbon productivity and energy structure are the most influential factors, which is consistent with China’s current actual policy; (b) the distribution patterns of economic output, carbon productivity and energy structure in different regions have nothing to do with the regional economic development patterns in the traditional Chinese sense; (c) considering regional protectionism, the actual situation of each region needs to be considered at the same time; (d) in the study of the industrial structure, the contribution of industry is the most prominent factor for China’s carbon productivity, while industrial restructuring has not yet been done well enough.
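    The additive LMDI machinery underlying such a model is small: the change in an aggregate that factors into a product is split into per-factor contributions weighted by the logarithmic mean of the aggregate's start and end values. The toy two-factor sketch below illustrates this; it is not the paper's multi-dimensional model, and the factor names are placeholders.

```python
# A toy two-factor additive LMDI decomposition (not the paper's full model): the
# change in an aggregate V = x1 * x2 between a base year and a target year is split
# into per-factor contributions weighted by the logarithmic mean of V. Factor names
# are placeholders.
import numpy as np

def log_mean(a, b):
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_contributions(x0, xT):
    """x0, xT: dicts of factor values; the aggregate is the product of all factors."""
    V0, VT = np.prod(list(x0.values())), np.prod(list(xT.values()))
    L = log_mean(VT, V0)
    return {k: L * np.log(xT[k] / x0[k]) for k in x0}   # contributions sum to VT - V0

base = {"economic_output": 1.00, "energy_structure": 0.80}
target = {"economic_output": 1.30, "energy_structure": 0.85}
effects = lmdi_contributions(base, target)
print(effects, sum(effects.values()))   # the sum equals 1.30*0.85 - 1.00*0.80
```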

  19. Infrared multiphoton absorption and decomposition

    International Nuclear Information System (INIS)

    Evans, D.K.; McAlpine, R.D.

    1984-01-01

    The discovery of infrared laser induced multiphoton absorption (IRMPA) and decomposition (IRMPD) by Isenor and Richardson in 1971 generated a great deal of interest in these phenomena. This interest was increased with the discovery by Ambartzumian, Letokhov, Ryadbov and Chekalin that isotopically selective IRMPD was possible. One of the first speculations about these phenomena was that it might be possible to excite a particular mode of a molecule with the intense infrared laser beam and cause decomposition or chemical reaction by channels which do not predominate thermally, thus providing new synthetic routes for complex chemicals. The potential applications to isotope separation and novel chemistry stimulated efforts to understand the underlying physics and chemistry of these processes. At ICOMP I, in 1977 and at ICOMP II in 1980, several authors reviewed the current understandings of IRMPA and IRMPD as well as the particular aspect of isotope separation. There continues to be a great deal of effort into understanding IRMPA and IRMPD and we will briefly review some aspects of these efforts with particular emphasis on progress since ICOMP II. 31 references

  20. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    Science.gov (United States)

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitute is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both of them students, showed significant differences between the two scoring methods. These results suggest that scoring of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.

  1. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    Full Text Available As one of the most promising renewable resources in electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for the regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H weighted average smoothing method, ensemble empirical mode decomposition (EEMD algorithm, and nonlinear autoregressive (NAR neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.

  2. Robust-mode analysis of hydrodynamic flows

    Science.gov (United States)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal components analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.

  3. Mental Task Classification Scheme Utilizing Correlation Coefficient Extracted from Interchannel Intrinsic Mode Function.

    Science.gov (United States)

    Rahman, Md Mostafizur; Fattah, Shaikh Anowarul

    2017-01-01

    In view of recent increase of brain computer interface (BCI) based applications, the importance of efficient classification of various mental tasks has increased prodigiously nowadays. In order to obtain effective classification, efficient feature extraction scheme is necessary, for which, in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized. It is expected that the correlation obtained from different combination of channels will be different for different mental tasks, which can be exploited to extract distinctive feature. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficient is extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
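    The interchannel feature construction can be sketched directly: decompose each channel with EMD, correlate corresponding IMFs across every channel pair, and train an SVM on the resulting vectors. The code below assumes PyEMD and scikit-learn, uses synthetic three-channel data as a stand-in for EEG, and omits the intrachannel statistical features described above.

```python
# A condensed sketch of the interchannel IMF correlation features: each channel is
# decomposed with EMD, correlation coefficients are taken between corresponding IMFs
# of every channel pair, and an SVM is trained on the feature vectors. PyEMD and
# scikit-learn are assumed; sizes and data are illustrative.
import numpy as np
from itertools import combinations
from PyEMD import EMD
from sklearn.svm import SVC

def first_imfs(signal, n_imfs):
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    if len(imfs) < n_imfs:                        # pad (rarely needed for signals this long)
        imfs = np.vstack([imfs, np.zeros((n_imfs - len(imfs), imfs.shape[1]))])
    return imfs[:n_imfs]

def interchannel_imf_features(trial, n_imfs=3):
    """trial: (n_channels, n_samples) EEG segment -> 1-D correlation feature vector."""
    imfs = [first_imfs(ch, n_imfs) for ch in trial]
    feats = []
    for a, b in combinations(range(len(imfs)), 2):
        for k in range(n_imfs):
            c = np.corrcoef(imfs[a][k], imfs[b][k])[0, 1]
            feats.append(0.0 if np.isnan(c) else c)
    return np.array(feats)

# Toy two-class example with synthetic three-channel "EEG"
rng = np.random.default_rng(6)
def make_trial(label):
    base = np.sin(2 * np.pi * (8 if label else 20) * np.linspace(0, 1, 256))
    return np.vstack([base + 0.5 * rng.normal(size=256) for _ in range(3)])

X = np.array([interchannel_imf_features(make_trial(lbl)) for lbl in [0, 1] * 20])
y = np.array([0, 1] * 20)
print(SVC(kernel="rbf").fit(X[:30], y[:30]).score(X[30:], y[30:]))
```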

  4. Decomposition Technique for Remaining Useful Life Prediction

    Science.gov (United States)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.

  5. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  6. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    Science.gov (United States)

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  7. A Systematic Assessment of Empirical Research on Foreign Entry Mode

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2016-01-01

    of dimensions. Findings: The findings question the frequent use of commonly used measures (e.g. advertising intensity) and control variables (e.g. firm size) and suggest that statements about the importance of mode choice for subsidiary performance may be premature. Methodologically, this study identifies critical issues with regard to the interpretation of interactions and the entry mode choice set. Research limitations/implications: This study limits itself to studying the direction of relationships and does not analyze effect sizes. Further, future research may benefit from broadening the entry mode choice by extending

  8. Automated Identification of MHD Mode Bifurcation and Locking in Tokamaks

    Science.gov (United States)

    Riquezes, J. D.; Sabbagh, S. A.; Park, Y. S.; Bell, R. E.; Morton, L. A.

    2017-10-01

    Disruption avoidance is critical in reactor-scale tokamaks such as ITER to maintain steady plasma operation and avoid damage to device components. A key physical event chain that leads to disruptions is the appearance of rotating MHD modes, their slowing by resonant field drag mechanisms, and their locking. An algorithm has been developed that automatically detects bifurcation of the mode toroidal rotation frequency due to loss of torque balance under resonant braking, and mode locking for a set of shots using spectral decomposition. The present research examines data from NSTX, NSTX-U and KSTAR plasmas which differ significantly in aspect ratio (ranging from A = 1.3 - 3.5). The research aims to examine and compare the effectiveness of different algorithms for toroidal mode number discrimination, such as phase matching and singular value decomposition approaches, and to examine potential differences related to machine aspect ratio (e.g. mode eigenfunction shape variation). Simple theoretical models will be compared to the dynamics found. Main goals are to detect or potentially forecast the event chain early during a discharge. This would serve as a cue to engage active mode control or a controlled plasma shutdown. Supported by US DOE Contracts DE-SC0016614 and DE-AC02-09CH11466.

  9. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.

  10. A Research on Maximum Symbolic Entropy from Intrinsic Mode Function and Its Application in Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhuofei Xu

    2017-01-01

    Full Text Available Empirical mode decomposition (EMD) is a self-adaptive analysis method for nonlinear and nonstationary signals. It has been widely applied to machinery fault diagnosis and structural damage detection. A novel feature, the maximum symbolic entropy of the intrinsic mode functions based on EMD, is proposed in this paper to enhance the recognition ability of EMD. First, a signal is decomposed into a collection of intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal, and the IMFs are then transformed into a series of symbolic sequences with different parameters. Second, it can be found that the entropies of the symbolic IMFs are quite different; however, there is always a maximum value for a certain symbolic IMF. Third, the maximum symbolic entropy is taken as a feature to describe the IMFs of a signal. Finally, the proposed feature is applied to evaluate the effect of maximum symbolic entropy in the fault diagnosis of rolling bearings, and the maximum symbolic entropy is compared with other standard time-domain features in a contrast experiment. Although the maximum symbolic entropy is only a time-domain feature, it can reveal the characteristic information of the signal accurately. It can also be used in other fields related to the EMD method.
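    A condensed version of the proposed feature looks as follows: symbolize each IMF by binning its amplitude, compute the Shannon entropy of each symbolic sequence, and keep the maximum over the IMFs. The sketch assumes PyEMD; the equal-width binning rule and the number of symbols are illustrative stand-ins for the paper's symbolization parameters.

```python
# A small sketch of the maximum-symbolic-entropy feature: each IMF is mapped to a
# symbol sequence by equal-width binning of its amplitude, the Shannon entropy of
# each symbolic IMF is computed, and the maximum over IMFs is kept as the feature.
# The binning rule and parameter values are illustrative assumptions.
import numpy as np
from PyEMD import EMD

def symbolic_entropy(x, n_symbols=4):
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)[1:-1]
    symbols = np.digitize(x, edges)                      # amplitude -> symbol index
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def max_symbolic_entropy(signal, n_symbols=4):
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    return max(symbolic_entropy(imf, n_symbols) for imf in imfs)

# Example: compare the feature for a clean tone and for the same tone with sparse impulses
t = np.linspace(0, 1, 2048)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + (np.random.default_rng(7).random(t.size) < 0.01) * 3.0
print(max_symbolic_entropy(healthy), max_symbolic_entropy(faulty))
```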

  11. Non-Linear Non Stationary Analysis of Two-Dimensional Time-Series Applied to GRACE Data, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovative two-dimensional (2D) empirical mode decomposition (EMD) analysis was applied to NASA's Gravity Recovery and Climate Experiment (GRACE)...

  12. Leakage detection in galvanized iron pipelines using ensemble empirical mode decomposition analysis

    Science.gov (United States)

    Amin, Makeen; Ghazali, M. Fairusham

    2015-05-01

    There are many possible approaches to detecting leaks. Some leaks are simply noticeable when the liquid or water appears on the surface. However, many leaks do not find their way to the surface, and their existence has to be checked by analysis of the fluid flow in the pipeline. The first step is to determine the approximate position of the leak. This can be done by isolating the sections of the mains in turn and noting which section causes a drop in the flow. The next approach is to use sensors to locate leaks; this involves strain gauge pressure transducers and piezoelectric sensors to detect the occurrence of leaks and determine their exact location in the pipeline using specific methods, namely the acoustic leak detection method and the transient method. The objective is to utilize signal processing techniques to analyse leakage in the pipeline. To this end, the EEMD method is applied as the analysis method to collect and analyse the data.

  13. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current works in FC have focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change-points are consistent with a priori known ERN interval. The results show significant connectivities in medial-frontal regions which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs which provides more accurate change-point-detection and state summarization.

  14. Functional traits drive the contribution of solar radiation to leaf litter decomposition among multiple arid-zone species.

    Science.gov (United States)

    Pan, Xu; Song, Yao-Bin; Liu, Guo-Fang; Hu, Yu-Kun; Ye, Xue-Hua; Cornwell, William K; Prinzing, Andreas; Dong, Ming; Cornelissen, Johannes H C

    2015-08-18

    In arid zones, strong solar radiation has important consequences for ecosystem processes. To better understand carbon and nutrient dynamics, it is important to know the contribution of solar radiation to leaf litter decomposition of different arid-zone species. Here we investigated: (1) whether such contribution varies among plant species at given irradiance regime, (2) whether interspecific variation in such contribution correlates with interspecific variation in the decomposition rate under shade; and (3) whether this correlation can be explained by leaf traits. We conducted a factorial experiment to determine the effects of solar radiation and environmental moisture for the mass loss and the decomposition constant k-values of 13 species litters collected in Northern China. The contribution of solar radiation to leaf litter decomposition varied significantly among species. Solar radiation accelerated decomposition in particular in the species that already decompose quickly under shade. Functional traits, notably specific leaf area, might predict the interspecific variation in that contribution. Our results provide the first empirical evidence for how the effect of solar radiation on decomposition varies among multiple species. Thus, the effect of solar radiation on the carbon flux between biosphere and atmosphere may depend on the species composition of the vegetation.

  15. Raman intensity and vibrational modes of armchair CNTs

    Science.gov (United States)

    Hur, Jaewoong; Stuart, Steven J.

    2017-07-01

    Raman intensity changes and frequency patterns have been studied for various armchair (n, n) nanotubes to understand how the bond polarizability varies with changing diameters, lengths, and numbers of atoms in the (n, n) tubes. The Raman intensity trends of the (n, n) tubes are validated against those of Cn isomers. For the frequency trends, similar frequency patterns and inward frequency shifts of the (n, n) tubes are characterized. The VDOS trends of the (n, n) tubes expressing Raman modes are also interpreted. The decomposition of the vibrational modes of the (n, n) tubes into radial, longitudinal, and tangential components is used to recognize the distinct characteristics of the vibrational modes.

  16. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents, as they relate to their function as decomposition agents, are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  17. Radar Measurements of Ocean Surface Waves using Proper Orthogonal Decomposition

    Science.gov (United States)

    2017-03-30

    Only fragments of the abstract survive: a citation to Golinval (2002), "Physical interpretation of the proper orthogonal modes using the singular value decomposition", Journal of Sound and Vibration, 249; a note that the radar returns are complex and contain contributions from the environment (e.g., wind, waves, currents) as well as artifacts associated with electromagnetic (EM) (wave ...); and the observation that, although there is no physical basis/interpretation inherent to the method because it is purely a mathematical tool, there has been an increasing ...

  18. Gas hydrates forming and decomposition conditions analysis

    Directory of Open Access Journals (Sweden)

    А. М. Павленко

    2017-07-01

    Full Text Available The concept of gas hydrates is defined and briefly described; factors that affect the formation and decomposition of the hydrates are reported; and their distribution, structure and the thermodynamic conditions governing gas hydrate formation in gas pipelines are considered. The advantages and disadvantages of the known methods for removing gas hydrate plugs from a pipeline are analyzed, and the need for their further study is established. Besides their negative impact on the gas extraction process, the properties of hydrates suggest the following possible fields of industrial use: obtaining ultrahigh pressures in confined spaces through hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate state; obtaining cold from the heat absorbed during hydrate decomposition; killing an open gas blowout by means of hydrate plugs in the borehole of the gushing well; seawater desalination, based on the ability of hydrates to bind only water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high-temperature fog and clouds by means of hydrates; injection of a water-hydrate emulsion into the productive strata to raise the oil recovery factor; obtaining cold in gas processing to cool the gas, etc.

  19. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  20. On the bi-dimensional variational decomposition applied to nonstationary vibration signals for rolling bearing crack detection in coal cutters

    International Nuclear Information System (INIS)

    Jiang, Yu; Li, Zhixiong; Zhang, Chao; Peng, Z; Hu, Chao

    2016-01-01

    This work aims to detect rolling bearing cracks using a variational approach. An original method that appropriately incorporates bi-dimensional variational mode decomposition (BVMD) into discriminant diffusion maps (DDM) is proposed to analyze the nonstationary vibration signals recorded from the cracked rolling bearings in coal cutters. The advantage of this variational decomposition based diffusion map (VDDM) method in comparison to the current DDM is that the intrinsic vibration mode of the crack can be filtered into a limited bandwidth in the frequency domain with an estimated central frequency, thus discarding the interference signal components in the vibration signals and significantly improving the crack detection performance. In addition, the VDDM is able to simultaneously process two-channel sensor signals to reduce information leakage. Experimental validation using rolling bearing crack vibration signals demonstrates that the VDDM separated the raw signals into four intrinsic modes, including one roller vibration mode, one roller cage vibration mode, one inner race vibration mode, and one outer race vibration mode. Hence, reliable fault features were extracted from the outer race vibration mode, and satisfactory crack identification performance was achieved. The comparison between the proposed VDDM and existing approaches indicated that the VDDM method was more efficient and reliable for crack detection in coal cutter rolling bearings. As an effective catalyst for rolling bearing crack detection, this newly proposed method is useful for practical applications. (paper)

  1. Decomposition of toxicity emission changes on the demand and supply sides: empirical study of the US industrial sector

    Science.gov (United States)

    Fujii, Hidemichi; Okamoto, Shunsuke; Kagawa, Shigemi; Managi, Shunsuke

    2017-12-01

    This study investigated the changes in the toxicity of chemical emissions from the US industrial sector over the 1998-2009 period. Specifically, we employed a multiregional input-output analysis framework and integrated a supply-side index decomposition analysis (IDA) with a demand-side structural decomposition analysis (SDA) to clarify the main drivers of changes in the toxicity of production- and consumption-based chemical emissions. The results showed that toxic emissions from the US industrial sector decreased by 83% over the studied period because of pollution abatement efforts adopted by US industries. A variety of pollution abatement efforts were used by different industries, and cleaner production in the mining sector and the use of alternative materials in the manufacture of transportation equipment represented the most important efforts.

  2. Radiation-induced decomposition of small amounts of trichloroethylene in drinking water

    International Nuclear Information System (INIS)

    Proksch, E.; Gehringer, P.; Szinovatz, W.; Eschweiler, H.

    1989-01-01

    Solutions of 10 ppm trichloroethylene in air-saturated drinking waters are decomposed by γ radiation with initial G-values, G0, around 3-5 molecules per 100 eV. At lower concentrations, the G0-values decrease with decreasing trichloroethylene concentration and with increasing amounts of inorganic (especially HCO3-) and organic solutes. From the results, a semi-empirical formula is derived which allows an estimation of G0-values for the trichloroethylene decomposition in drinking waters of given composition. (author)

  3. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    Science.gov (United States)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on noise-assisted approach, ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, and thus a relatively large number of ensemble trials are required to eliminate the residual noise. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different number of PFs, making it difficult to take ensemble mean. In this paper, a novel method is proposed called complete ensemble local mean decomposition with adaptive noise (CELMDAN) to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and the obtained residue is used as input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison to ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults for rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.

  4. A data-driven decomposition approach to model aerodynamic forces on flapping airfoils

    Science.gov (United States)

    Raiola, Marco; Discetti, Stefano; Ianiro, Andrea

    2017-11-01

    In this work, we exploit a data-driven decomposition of experimental data from a flapping airfoil experiment with the aim of isolating the main contributions to the aerodynamic force and obtaining a phenomenological model. Experiments are carried out on a NACA 0012 airfoil in forward flight with both heaving and pitching motion. Velocity measurements of the near field are carried out with Planar PIV while force measurements are performed with a load cell. The phase-averaged velocity fields are transformed into the wing-fixed reference frame, allowing for a description of the field in a domain with fixed boundaries. The decomposition of the flow field is performed by means of the POD applied on the velocity fluctuations and then extended to the phase-averaged force data by means of the Extended POD approach. This choice is justified by the simple consideration that aerodynamic forces determine the largest contributions to the energetic balance in the flow field. Only the first 6 modes have a relevant contribution to the force. A clear relationship can be drawn between the force and the flow field modes. Moreover, the force modes are closely related (yet slightly different) to the contributions of the classic potential models in literature, allowing for their correction. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P.
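
    A hedged sketch of the snapshot-POD / extended-POD bookkeeping described above, with synthetic data standing in for the PIV fields and the load-cell signal; it is not the authors' processing chain, only the generic construction.

```python
import numpy as np

# `snapshots` holds phase-averaged velocity-fluctuation fields flattened to vectors
# (columns = phases); `force` is the phase-averaged aerodynamic force. Both synthetic.
rng = np.random.default_rng(1)
n_points, n_phases = 500, 64
snapshots = rng.normal(size=(n_points, n_phases))
force = rng.normal(size=n_phases)

# POD via thin SVD of the fluctuation matrix.
U, S, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                         full_matrices=False)
temporal_coeffs = np.diag(S) @ Vt            # a_i(t): one row per POD mode

# Extended POD: project the force signal onto each mode's temporal coefficient.
force_modes = temporal_coeffs @ force / np.linalg.norm(temporal_coeffs, axis=1) ** 2
energy = S ** 2 / np.sum(S ** 2)
for i in range(6):
    print(f"mode {i}: energy fraction {energy[i]:.3f}, force coefficient {force_modes[i]:+.3f}")
```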

  5. Protein-Ligand Empirical Interaction Components for Virtual Screening.

    Science.gov (United States)

    Yan, Yuna; Wang, Weijun; Sun, Zhaoxi; Zhang, John Z H; Ji, Changge

    2017-08-28

    A major shortcoming of empirical scoring functions is that they often fail to predict binding affinity properly. Removing false positives of docking results is one of the most challenging works in structure-based virtual screening. Postdocking filters, making use of all kinds of experimental structure and activity information, may help in solving the issue. We describe a new method based on detailed protein-ligand interaction decomposition and machine learning. Protein-ligand empirical interaction components (PLEIC) are used as descriptors for support vector machine learning to develop a classification model (PLEIC-SVM) to discriminate false positives from true positives. Experimentally derived activity information is used for model training. An extensive benchmark study on 36 diverse data sets from the DUD-E database has been performed to evaluate the performance of the new method. The results show that the new method performs much better than standard empirical scoring functions in structure-based virtual screening. The trained PLEIC-SVM model is able to capture important interaction patterns between ligand and protein residues for one specific target, which is helpful in discarding false positives in postdocking filtering.
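
    A schematic of the classification step only (the PLEIC descriptor construction itself is not reproduced here): train a support-vector classifier on per-pose interaction-component features and score held-out poses. The feature matrix and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Rows = docked poses, columns = per-residue interaction-energy components
# (synthetic here); labels: 1 = experimentally active ligand, 0 = decoy.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("ROC AUC on held-out poses:", round(roc_auc_score(y_te, scores), 3))
```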

  6. A four-stage hybrid model for hydrological time series forecasting.

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  7. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  8. Improving the Remote Sensing Retrieval of Phytoplankton Functional Types (PFT) Using Empirical Orthogonal Functions: A Case Study in a Coastal Upwelling Region

    Directory of Open Access Journals (Sweden)

    Marco Correa-Ramirez

    2018-03-01

    Full Text Available An approach that improves the spectral-based PHYSAT method for identifying phytoplankton functional types (PFT) in satellite ocean-color imagery is developed and applied to one case study. This new approach, called PHYSTWO, relies on the assumption that the dominant effect of chlorophyll-a (Chl-a) in the normalized water-leaving radiance (nLw) spectrum can be effectively isolated from the signal of accessory pigment biomarkers of different PFT by using Empirical Orthogonal Function (EOF) decomposition. PHYSTWO operates in the dimensionless plane composed of the first two EOF modes generated through the decomposition of a space–nLw matrix at seven wavelengths (412, 443, 469, 488, 531, 547, and 555 nm). PFT determination is performed using orthogonal models derived from the acceptable ranges of anomalies proposed by PHYSAT but adjusted with the available regional and global data. In applying PHYSTWO to study phytoplankton community structures in the coastal upwelling system off central Chile, we find that this method increases the accuracy of PFT identification, extends the application of this tool to waters with high Chl-a concentration, and significantly decreases (~60%) the undetermined retrievals when compared with PHYSAT. The improved accuracy of PHYSTWO and its applicability for the identification of new PFT are discussed.
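
    A hedged sketch of the two-mode EOF plane idea, assuming only standard SVD-based EOF analysis (not the PHYSTWO code): decompose synthetic nLw spectra at the seven listed wavelengths and place each pixel in the (EOF1, EOF2) plane.

```python
import numpy as np

# Rows = pixels, columns = the seven nLw wavelengths; the spectra are synthetic.
rng = np.random.default_rng(2)
wavelengths = [412, 443, 469, 488, 531, 547, 555]
nlw = np.abs(rng.normal(loc=1.0, scale=0.3, size=(1000, len(wavelengths))))

anomalies = nlw - nlw.mean(axis=0)            # remove the mean spectrum
U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)
scores = anomalies @ Vt[:2].T                 # pixel coordinates in the (EOF1, EOF2) plane

# PFT assignment would compare these coordinates against reference anomaly
# ranges (not reproduced here).
print("explained variance of the first two modes:",
      np.round(S[:2] ** 2 / np.sum(S ** 2), 3))
```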

  9. Proper orthogonal decomposition applied to laminar thermal convection in a vertical two plate channel

    International Nuclear Information System (INIS)

    Alvarez-Herrera, C; Murillo-Ramírez, J G; Pérez-Reyes, I; Moreno-Hernández, D

    2015-01-01

    This work reports the thermal convection with imposed shear flow in a thin two-plate channel. Flow structures are investigated under heating asymmetric conditions and different laminar flow conditions. The dynamics of heat flow and the energy distribution were determined by visualization with the Schlieren technique and application of the proper orthogonal decomposition (POD) method. The obtained results from the POD mode analysis revealed that for some flow conditions the heat transfer is related to the energy of the POD modes and their characteristic numbers. It was possible to detect periodic motion in the two-plate channel flow from the POD mode analysis. It was also found that when the energy is distributed among many POD modes, the fluid flow is disorganized and unsteady. (paper)

  10. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. The memory demand of MC simulation is analyzed first; it concerns geometry data, nuclear cross-section data, particle data, and tally data. Tally data dominates the memory cost and should therefore be the focus of any solution to the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into different sub-domains that are dealt with separately, and rules are established to ensure that the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, the memory size is greatly reduced.

  11. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyl) have been measured in the vapour and liquid phases at 460 °C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 °C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen, 73 ± 2 kcal/mol; benzene, 76 ± 2 kcal/mol; meta-triphenyl, 53 ± 2 kcal/mol; biphenyl decomposition, 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the determination of the aromatic C-H bonds by infrared spectrophotometry. As a result the existence in tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)
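
    Activation energies of this kind are normally extracted from an Arrhenius fit of rate constants measured at several temperatures, ln k = ln A - Ea/(RT); the sketch below shows that calculation on invented rates, not on the report's data.

```python
import numpy as np

R = 1.987e-3                                         # gas constant, kcal / (mol K)
T_celsius = np.array([400, 420, 440, 460])           # illustrative temperatures
k_obs = np.array([1.2e-8, 4.1e-8, 1.3e-7, 3.8e-7])   # illustrative rate constants

# Linear fit of ln k against 1/T; the slope is -Ea / R.
inv_T = 1.0 / (T_celsius + 273.15)
slope, intercept = np.polyfit(inv_T, np.log(k_obs), 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea:.1f} kcal/mol")
```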

  12. Fault diagnosis of rotating machinery using an improved HHT based on EEMD and sensitive IMFs

    International Nuclear Information System (INIS)

    Lei, Yaguo; Zuo, Ming J

    2009-01-01

    A Hilbert–Huang transform (HHT) is a time–frequency technique and has been widely applied to analyzing vibration signals in the field of fault diagnosis of rotating machinery. It analyzes the vibration signals using intrinsic mode functions (IMFs) extracted using empirical mode decomposition (EMD). However, EMD sometimes cannot reveal the signal characteristics accurately because of the problem of mode mixing. Ensemble empirical mode decomposition (EEMD) was developed recently to alleviate this problem. The IMFs generated by EEMD have different sensitivity to faults. Some IMFs are sensitive and closely related to the faults but others are irrelevant. To enhance the accuracy of the HHT in fault diagnosis of rotating machinery, an improved HHT based on EEMD and sensitive IMFs is proposed in this paper. Simulated signals demonstrate the effectiveness of the improved HHT in diagnosing the faults of rotating machinery. Finally, the improved HHT is applied to diagnosing an early rub-impact fault of a heavy oil catalytic cracking machine set, and the application results prove that the improved HHT is superior to the HHT based on all IMFs of EMD
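
    A hedged sketch of the EEMD-plus-Hilbert step on a synthetic signal, assuming the PyEMD package ("EMD-signal") provides the EEMD class used here; the sensitive-IMF selection and the rest of the improved HHT machinery are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EEMD        # assumes the "EMD-signal" (PyEMD) package is installed

# Two-tone test signal with additive noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) \
         + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Ensemble EMD, then the Hilbert transform of each IMF gives instantaneous
# amplitude and frequency (the Hilbert-Huang representation).
imfs = EEMD(trials=100).eemd(signal, t)
for i, imf in enumerate(imfs):
    analytic = hilbert(imf)
    inst_amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
    print(f"IMF {i}: mean frequency {inst_freq.mean():7.1f} Hz, "
          f"mean amplitude {inst_amp.mean():.3f}")
```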

  13. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
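
    A much-simplified sketch of the empirical interpolation idea, in its generic discrete (DEIM-style) form rather than the paper's multiscale/GMsFEM setting: build a basis from snapshots of a nonlinear function, greedily select interpolation points, and reconstruct a new snapshot from values at those points only. The snapshot family is invented.

```python
import numpy as np

def deim_indices(U):
    """Greedy interpolation indices for a basis U of nonlinear-term snapshots."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        r = U[:, l] - U[:, :l] @ c                 # residual of the current interpolant
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Empirical basis from snapshots of a nonlinear function f(x; mu) = exp(mu * x).
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.exp(mu * x) for mu in np.linspace(0.5, 3.0, 40)])
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :6]                                       # keep six empirical modes

P = deim_indices(U)                                # six interpolation points
# Approximate a new nonlinear snapshot from its values at the selected points only.
f_new = np.exp(2.2 * x)
coeffs = np.linalg.solve(U[P, :], f_new[P])
print("max interpolation error:", np.max(np.abs(U @ coeffs - f_new)))
```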

  14. Laser Covariance Vibrometry for Unsymmetrical Mode Detection

    Science.gov (United States)

    2006-09-01

    Only fragments of the abstract survive: an abbreviation list (CMIF, Complex Modal Indicator Function; FDAC, Frequency Domain Acceptance Criterion; OEM's, Original ...), a statement that the complex modal indicator function (CMIF) [23] is a set of singular value decomposition response functions used together with the frequency domain acceptance criterion, and a reference to Phillips, Allyn W., Randall J. Allemang, and William A. Fladung, "The Complex Mode Indicator Function (CMIF) as a Parameter ..."

  15. Nonlinear Dynamical Modes as a Basis for Short-Term Forecast of Climate Variability

    Science.gov (United States)

    Feigin, A. M.; Mukhin, D.; Gavrilov, A.; Seleznev, A.; Loskutov, E.

    2017-12-01

    We study the ability of data-driven stochastic models constructed by nonlinear dynamical decomposition of spatially distributed data to give quantitative (short-term) forecasts of climate characteristics. We compare two data processing techniques: (i) the widely used empirical orthogonal function approach, and (ii) the nonlinear dynamical modes (NDMs) framework [1,2]. We also compare two kinds of prognostic models: (i) a traditional autoregressive (linear) model and (ii) a model in the form of a random ("stochastic") nonlinear dynamical system [3]. We apply all combinations of the above data-mining techniques and model kinds to short-term forecasts of climate indices based on sea surface temperature (SST) data. We use the NOAA_ERSST_V4 dataset (monthly SST with 2° × 2° spatial resolution) covering the tropical belt and starting from the year 1960. We demonstrate that the NDM-based nonlinear model shows better prediction skill than EOF-based linear and nonlinear models. Finally we discuss the capability of the NDM-based nonlinear model for long-term (decadal) prediction of climate variability. [1] D. Mukhin, A. Gavrilov, E. Loskutov, A. Feigin, J. Kurths, 2015: Principal nonlinear dynamical modes of climate variability, Scientific Reports, 5, 15510; doi: 10.1038/srep15510. [2] A. Gavrilov, D. Mukhin, E. Loskutov, E. Volodin, A. Feigin, J. Kurths, 2016: Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101. [3] Ya. Molkov, D. Mukhin, E. Loskutov, A. Feigin, 2012: Random dynamical models from time series. Phys. Rev. E, 85(3).

  16. Travel Mode Use, Travel Mode Shift and Subjective Well-Being: Overview of Theories, Empirical Findings and Policy Implications

    NARCIS (Netherlands)

    Ettema, D.F.; Friman, M.; Gärling, Tommy; Olsson, Lars

    2016-01-01

    This chapter discusses how travel by different travel modes is related to primarily subjective well-being but also to health or physical well-being. Studies carried out in different geographic contexts consistently show that satisfaction with active travel modes is higher than travel by car and

  17. Neutron small-angle scattering study of phase decomposition in Au-Pt

    International Nuclear Information System (INIS)

    Singhal, S.P.; Herman, H.

    1978-01-01

    Isothermal decomposition of a Au-60 at.% Pt alloy, quenched from the solid as well as the liquid state, has been studied with the D11 neutron small-angle scattering spectrometer at ILL, Grenoble. An incident neutron wavelength of 6.7 Å was used and measurements were carried out in the range of scattering vector (β = 4π sin θ/λ) from 2.8×10⁻² to 21×10⁻² Å⁻¹. The preliminary results indicate that decomposition of this alloy at 550 °C takes place by a spinodal mode, although deviations were observed from linear spinodal theory, even at very early times. Slower aging kinetics were observed in the liquid-quenched alloy as compared with the solid-quenched alloy. Liquid quenching is more efficient in suppressing quench clustering than is solid quenching. However, liquid quenching yields an extremely fine-grained material, which thereby enhances discontinuous precipitation at grain boundaries, competing with decomposition in the bulk. A Rundman-Hilliard analysis was used for the early stages of the spinodal reaction to obtain an interdiffusion coefficient of the order of 10⁻¹⁶ cm² s⁻¹ at 550 °C for the solid-quenched alloy. (Auth.)

  18. Decomposition of thermally unstable substances in film evaporators

    Energy Technology Data Exchange (ETDEWEB)

    Matz, G

    1982-10-01

    Film evaporators are widely considered to permit particularly gentle evaporation of heat-sensitive substances. Nevertheless, decomposition of such substances still occurs to an extent that depends upon the design and operation of the evaporator. In the following a distinction is made between evaporators whose films are not generated mechanically, namely the long tube evaporator (LTE) or climbing film evaporator, the falling film evaporator (FFE) and the multiple phase helical tube (MPT) or helical coil evaporators (TFE). Figs 1 and 2 illustrate the mode of operation. A theory of the decomposition of thermally unstable substances in these evaporators is briefly outlined and compared with measurements. Such a theory cannot be developed without experimental checks; on the other hand, measurements urgently need a theoretical basis, if only to establish what actually has to be measured. All experiments are made with a system of readily adjustable decomposability, namely aqueous solutions of saccharose; the thermal inversion of this compound can be controlled by the addition of various amounts or concentrations of hydrochloric acid. In the absence of catalysis by hydrochloric acid, the decomposition rates within the temperature interval studied (60-130 °C) are so low that the experiments would take much too long and the determination of the concentration differences (generally by polarimetric methods) would be very complicated. Such slight effects would also be very unfavourable for comparison with theory. (orig.)

  19. Three-photon polarization ququarts: polarization, entanglement and Schmidt decompositions

    International Nuclear Information System (INIS)

    Fedorov, M V; Miklin, N I

    2015-01-01

    We consider polarization states of three photons, propagating collinearly and having equal given frequencies but with arbitrary distributed horizontal or vertical polarizations of photons. A general form of such states is a superposition of four basic three-photon polarization modes, to be referred to as the three-photon polarization ququarts (TPPQ). All such states can be considered as consisting of one- and two-photon parts, which can be entangled with each other. The degrees of entanglement and polarization, as well as the Schmidt decomposition and Stokes vectors of TPPQ are found and discussed. (paper)
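
    A small sketch of a Schmidt decomposition for the one-photon | two-photon split of a three-photon polarization state, using a generic SVD; the amplitudes are an arbitrary example, not a state from the paper.

```python
import numpy as np

# Amplitudes in the basis |HHH>, |HHV>, |HVH>, |HVV>, |VHH>, |VHV>, |VVH>, |VVV>.
amps = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=complex)
amps /= np.linalg.norm(amps)

# Reshape into a 2 x 4 matrix: rows = photon-1 basis (H, V), columns = photons 2-3.
M = amps.reshape(2, 4)
U, schmidt, Vh = np.linalg.svd(M, full_matrices=False)

print("Schmidt coefficients:", np.round(schmidt, 4))
p = schmidt ** 2
p = p[p > 1e-12]
print("entanglement entropy of the 1|23 split:", -np.sum(p * np.log2(p)))
```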

  20. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase of the geomagnetic field data volume available due to many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, required at some point the appropriate qualitative transition in the empirical modeling tools. Classical empirical models, such as T96 and T02, used few custom-tailored modules to represent major magnetospheric current systems and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions as well as by the advanced data-mining algorithms searching for events with the global activity parameters, such as the Sym-H index, similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with the individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires the corresponding transition in the use of the new empirical geomagnetic field models. It becomes more similar to runs-on-request offered by the Community Coordinated Modeling Center for many first principles MHD and kinetic codes. To provide this mode of operation for the TS07D model a new web-based modeling tool has been created and tested at the JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, where the magnetospheric part of the macro-parameters involved in the data-binning process (Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.

  1. Koopman decomposition of Burgers' equation: What can we learn?

    Science.gov (United States)

    Page, Jacob; Kerswell, Rich

    2017-11-01

    Burgers' equation is a well known 1D model of the Navier-Stokes equations and admits a selection of equilibria and travelling wave solutions. A series of Burgers' trajectories are examined with Dynamic Mode Decomposition (DMD) to probe the capability of the method to extract coherent structures from ``run-down'' simulations. The performance of the method depends critically on the choice of observable. We use the Cole-Hopf transformation to derive an observable which has linear, autonomous dynamics and for which the DMD modes overlap exactly with Koopman modes. This observable can accurately predict the flow evolution beyond the time window of the data used in the DMD, and in that sense outperforms other observables motivated by the nonlinearity in the governing equation. The linearizing observable also allows us to make informed decisions about often ambiguous choices in nonlinear problems, such as rank truncation and snapshot spacing. A number of rules of thumb for connecting DMD with the Koopman operator for nonlinear PDEs are distilled from the results. Related problems in low Reynolds number fluid turbulence are also discussed.
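
    A minimal sketch of standard (exact) DMD on synthetic travelling-wave snapshots; it illustrates the generic algorithm, not the Cole-Hopf-transformed observable discussed above.

```python
import numpy as np

def dmd(X, Y, rank):
    """Exact DMD: X, Y are snapshot matrices with Y one step ahead of X."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / S           # projected linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / S) @ W       # exact DMD modes
    return eigvals, modes

# Toy data: two travelling waves sampled on a periodic grid (exact rank 4).
dt = 0.05
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
t = np.arange(0, 100) * dt
field = np.array([np.sin(x - 1.0 * tk) + 0.5 * np.sin(3 * (x - 0.3 * tk)) for tk in t]).T

eigvals, modes = dmd(field[:, :-1], field[:, 1:], rank=4)
print("DMD eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
print("continuous-time frequencies [rad/s]:", np.round(np.angle(eigvals) / dt, 3))
```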

  2. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve

  3. Decoding Mode-mixing in Black-hole Merger Ringdown

    Science.gov (United States)

    Kelly, Bernard J.; Baker, John G.

    2013-01-01

    Optimal extraction of information from gravitational-wave observations of binary black-hole coalescences requires detailed knowledge of the waveforms. Current approaches for representing waveform information are based on spin-weighted spherical harmonic decomposition. Higher-order harmonic modes carrying a few percent of the total power output near merger can supply information critical to determining intrinsic and extrinsic parameters of the binary. One obstacle to constructing a full multi-mode template of merger waveforms is the apparently complicated behavior of some of these modes; instead of settling down to a simple quasinormal frequency with decaying amplitude, some |m| = modes show periodic bumps characteristic of mode-mixing. We analyze the strongest of these modes, the anomalous (3, 2) harmonic mode, measured in a set of binary black-hole merger waveform simulations, and show that to leading order the bumps are due to a mismatch between the spherical harmonic basis used for extraction in 3D numerical relativity simulations and the spheroidal harmonics adapted to the perturbation theory of Kerr black holes. Other causes of mode-mixing arising from gauge ambiguities and physical properties of the quasinormal ringdown modes are also considered and found to be small for the waveforms studied here.

  4. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic, FeS2) has been investigated using X-ray diffraction and 57Fe Moessbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  5. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.

  6. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Seshadhri, Comandur [The Ohio State Univ., Columbus, OH (United States); Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sariyuce, Ahmet Erdem [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States)

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization, to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
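
    The nucleus decomposition itself is not reproduced here, but as a hedged illustration of the simpler hierarchy it generalizes, the following uses networkx's k-core routines on a small built-in graph.

```python
import networkx as nx

# k-core decomposition of Zachary's karate club graph: each node's core number
# is the largest k such that the node belongs to a subgraph of minimum degree k.
G = nx.karate_club_graph()
core_number = nx.core_number(G)
max_k = max(core_number.values())

# Report how the nested cores shrink as k grows (a containment hierarchy).
for k in range(1, max_k + 1):
    nodes_k = [n for n, c in core_number.items() if c >= k]
    print(f"{k}-core: {len(nodes_k)} nodes")

densest_core = nx.k_core(G, k=max_k)       # the innermost, densest shell
print("innermost core nodes:", sorted(densest_core.nodes()))
```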

  7. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  8. Analysis of Optical Fiber Complex Propagation Matrix on the Basis of Vortex Modes

    DEFF Research Database (Denmark)

    Lyubopytov, Vladimir S.; Tatarczak, Anna; Lu, Xiaofeng

    2016-01-01

    We propose and experimentally demonstrate a novel method for reconstruction of the complex propagation matrix of optical fibers supporting propagation of multiple vortex modes. This method is based on the azimuthal decomposition approach and allows the complex matrix elements to be determined by direct calculations. We apply the proposed method to demonstrate the feasibility of optical compensation for coupling between vortex modes in optical fiber.

  9. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, Hidekazu; Jørgensen, Sten Bay

    2012-01-01

    The specification of supervision and control tasks in complex processes requires definition of plant states on various levels of abstraction related to plant operation in start-up, normal operation and shut-down. Modes of plant operation are often specified in relation to a plant decomposition in...... for the Japanese fast breeder reactor plant MONJU....

  10. Multiplexing of spatial modes in the mid-IR region

    Science.gov (United States)

    Gailele, Lucas; Maweza, Loyiso; Dudley, Angela; Ndagano, Bienvenu; Rosales-Guzman, Carmelo; Forbes, Andrew

    2017-02-01

    Traditional optical communication systems optimize multiplexing in polarization and wavelength, transmitted both in fiber and in free space, to attain high-bandwidth data communication. Yet despite these technologies, we are expected to reach a bandwidth ceiling in the near future. Communication using orbital angular momentum (OAM) carrying modes offers infinite-dimensional states, providing a means to increase link capacity by multiplexing spatially overlapping modes in both the azimuthal and radial degrees of freedom. OAM modes are multiplexed and de-multiplexed by the use of spatial light modulators (SLMs). Complex amplitude modulation is applied to the laser beam's phase and amplitude to generate Laguerre-Gaussian (LG) modes. Modal decomposition is employed to detect these modes, exploiting their orthogonality as they propagate in space. We demonstrate data transfer by sending images as a proof of concept in a lab-based scheme. We demonstrate the creation and detection of OAM modes in the mid-IR region as a precursor to a mid-IR free-space communication link.

  11. Mode analysis with a spatial light modulator as a correlation filter

    CSIR Research Space (South Africa)

    Flamm, D

    2012-07-01

    Full Text Available the initial field is decomposed. This approach allows any function to be encoded and refreshed in real time (60 Hz). We implement a decomposition of guided modes propagating in optical fibers and show that we can successfully reconstruct the observed field...

  12. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...

  13. Investigation by Raman Spectroscopy of the Decomposition Process of HKUST-1 upon Exposure to Air

    Directory of Open Access Journals (Sweden)

    Michela Todaro

    2016-01-01

    Full Text Available We report an experimental investigation by Raman spectroscopy of the decomposition process of the Metal-Organic Framework (MOF) HKUST-1 upon exposure to air moisture (T = 300 K, 70% relative humidity). The data collected here are compared with the indications obtained from a model of the decomposition process of this material proposed in the literature. In agreement with that model, the reported Raman measurements indicate that for exposure times longer than 20 days relevant irreversible processes take place, related to the hydrolysis of Cu-O bonds. These processes induce small but detectable variations in the peak positions and intensities of the main Raman bands of the material, which can be assigned to Cu-Cu, Cu-O, and O-C-O stretching modes. The critical analysis of these changes has permitted us to obtain a more detailed description of the decomposition process taking place in HKUST-1 upon interaction with moisture. Furthermore, the reported Raman data give further strong support to the recently proposed decomposition model of HKUST-1, contributing significantly to the development of a complete picture of this considerably deleterious effect.

  14. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    Full Text Available A mathematical model and a method for the problem of optimization of aggregation and of sequential-parallel execution modes of intersecting operation sets are proposed. The proposed method is based on the two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of operations are optimized for a fixed version of aggregation.

  15. Infrared absorption study of ammonium uranates and uranium oxide powders formed during their thermal decomposition

    International Nuclear Information System (INIS)

    Rofail, N.H.; ELfekey, S.A.

    1992-01-01

    Ammonium uranates (AU) were precipitated from a nuclear-pure uranyl nitrate solution using different precipitating agents. IR spectra of the obtained uranates and oxides formed during their thermal decomposition have been studied. The results indicated that the precipitating agent, mode of stirring, washing and calcining temperature are important factors for a specific oxide formation. 4 figs., 3 tabs.

  16. An Improved Method Based on CEEMD for Fault Diagnosis of Rolling Bearing

    Directory of Open Access Journals (Sweden)

    Meijiao Li

    2014-11-01

    Full Text Available In order to improve the effectiveness of identifying rolling bearing faults at an early stage, the present paper proposes a method that combines the so-called complementary ensemble empirical mode decomposition (CEEMD) method with correlation theory for fault diagnosis of rolling element bearings. The cross-correlation coefficient between the original signal and each intrinsic mode function (IMF) is calculated in order to reduce noise and select the effective IMFs. Using the present method, a rolling bearing fault experiment with vibration signals measured by acceleration sensors was carried out, and inner race and outer race defects of different severities at varying rotating speeds were analyzed. The proposed method was compared with several empirical mode decomposition (EMD) algorithms to verify its effectiveness. Experimental results showed that the proposed method was capable of detecting the bearing faults, and of detecting them at an early stage. It has higher computational efficiency and is capable of overcoming mode mixing and aliasing. Therefore, the proposed method is more suitable for rolling bearing diagnosis.
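
    A hedged sketch of the cross-correlation screening step on a synthetic vibration signal, with PyEMD's CEEMDAN class (assumed available from the "EMD-signal" package) standing in for CEEMD; the cutoff value is illustrative.

```python
import numpy as np
from PyEMD import CEEMDAN   # used here as a stand-in for CEEMD

# Synthetic amplitude-modulated "bearing" tone buried in noise.
fs = 2000.0
t = np.arange(0, 0.5, 1.0 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 157 * t) * (1 + 0.5 * np.sin(2 * np.pi * 25 * t)) \
         + 0.8 * rng.normal(size=t.size)

# Decompose, keep only IMFs well correlated with the raw signal, and rebuild.
imfs = CEEMDAN(trials=50).ceemdan(signal, t)
corr = np.array([abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs])
keep = corr > 0.2
denoised = imfs[keep].sum(axis=0)
print("kept IMFs:", np.where(keep)[0], "correlations:", np.round(corr, 2))
```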

  17. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    Science.gov (United States)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed the mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations with which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model is more accurate to capture the major features of global three dimensional atmospheric motions, compared to the traditional definitions of Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realized the decomposition of global atmospheric circulation using three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitudes and low latitudes circulations.

  18. Multifractal Detrended Fluctuation Analysis of Regional Precipitation Sequences Based on the CEEMDAN-WPT

    Science.gov (United States)

    Liu, Dong; Cheng, Chen; Fu, Qiang; Liu, Chunlei; Li, Mo; Faiz, Muhammad Abrar; Li, Tianxiao; Khan, Muhammad Imran; Cui, Song

    2018-03-01

    In this paper, the complete ensemble empirical mode decomposition with the adaptive noise (CEEMDAN) algorithm is introduced into the complexity research of precipitation systems to improve the traditional complexity measure method specific to the mode mixing of the Empirical Mode Decomposition (EMD) and incomplete decomposition of the ensemble empirical mode decomposition (EEMD). We combined the CEEMDAN with the wavelet packet transform (WPT) and multifractal detrended fluctuation analysis (MF-DFA) to create the CEEMDAN-WPT-MFDFA, and used it to measure the complexity of the monthly precipitation sequence of 12 sub-regions in Harbin, Heilongjiang Province, China. The results show that there are significant differences in the monthly precipitation complexity of each sub-region in Harbin. The complexity of the northwest area of Harbin is the lowest and its predictability is the best. The complexity and predictability of the middle and Midwest areas of Harbin are about average. The complexity of the southeast area of Harbin is higher than that of the northwest, middle, and Midwest areas of Harbin and its predictability is worse. The complexity of Shuangcheng is the highest and its predictability is the worst of all the studied sub-regions. We used terrain and human activity as factors to analyze the causes of the complexity of the local precipitation. The results showed that the correlations between the precipitation complexity and terrain are obvious, and the correlations between the precipitation complexity and human influence factors vary. The distribution of the precipitation complexity in this area may be generated by the superposition effect of human activities and natural factors such as terrain, general atmospheric circulation, land and sea location, and ocean currents. To evaluate the stability of the algorithm, the CEEMDAN-WPT-MFDFA was compared with the equal probability coarse graining LZC algorithm, fuzzy entropy, and wavelet entropy. The results show

  19. Vibrational Order, Structural Properties, and Optical Gap of ZnO Nanostructures Sintered through Thermal Decomposition

    Directory of Open Access Journals (Sweden)

    Alejandra Londono-Calderon

    2014-01-01

    The sintering of different ZnO nanostructures by the thermal decomposition of zinc acetate is reported. Morphological changes from nanorods to nanoparticles are exhibited with the increase of the decomposition temperature from 300 to 500°C. The material showed a loss in crystalline order with the increase in temperature, which is correlated to the loss of oxygen due to the low heating rate used. Nanoparticles have a greater vibrational freedom than nanorods, which is demonstrated by the rise of the main Raman mode E2(high) during the transformation. The energy band gap of the nanostructured material is lower than that of bulk ZnO and decreases with the rise in temperature.

  20. Influence of mode competition on beam quality of fiber amplifier

    International Nuclear Information System (INIS)

    Xiao Qi-Rong; Yan Ping; Sun Jun-Yi; Chen Xiao; Ren Hai-Cui; Gong Ma-Li

    2014-01-01

    Theoretical and experimental studies of the influence of mode competition on the output beam quality of fiber amplifiers are presented. Rate equations and a modal decomposition method are used in the theoretical model. In the experiment, the output beam-quality factor of a fiber amplifier based on a Yb-doped double-clad large-mode-area fiber is measured as a function of the seed beam quality and the pump power of the amplifier. The experimental results are consistent with the theoretical analysis. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  1. A technique for plasma velocity-space cross-correlation

    Science.gov (United States)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
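
    A minimal sketch of the decomposition step described above: build a Hermitian correlation matrix from multi-channel fluctuation data and extract empirical mode structures with a singular value decomposition. The channel layout and data here are synthetic placeholders, not the experiment's measurements.

```python
# Sketch: empirical mode extraction from a Hermitian velocity-space correlation matrix.
# The data are synthetic placeholders standing in for fluctuation measurements
# recorded at several velocity-space channels.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 4000

# Two coherent "modes" plus noise, sampled at each velocity channel.
v = np.linspace(-2, 2, n_channels)
mode1 = np.exp(-v**2)[:, None] * np.cos(0.05 * np.arange(n_samples))
mode2 = (v * np.exp(-v**2))[:, None] * np.sin(0.12 * np.arange(n_samples))
signal = mode1 + 0.5 * mode2 + 0.2 * rng.standard_normal((n_channels, n_samples))

# Hermitian (here real-symmetric) cross-correlation matrix between channels.
corr = signal @ signal.conj().T / n_samples

# SVD of the Hermitian matrix: singular vectors give the empirical eigenmode structure.
U, s, _ = np.linalg.svd(corr)
print("leading singular values:", np.round(s[:4], 3))
leading_mode = U[:, 0]   # mode shape on the (discretised) distribution function
```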

  2. Turbulence time series data hole filling using Karhunen-Loeve and ARIMA methods

    International Nuclear Information System (INIS)

    Chang, M P J L; Nazari, H; Font, C O; Gilbreath, G C; Oh, E

    2007-01-01

    Measurements of optical turbulence time series data using unattended instruments over long time intervals inevitably lead to data drop-outs or degraded signals. We present a comparison of methods, based on Principal Component Analysis (also known as the Karhunen-Loeve decomposition) and ARIMA models, that seek to correct for these event-induced and mechanically-induced signal drop-outs and degradations. We report on the quality of the correction by examining the Intrinsic Mode Functions generated by Empirical Mode Decomposition. The data studied are optical turbulence parameter time series from a commercial long path length optical anemometer/scintillometer, measured over several hundred metres in outdoor environments
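
    One generic way to realise the Karhunen-Loeve (PCA) part of such gap filling is to iterate between filling the missing samples and re-estimating the leading components, as sketched below with synthetic data; this is an illustrative reconstruction, not the authors' implementation, and the ARIMA branch is not shown.

```python
# Generic sketch of PCA/Karhunen-Loeve gap filling for a multichannel time series.
# Missing samples (NaNs) are filled iteratively with a rank-k reconstruction.
import numpy as np

def pca_fill(data, rank=2, n_iter=50):
    """data: (channels, time) array with NaNs marking drop-outs."""
    filled = np.array(data, dtype=float)
    mask = np.isnan(filled)
    filled[mask] = np.nanmean(data)          # crude initial guess
    for _ in range(n_iter):
        mean = filled.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(filled - mean, full_matrices=False)
        approx = mean + (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled[mask] = approx[mask]          # update only the missing entries
    return filled

# Synthetic example: correlated channels with 5% of samples dropped.
rng = np.random.default_rng(2)
t = np.linspace(0, 20, 1000)
clean = np.vstack([np.sin(t), 0.8 * np.sin(t + 0.3), np.cos(0.5 * t)])
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
noisy[rng.random(noisy.shape) < 0.05] = np.nan
restored = pca_fill(noisy, rank=2)
print("max abs reconstruction error:", round(float(np.max(np.abs(restored - clean))), 3))
```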

  3. Transaction cost determinants and ownership-based entry mode choice: a meta-analytical review

    OpenAIRE

    Hongxin Zhao; Yadong Luo; Taewon Suh

    2004-01-01

    Entry mode choice is a critical ingredient of international entry strategies, and has been voluminously examined in the field. The findings, however, are very mixed, especially with respect to transaction-cost-related factors in determining the ownership-based entry mode choice. This study conducted a meta-analysis to quantitatively summarize the literature and empirically generalize more conclusive findings. Based on the 106 effect sizes of 38 empirical studies, the meta-analysis shows that ...

  4. A New Method for Non-linear and Non-stationary Time Series Analysis:
    The Hilbert Spectral Analysis

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    A new method for analysing non-linear and non-stationary data has been developed. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to non-linear and non-stationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Classical non-l...
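
    The Hilbert step can be illustrated in a few lines: given a single IMF, the analytic signal yields an instantaneous amplitude and an instantaneous frequency. The sketch assumes SciPy's hilbert function and uses a synthetic chirp as a stand-in for a real IMF.

```python
# Sketch: instantaneous frequency of a single IMF via the Hilbert transform.
# Uses scipy.signal.hilbert; the chirp below is a stand-in for a real IMF.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.cos(2 * np.pi * (5.0 * t + 2.0 * t**2))   # frequency sweeps 5 -> 13 Hz

analytic = hilbert(imf)                        # analytic signal x + i*H[x]
amplitude = np.abs(analytic)                   # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency in Hz

print(f"frequency runs from ~{inst_freq[50]:.1f} Hz to ~{inst_freq[-50]:.1f} Hz")
```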

  5. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    Czech Academy of Sciences Publication Activity Database

    Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.

    2008-01-01

    Roč. 14, č. 11 (2008), s. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords : climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008

  6. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
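
    As one concrete member of this family of decompositions, a minimal non-negative matrix factorization of a toy data matrix might look as follows; it assumes scikit-learn and is meant only to illustrate the parsimony idea of explaining many observations with a few parts, not to reproduce any analysis from the thesis.

```python
# Minimal illustration of one decomposition method from this family:
# non-negative matrix factorization (NMF) with scikit-learn on a toy matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
# 100 "observations" built from 3 non-negative parts -> low-rank structure.
parts = rng.random((3, 20))
weights = rng.random((100, 3))
X = weights @ parts + 0.01 * rng.random((100, 20))

model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)       # per-observation activations
H = model.components_            # the learned parts
print("reconstruction error:", round(float(model.reconstruction_err_), 4))
```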

  7. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. Performances of the proposed algorithms are evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  8. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys - bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys [ru

  9. Ozone time scale decomposition and trend assessment from surface observations in National Parks of the United States

    Science.gov (United States)

    Mao, H.; McGlynn, D. F.; Wu, Z.; Sive, B. C.

    2017-12-01

    A time scale decomposition technique, the Ensemble Empirical Mode Decomposition (EEMD), has been employed to decompose the time scales in long-term ozone measurement data at 24 US National Park Service sites. Time scales of interest include the annual cycle, variability by large scale climate oscillations, and the long-term trend. The implementation of policy regulations was found to have had a greater effect on sites nearest to urban regions. Ozone daily mean values increased until around the late 1990s followed by decreasing trends during the ensuing decades for sites in the East, southern California, and northwestern Washington. Sites in the Midwest did not experience a reversal of trends from positive to negative until the mid- to late 2000s. The magnitude of the annual amplitude decreased for nine sites and increased for three sites. Stronger decreases in the annual amplitude occurred in the East, with more sites in the East experiencing decreases in annual amplitude than in the West. The date of annual ozone peaks and minimums has changed for 12 sites in total, but those with a shift in peak date did not necessarily have a shift in the trough date. There appeared to be a link between peak dates occurring earlier and a decrease in the annual amplitude. This is likely related to a decrease in ozone titration due to NOx emission reductions. Furthermore, it was found that the shift in the Pacific Decadal Oscillation (PDO) regime from positive to negative in 1998-1999 resulting in an increase in occurrences of La Niña-like conditions had the effect of directing more polluted air masses from East Asia to higher latitudes over North America. This change in PDO regime was likely one main factor causing the increase in ozone concentrations on all time scales at an Alaskan site DENA-HQ.

  10. Towards a paradigm shift in the modeling of soil organic carbon decomposition for earth system models

    Science.gov (United States)

    He, Yujie

    Soils are the largest terrestrial carbon pools and contain approximately 2200 Pg of carbon. Thus, the dynamics of soil carbon plays an important role in the global carbon cycle and climate system. Earth System Models are used to project future interactions between terrestrial ecosystem carbon dynamics and climate. However, these models often predict a wide range of soil carbon responses and their formulations have lagged behind recent soil science advances, omitting key biogeochemical mechanisms. In contrast, recent mechanistically-based biogeochemical models that explicitly account for microbial biomass pools and enzyme kinetics that catalyze soil carbon decomposition produce notably different results and provide a closer match to recent observations. However, a systematic evaluation of the advantages and disadvantages of the microbial models and how they differ from empirical, first-order formulations in soil decomposition models for soil organic carbon is still needed. This dissertation consists of a series of model sensitivity and uncertainty analyses and identifies dominant decomposition processes in determining soil organic carbon dynamics. Poorly constrained processes or parameters that require more experimental data integration are also identified. This dissertation also demonstrates the critical role of microbial life-history traits (e.g. microbial dormancy) in the modeling of microbial activity in soil organic matter decomposition models. Finally, this study surveys and synthesizes a number of recently published microbial models and provides suggestions for future microbial model developments.

  11. An online handwritten signature verification system based on ...

    African Journals Online (AJOL)

    Administrateur

    online handwritten signature verification system. We model the handwritten signature by an analytical approach based on the Empirical Mode Decomposition (EMD). The system is provided with a training module and a signature database. The implemented evaluation protocol points out the merit of the adopted ...

  12. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    HEMANTHA KUMAR. Articles written in Sadhana. Volume 42 Issue 7 July 2017 pp 1143-1153. Engine gearbox fault diagnosis using empirical mode decomposition method and Naïve Bayes algorithm · KIRAN VERNEKAR, HEMANTHA KUMAR, K V GANGADHARAN

  13. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    Science.gov (United States)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) generalization of the MSSA (Multichannel Singular Spectrum Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer than the real record in the STEOF basis; (iii) use of the numerically produced STEOF basis for exclusion of 'too slow' (and thus not correctly represented) processes from the real data. Applying the method, by means of vector time series generated numerically by the INM RAS Coupled Climate Model [2], two climatic modes with noticeably different time scales (3-5 and 9-11 years) are separated from real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
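
    The trajectory-matrix-plus-SVD idea underlying (M)SSA can be sketched for a univariate toy series as below; the paper's MSSA generalization operates on space-distributed (vector) data, so this is only a simplified illustration with invented data.

```python
# Generic sketch of the trajectory-matrix + SVD idea behind (M)SSA.
# Univariate toy series only; the paper's MSSA operates on space-distributed data.
import numpy as np

def ssa_components(series, window, n_comp):
    n = series.size
    k = n - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])  # window x k
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for j in range(n_comp):
        elem = s[j] * np.outer(U[:, j], Vt[j])        # rank-1 elementary matrix
        # Diagonal (Hankel) averaging back to a time series.
        rec = np.array([np.mean(elem[::-1].diagonal(i - window + 1))
                        for i in range(n)])
        comps.append(rec)
    return np.array(comps)

t = np.arange(480)  # e.g. 40 years of monthly anomalies (synthetic)
series = (np.sin(2 * np.pi * t / 48) + 0.5 * np.sin(2 * np.pi * t / 120)
          + 0.3 * np.random.default_rng(4).standard_normal(t.size))
modes = ssa_components(series, window=120, n_comp=4)
print("approx. variance fraction in first four reconstructed components:",
      round(float(np.var(modes.sum(axis=0)) / np.var(series)), 2))
```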

  14. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  15. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  16. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed by an exchange reaction of the perchloric acid, which appears during the decomposition, with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat conductivity detector. The exchange reaction yielding CO2 is quantitative, it is not the limiting step and it does not distort the kinetics of the perchlorate decomposition. The solid products of decomposition were studied by infrared and NMR spectroscopy, X-ray diffraction, thermography and chemical analysis. A mechanism suggested for the decomposition involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage

  17. Application of microscopy technology in thermo-catalytic methane decomposition to hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Irene Lock Sow, E-mail: irene.sowmei@gmail.com; Lock, S. S. M., E-mail: serenelock168@gmail.com; Abdullah, Bawadi, E-mail: bawadi-abdullah@petronas.com.my [Chemical Engineering Department, Universiti Teknologi PETRONAS, Bandar Sri Iskandar, 31750, Perak (Malaysia)

    2015-07-22

    Hydrogen production from the direct thermo-catalytic decomposition of methane is a promising alternative for clean fuel production because it produces pure hydrogen without any COx emissions. However, thermal decomposition of methane can hardly be of any practical and empirical interest to industry unless highly efficient and effective catalysts, in terms of both specific activity and operational lifetime, are developed. In this work, bimetallic Ni-Pd catalysts on a gamma-alumina support were developed for the methane cracking process using co-precipitation and incipient wetness impregnation methods. The calcined catalysts were characterized to determine their morphologies and physico-chemical properties using the Brunauer-Emmett-Teller method, Field Emission Scanning Electron Microscopy, Energy-dispersive X-ray spectroscopy and Thermogravimetric Analysis. The results suggested that the catalyst prepared by the co-precipitation method exhibits homogeneous morphology, higher surface area, uniform nickel and palladium dispersion and higher thermal stability compared to the catalyst prepared by the wet impregnation method. These characteristics are significant for avoiding deactivation of the catalysts due to sintering and carbon deposition during the methane cracking process.

  18. Humidity effects on surface dielectric barrier discharge for gaseous naphthalene decomposition

    Science.gov (United States)

    Abdelaziz, Ayman A.; Ishijima, Tatsuo; Seto, Takafumi

    2018-04-01

    Experiments are performed using dry and humid air to clarify the effects of water vapour on the characteristics of surface dielectric barrier discharge (SDBD) and to investigate its impact on the performance of the SDBD for decomposition of gaseous naphthalene in an air stream. The current characteristics, including the discharge and the capacitive currents, are analyzed in detail and the discharge mechanism is explored. The results confirmed that the humidity affected the microdischarge distribution without affecting the discharge mode. Interestingly, it is found that the water vapour had a significant influence on the capacitance of the reactor due to its deposition on the discharge electrode and the dielectric, which, in turn, affects the power loss in the dielectric and the total power consumed in the reactor. Thus, the effect of humidity on the power loss in the dielectric should be considered in addition to its effect on the attachment coefficient. Additionally, there was an optimum humidity level for the decomposition of naphthalene in the SDBD, and its value depended on the gas composition: the maximum naphthalene decomposition efficiency in O2/H2O was achieved at a humidity level of ~10%, lower than that obtained in air/H2O (~28%). The results also revealed that the role of humidity in the decomposition efficiency was not significant in humidified O2 at a high power level. This was attributed to the significant increase in oxygen-derived species (such as O atoms and O3) at high power, which was enough to overcome the negative effects of the humidity.

  19. Microbial decomposition of keratin in nature-a new hypothesis of industrial relevance.

    Science.gov (United States)

    Lange, Lene; Huang, Yuhong; Busk, Peter Kamp

    2016-03-01

    Discovery of keratin-degrading enzymes from fungi and bacteria has primarily focused on finding one protease with efficient keratinase activity. Recently, an investigation was conducted of all keratinases secreted from a fungus known to grow on keratinaceous materials, such as feather, horn, and hooves. The study demonstrated that a minimum of three keratinases is needed to break down keratin, an endo-acting, an exo-acting, and an oligopeptide-acting keratinase. Further, several studies have documented that disruption of sulfur bridges of the keratin structure acts synergistically with the keratinases to loosen the molecular structure, thus giving the enzymes access to their substrate, the protein structure. With such complexity, it is relevant to compare microbial keratin decomposition with the microbial decomposition of well-studied polymers such as cellulose and chitin. Interestingly, it was recently shown that the specialized enzymes, lytic polysaccharide monooxygenases (LPMOs), shown to be important for breaking the recalcitrance of cellulose and chitin, are also found in keratin-degrading fungi. A holistic view of the complex molecular self-assembling structure of keratin and knowledge about enzymatic and boosting factors needed for keratin breakdown have been used to formulate a hypothesis for the mode of action of the LPMOs in keratin decomposition and for a model for degradation of keratin in nature. Testing such hypotheses and models still needs to be done. Even now, the hypothesis can serve as an inspiration for designing industrial processes for keratin decomposition for conversion of unexploited waste streams, chicken feather, and pig bristles into bioaccessible animal feed.

  20. Will the world run out of land? A Kaya-type decomposition to study past trends of cropland expansion

    Science.gov (United States)

    Huber, Veronika; Neher, Ina; Bodirsky, Benjamin L.; Höfner, Kathrin; Schellnhuber, Hans Joachim

    2014-01-01

    Globally, the further expansion of cropland is limited by the availability of adequate land and by the necessity to spare land for nature conservation and carbon sequestration. Analyzing the causes of past land-use changes can help to better understand the potential drivers of land scarcities of the future. Using the FAOSTAT database, we quantify the contribution of four major factors, namely human population growth, rising per-capita caloric consumption (including food intake and household waste), processing losses (including conversion of vegetal into animal products and non-food use of crops), and yield gains, to cropland expansion rates of the past (1961-2007). We employ a Kaya-type decomposition method that we have adapted to be applicable to drivers of cropland expansion at global and national level. Our results indicate that, all else equal, without the yield gains observed globally since 1961, additional land of the size of Australia would have been put under the plough by 2007. Under this scenario the planetary boundary on global cropland use would have already been transgressed today. By contrast, without rising per-capita caloric consumption and population growth since 1961, an area as large as nearly half and all of Australia could have been spared, respectively. Yield gains, with strongest contributions from maize, wheat and rice, have approximately offset the increasing demand of a growing world population. Analyses at the national scale reveal different modes of land-use transitions dependent on development stage, dietary standards, and international trade intensity of the countries. Despite some well-acknowledged caveats regarding the non-independence of decomposition factors, these results contribute to the empirical ranking of different drivers needed to set research priorities and prepare well-informed projections of land-use change until 2050 and beyond.
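
    One way to write such a Kaya-type identity for cropland area (a hedged reconstruction from the four factors named above, not necessarily the paper's exact notation) is:

```latex
% Hypothetical notation: A = cropland area, P = population, C = food calories
% consumed, Q = primary crop calories produced, so Q/C captures processing
% losses and A/Q is the inverse of the yield.
A \;=\; P \cdot \frac{C}{P} \cdot \frac{Q}{C} \cdot \frac{A}{Q}
  \;=\; \text{population} \times \text{per-capita consumption}
        \times \text{processing losses} \times \frac{1}{\text{yield}}
```

    Because the identity is multiplicative, changes in cropland area can be attributed to the four factors, for instance through logarithmic-mean (LMDI-type) weights, which is in the spirit of the counterfactual comparisons quoted above.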

  1. Will the world run out of land? A Kaya-type decomposition to study past trends of cropland expansion

    International Nuclear Information System (INIS)

    Huber, Veronika; Neher, Ina; Bodirsky, Benjamin L; Schellnhuber, Hans Joachim; Höfner, Kathrin

    2014-01-01

    Globally, the further expansion of cropland is limited by the availability of adequate land and by the necessity to spare land for nature conservation and carbon sequestration. Analyzing the causes of past land-use changes can help to better understand the potential drivers of land scarcities of the future. Using the FAOSTAT database, we quantify the contribution of four major factors, namely human population growth, rising per-capita caloric consumption (including food intake and household waste), processing losses (including conversion of vegetal into animal products and non-food use of crops), and yield gains, to cropland expansion rates of the past (1961–2007). We employ a Kaya-type decomposition method that we have adapted to be applicable to drivers of cropland expansion at global and national level. Our results indicate that, all else equal, without the yield gains observed globally since 1961, additional land of the size of Australia would have been put under the plough by 2007. Under this scenario the planetary boundary on global cropland use would have already been transgressed today. By contrast, without rising per-capita caloric consumption and population growth since 1961, an area as large as nearly half and all of Australia could have been spared, respectively. Yield gains, with strongest contributions from maize, wheat and rice, have approximately offset the increasing demand of a growing world population. Analyses at the national scale reveal different modes of land-use transitions dependent on development stage, dietary standards, and international trade intensity of the countries. Despite some well-acknowledged caveats regarding the non-independence of decomposition factors, these results contribute to the empirical ranking of different drivers needed to set research priorities and prepare well-informed projections of land-use change until 2050 and beyond. (paper)

  2. Empirical seasonal forecasts of the NAO

    Science.gov (United States)

    Sanchezgomez, E.; Ortizbevia, M.

    2003-04-01

    We present here seasonal forecasts of the North Atlantic Oscillation (NAO) issued from ocean predictors with an empirical procedure. The Singular Value Decomposition (SVD) of the cross-correlation matrix between predictor and predictand fields at the lag used for the forecast lead is at the core of the empirical model. The main predictor field is sea surface temperature anomalies, although sea ice cover anomalies are also used. Forecasts are issued in probabilistic form. The model is an improvement over a previous version (1), where sea level pressure anomalies were first forecast and the NAO index was built from this forecast field. Both the correlation skill between forecast and observed fields and the number of forecasts that hit the correct NAO sign are used to assess the forecast performance; both are usually above the values found for forecasts issued assuming persistence. For certain seasons and/or leads, values of the skill are above the 0.7 usefulness threshold. References (1) SanchezGomez, E. and Ortiz Bevia M., 2002, Estimacion de la evolucion pluviometrica de la Espana Seca atendiendo a diversos pronosticos empiricos de la NAO, in 'El Agua y el Clima', Publicaciones de la AEC, Serie A, N 3, pp 63-73, Palma de Mallorca, Spain

  3. An informatics based analysis of the impact of isotope substitution on phonon modes in graphene

    International Nuclear Information System (INIS)

    Broderick, Scott; Srinivasan, Srikant; Rajan, Krishna; Ray, Upamanyu; Balasubramanian, Ganesh

    2014-01-01

    It is shown by informatics that the high frequency short ranged modes exert a significant influence in impeding thermal transport through isotope substituted graphene nanoribbons. Using eigenvalue decomposition methods, we have extracted features in the phonon density of states spectra that reveal correlations between isotope substitution and phonon modes. This study also provides a data driven computational framework for the linking of materials chemistry and transport properties in 2D systems.

  4. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  5. Policy learning in the Eurozone crisis: modes, power and functionality.

    Science.gov (United States)

    Dunlop, Claire A; Radaelli, Claudio M

    In response to the attacks on the sovereign debt of some Eurozone countries, European Union (EU) leaders have created a set of preventive and corrective policy instruments to coordinate macro-economic policies and reforms. In this article, we deal with the European Semester, a cycle of information exchange, monitoring and surveillance. Countries that deviate from the targets are subjected to increasing monitoring and more severe 'corrective' interventions, in a pyramid of responsive exchanges between governments and EU institutions. This is supposed to generate coordination and convergence towards balanced economies via mechanisms of learning. But who is learning what? Can the EU learn in the 'wrong' mode? We contribute to the literature on theories of the policy process by showing how modes of learning can be operationalized and used in empirical analysis. We use policy learning as theoretical framework to establish empirically the prevalent mode of learning and its implications for both the power of the Commission and the normative question of whether the EU is learning in the 'correct' mode.

  6. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
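
    The MOG baseline mentioned above can be sketched with scikit-learn: fit Gaussian mixtures of increasing order and keep the order that minimises the AIC. The data and the range of candidate orders below are placeholders for illustration.

```python
# Sketch of the mixture-of-Gaussians (MOG) baseline: choose the number of modes
# by minimising the Akaike Information Criterion. Uses scikit-learn; toy data only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Toy 2-D data drawn from three well-separated clusters ("modes").
data = np.vstack([rng.normal(loc, 0.5, size=(200, 2))
                  for loc in ([0, 0], [4, 0], [2, 3])])

aic_scores = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(data)
    aic_scores[k] = gmm.aic(data)

best_k = min(aic_scores, key=aic_scores.get)
print("estimated number of modes:", best_k)   # expected: 3 for this toy data
```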

  7. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  8. The characteristics of polysaccharides fractions of sunflower obtained in dynamic mode

    International Nuclear Information System (INIS)

    Makhkamov, Kh.K.; Gorshkova, R.M.; Khalikova, S.

    2013-01-01

    This article describes the characteristics of sunflower polysaccharide fractions obtained in dynamic mode. The decomposition of sunflower pectin was studied by means of a continuous fractionation method in a dynamic regime. It was found that the process has an extremal character owing to the heterogeneity of the macromolecular structure. Additional information on the macromolecular structure of sunflower pectin was obtained.

  9. Simultaneous tensor decomposition and completion using factor priors.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  10. Application of the Proper Orthogonal Decomposition to Turbulent Czochralski Convective Flows

    International Nuclear Information System (INIS)

    Rahal, S; Cerisier, P; Azuma, H

    2007-01-01

    The aim of this work is to study the general aspects of the convective flow instabilities in a simulated Czochralski system, considering the influence of buoyancy and crystal rotation. Velocity fields obtained by an ultrasonic technique, the corresponding 2D Fourier spectra and a correlation function have been used. Steady, quasi-periodic and turbulent flows are successively recognized as the Reynolds number is increased for a fixed Rayleigh number. The proper orthogonal decomposition method was applied and the number of modes involved in the dynamics of the turbulent flows was calculated. As far as we know, this method has been used for the first time to study Czochralski convective flows. The method also provides information on the most important modes and allows simple theoretical models to be established. Large rotation rates of the crystal were found to stabilize the flow, and conversely the temperature gradients destabilize the flow. Indeed, increasing the rotation effects reduces the number of involved modes and oscillations, while, as expected, increasing the buoyancy effects involves more modes in the dynamics. Thus, the flow oscillations can be reduced either by increasing the crystal rotation rate to an adequate value, as shown in this study, or by imposing a magnetic field
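
    The POD step itself reduces to a singular value decomposition of the mean-subtracted snapshot matrix; the following generic sketch uses synthetic snapshots in place of the measured Czochralski velocity fields.

```python
# Generic sketch of snapshot POD: stack velocity-field snapshots as columns,
# subtract the mean field, and take the SVD. Synthetic data stand in for the
# measured Czochralski velocity fields.
import numpy as np

rng = np.random.default_rng(6)
n_points, n_snapshots = 500, 200          # spatial points x time snapshots

x = np.linspace(0, 1, n_points)
t = np.linspace(0, 10, n_snapshots)
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(6 * np.pi * t))
             + 0.05 * rng.standard_normal((n_points, n_snapshots)))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s**2 / np.sum(s**2)
n_modes = int(np.searchsorted(np.cumsum(energy), 0.99) + 1)
print(f"{n_modes} POD modes capture 99% of the fluctuation energy")
spatial_modes = U[:, :n_modes]                        # POD modes
temporal_coeffs = s[:n_modes, None] * Vt[:n_modes]    # their time coefficients
```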

  11. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of /sup 14/C-n-Hexadecane labelled diesel oil by special microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil an increase of the decomposition rate was found only in the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  12. Mode-2 social science knowledge production?

    DEFF Research Database (Denmark)

    Kropp, Kristoffer; Blok, Anders

    2011-01-01

    The notion of mode-2 knowledge production points to far-reaching transformations in science-society relations, but few attempts have been made to investigate what growing economic and political demands on research may entail for the social sciences. This case study of new patterns of social science...... knowledge production outlines some major institutional and cognitive changes in Danish academic sociology during 'mode-2' times, from the 1980s onwards. Empirically, we rely on documentary sources and qualitative interviews with Danish sociologists, aiming to reconstruct institutional trajectories...... show how a particular cognitive modality of sociology — 'welfare reflexivity' — has become a dominant form of Danish sociological knowledge production. Welfare reflexivity has proven a viable response to volatile mode-2 policy conditions....

  13. Assaying Used Nuclear Fuel Assemblies Using Lead Slowing-Down Spectroscopy and Singular Value Decomposition

    International Nuclear Information System (INIS)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2013-01-01

    This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the measurement of the assay of the used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated

  14. Decomposition of tetrachloroethylene by ionizing radiation

    International Nuclear Information System (INIS)

    Hakoda, T.; Hirota, K.; Hashimoto, S.

    1998-01-01

    The decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information on the treatment of industrial off-gas. Model gases, air containing chloroethenes, were confined in batch reactors and irradiated with an electron beam and gamma rays. The G-values of decomposition were larger in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene for electron beam irradiation and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene for gamma ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, G-values of decomposition under EB irradiation increased with the number of chlorine atoms in a molecule, while those under gamma ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene under EB irradiation was the largest among all the chloroethenes. In order to examine the effect of the initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with electron beam and gamma ray. The G-values of decomposition under both types of irradiation increased with the initial concentration, and those under electron beam irradiation were two times larger than those under gamma ray irradiation

  15. A Raman spectroscopic determination of the kinetics of decomposition of ammonium chromate (NH4)2CrO4

    De Waal, D.; Heyns, A. M.; Range, K.-J.

    1989-06-01

    Raman spectroscopy was used as a method in the kinetic investigation of the thermal decomposition of solid (NH4)2CrO4. Time-dependent measurements of the intensity of the totally symmetric Cr-O stretching mode of (NH4)2CrO4 have been made between 343 and 363 K. A short initial acceleratory period is observed at lower temperatures and the decomposition reaction decelerates after the maximum decomposition rate has been reached at all temperatures. These results can be interpreted in terms of the Avrami-Erofe'ev law 1 - (χr)^(1/2) = kt, where χr is the fraction of reactant at time t. At 358 K, k is equal to 1.76 ± 0.01 × 10^-3 s^-1 for microcrystals and for powdered samples. Activation energies of 97 ± 10 and 49 ± 0.9 kJ mol^-1 have been calculated for microcrystalline and powdered samples, respectively.
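
    The quoted rate law can be linearised and fitted directly. The sketch below generates synthetic data obeying 1 - (χr)^(1/2) = kt, recovers k by least squares at two temperatures, and converts the pair of rate constants into an activation energy through the Arrhenius relation; all numerical values are illustrative placeholders, not the paper's measurements.

```python
# Sketch: recover the rate constant k from 1 - (chi_r)**0.5 = k*t and estimate
# an activation energy from two temperatures using the Arrhenius relation.
# All numbers below are synthetic placeholders, not the measured data.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def synth_chi_r(t, k):
    """Fraction of reactant remaining under 1 - chi_r**0.5 = k*t."""
    return np.clip(1.0 - k * t, 0.0, 1.0) ** 2

def fit_k(t, chi_r):
    y = 1.0 - np.sqrt(chi_r)                     # linearised form: y = k * t
    return float(np.sum(y * t) / np.sum(t * t))  # least-squares slope through origin

t = np.linspace(0, 400, 80)                      # seconds
rng = np.random.default_rng(7)
k_true = {343.0: 6.0e-4, 358.0: 1.8e-3}          # hypothetical k(T) in s^-1

k_fit = {}
for T, k in k_true.items():
    chi = synth_chi_r(t, k) + 0.005 * rng.standard_normal(t.size)
    k_fit[T] = fit_k(t, np.clip(chi, 0.0, 1.0))

T1, T2 = sorted(k_fit)
Ea = R * np.log(k_fit[T2] / k_fit[T1]) / (1.0 / T1 - 1.0 / T2)  # Arrhenius slope
print({T: f"{k:.2e} s^-1" for T, k in k_fit.items()},
      f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```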

  16. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing better insight into the relationship of copper (II), solution temperature, and solution pH to NaTPB stability

  17. Thermal decomposition of γ-irradiated lead nitrate

    International Nuclear Information System (INIS)

    Nair, S.M.K.; Kumar, T.S.S.

    1990-01-01

    The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs

  18. Ozone time scale decomposition and trend assessment from surface observations

    Science.gov (United States)

    Boleti, Eirini; Hueglin, Christoph; Takahama, Satoshi

    2017-04-01

    Emissions of ozone precursors have been regulated in Europe since around 1990 with control measures primarily targeting industries and traffic. In order to understand how these measures have affected air quality, it is now important to investigate concentrations of tropospheric ozone in different types of environments, based on their NOx burden, and in different geographic regions. In this study, we analyze high quality data sets for Switzerland (NABEL network) and the whole of Europe (AirBase) for the last 25 years to calculate long-term trends of ozone concentrations. A sophisticated time scale decomposition method, called the Ensemble Empirical Mode Decomposition (EEMD) (Huang, 1998; Wu, 2009), is used for decomposition of the different time scales of the variation of ozone, namely the long-term trend, seasonal and short-term variability. This allows subtraction of the seasonal pattern of ozone from the observations and estimation of long-term changes of ozone concentrations with lower uncertainty ranges compared to typical methodologies used. We observe that, despite the implementation of regulations, for most of the measurement sites ozone daily mean values have been increasing until around the mid-2000s. Afterwards, we observe a decline or a leveling off in the concentrations; certainly a late effect of limitations in ozone precursor emissions. On the other hand, the peak ozone concentrations have been decreasing for almost all regions. The evolution in the trend exhibits some differences between the different types of measurement. In addition, ozone is known to be strongly affected by meteorology. In the applied approach, some of the meteorological effects are already captured by the seasonal signal and already removed in the de-seasonalized ozone time series. For adjustment of the influence of meteorology on the higher frequency ozone variation, a statistical approach based on Generalized Additive Models (GAM) (Hastie, 1990; Wood, 2006), which corrects for meteorological

  19. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  20. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  1. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    In this paper, when the azimuth direction of a polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that, for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  2. A low-dimensional tool for predicting force decomposition coefficients for varying inflow conditions

    KAUST Repository

    Ghommem, Mehdi

    2013-01-01

    We develop a low-dimensional tool to predict the effects of unsteadiness in the inflow on force coefficients acting on a circular cylinder using proper orthogonal decomposition (POD) modes from steady flow simulations. The approach is based on combining POD and linear stochastic estimator (LSE) techniques. We use POD to derive a reduced-order model (ROM) to reconstruct the velocity field. To overcome the difficulty of developing a ROM using Poisson's equation, we relate the pressure field to the velocity field through a mapping function based on LSE. The use of this approach to derive force decomposition coefficients (FDCs) under unsteady mean flow from basis functions of the steady flow is illustrated. For both steady and unsteady cases, the final outcome is a representation of the lift and drag coefficients in terms of velocity and pressure temporal coefficients. Such a representation could serve as the basis for implementing control strategies or conducting uncertainty quantification. Copyright © 2013 Inderscience Enterprises Ltd.

  3. A low-dimensional tool for predicting force decomposition coefficients for varying inflow conditions

    KAUST Repository

    Ghommem, Mehdi; Akhtar, Imran; Hajj, M. R.

    2013-01-01

    We develop a low-dimensional tool to predict the effects of unsteadiness in the inflow on force coefficients acting on a circular cylinder using proper orthogonal decomposition (POD) modes from steady flow simulations. The approach is based on combining POD and linear stochastic estimator (LSE) techniques. We use POD to derive a reduced-order model (ROM) to reconstruct the velocity field. To overcome the difficulty of developing a ROM using Poisson's equation, we relate the pressure field to the velocity field through a mapping function based on LSE. The use of this approach to derive force decomposition coefficients (FDCs) under unsteady mean flow from basis functions of the steady flow is illustrated. For both steady and unsteady cases, the final outcome is a representation of the lift and drag coefficients in terms of velocity and pressure temporal coefficients. Such a representation could serve as the basis for implementing control strategies or conducting uncertainty quantification. Copyright © 2013 Inderscience Enterprises Ltd.

  4. Domain decomposition methods for the neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2010-01-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems and has demonstrated its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)

  5. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    This article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by hydrochloric acid. The interaction of the boron-containing ores of the Ak-Arkhar deposit with mineral acids, including hydrochloric acid, was studied. The optimal conditions for extraction of the valuable components from danburite were determined, as was the chemical composition of the danburite from the Ak-Arkhar deposit. The kinetics of decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the decomposition process was calculated.

  6. Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Yi [Georgia Inst. of Technology, Atlanta, GA (United States)

    2014-11-24

    DOE-GTRC-05596 11/24/2014 Collaborative Research: Process-Resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate PI: Dr. Yi Deng (PI) School of Earth and Atmospheric Sciences Georgia Institute of Technology 404-385-1821, yi.deng@eas.gatech.edu El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The projection of future changes in the ENSO and AM variability, however, remains highly uncertain with the state-of-the-science climate models. This project conducted a process-resolving, quantitative evaluation of the ENSO and AM variability in the modern reanalysis observations and in climate model simulations. The goal is to identify and understand the sources of uncertainty and biases in models’ representation of ENSO and AM variability. Using a feedback analysis method originally formulated by one of the collaborative PIs, we partitioned the 3D atmospheric temperature anomalies and surface temperature anomalies associated with ENSO and AM variability into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. In the past 4 years, the research conducted at Georgia Tech under the support of this project has led to 15 peer-reviewed publications and 9 conference/workshop presentations. Two graduate students and one postdoctoral fellow also received research training through participating in the project activities. This final technical report summarizes key scientific discoveries we made and also provides a list of all publications and conference presentations resulting from research activities at Georgia Tech. The main findings include

  7. On the unsteady wake dynamics behind a circular disk using fully 3D proper orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jianzhi; Liu, Minghou; Gu, Hailin; Yao, Mengyun [Department of Thermal Science and Energy Engineering, University of Science and Technology of China, Hefei, Anhui 230027 (China); Wu, Guang, E-mail: mhliu@ustc.edu.cn [Technical Services Engineer, ANSYS, Inc (United States)

    2017-02-15

    In the present work, the wakes behind a circular disk at various transitional regimes are numerically explored using fully 3D proper orthogonal decomposition (POD). The Reynolds numbers considered in this study (Re = 152, 170, 300 and 3000) cover four transitional states, i.e. the reflectional-symmetry-breaking (RSB) mode, the standing wave (SW) mode, a weakly chaotic state, and a higher-Reynolds-number state. Through analysis of the spatial POD modes at different wake states, it is found that a planar-symmetric vortex shedding mode characterized by the first mode pair is persistent in all the states. When the wake develops into a weakly chaotic state, a new vortex shedding mode characterized by the second mode pair begins to appear and completely forms at the higher-Reynolds-number state of Re = 3000, i.e. planar-symmetry-breaking vortex shedding mode. On the other hand, the coherent structure at Re = 3000 extracted from the first two POD modes shows a good resemblance to the wake configuration in the SW mode, while the coherent structure reconstructed from the first four POD modes shows a good resemblance to the wake configuration in the RSB mode. The present results indicate that the dynamics or flow instabilities observed at transitional RSB and SW modes are still preserved in a higher-Reynolds-number regime. (paper)
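
    For reference, snapshot POD of the kind used above reduces to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below uses random placeholder snapshots in place of the DNS velocity fields; grid size, snapshot count and the reconstruction rank are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_snapshots = 5000, 200          # flattened grid size, number of time samples
        snapshots = rng.standard_normal((n_points, n_snapshots))   # columns = velocity snapshots

        mean_flow = snapshots.mean(axis=1, keepdims=True)
        fluct = snapshots - mean_flow              # POD is applied to the fluctuating field

        # Economy-size SVD: columns of U are the spatial POD modes, s**2 are the modal
        # energies, and rows of Vt give the temporal coefficients of each mode.
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
        energy = s**2 / np.sum(s**2)
        print("energy captured by the first mode pair:", energy[:2].sum())

        # Low-order reconstruction from the first four modes (cf. the RSB-like structure above).
        recon4 = mean_flow + U[:, :4] @ np.diag(s[:4]) @ Vt[:4, :]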

  8. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000 which is some years back. After giving an update and presenting the key trends in the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments of index decomposition analysis are briefly reviewed.
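
    As a pointer to what an LMDI implementation involves, the sketch below carries out the additive LMDI-I decomposition for a simple two-factor identity (energy = activity x intensity) on made-up sectoral data; it is an illustration of the formula only, not code from any of the eight models reviewed.

        import numpy as np

        def log_mean(a, b):
            """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

        # Hypothetical sectoral data for a base year (0) and a target year (T).
        activity0 = np.array([100.0, 50.0, 30.0]);  intensity0 = np.array([0.8, 1.2, 2.0])
        activityT = np.array([120.0, 60.0, 25.0]);  intensityT = np.array([0.7, 1.1, 2.1])

        E0, ET = activity0 * intensity0, activityT * intensityT
        w = log_mean(ET, E0)                                   # LMDI weights per sector

        delta_activity  = np.sum(w * np.log(activityT / activity0))
        delta_intensity = np.sum(w * np.log(intensityT / intensity0))
        # The two effects sum exactly to the total change, which is the appeal of LMDI-I.
        print(delta_activity + delta_intensity, ET.sum() - E0.sum())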

  9. Automated mode shape estimation in agent-based wireless sensor networks

    Science.gov (United States)

    Zimmerman, Andrew T.; Lynch, Jerome P.

    2010-04-01

    Recent advances in wireless sensing technology have made it possible to deploy dense networks of sensing transducers within large structural systems. Because these networks leverage the embedded computing power and agent-based abilities integral to many wireless sensing devices, it is possible to analyze sensor data autonomously and in-network. In this study, market-based techniques are used to autonomously estimate mode shapes within a network of agent-based wireless sensors. Specifically, recent work in both decentralized Frequency Domain Decomposition and market-based resource allocation is leveraged to create a mode shape estimation algorithm derived from free-market principles. This algorithm allows an agent-based wireless sensor network to autonomously shift emphasis between improving mode shape accuracy and limiting the consumption of certain scarce network resources: processing time, storage capacity, and power consumption. The developed algorithm is validated by successfully estimating mode shapes using a network of wireless sensor prototypes deployed on the mezzanine balcony of Hill Auditorium, located on the University of Michigan campus.

  10. Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection

    Science.gov (United States)

    Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing

    2016-04-01

    Observation of human motions through a wall is an important issue in security applications and search-and-rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of the employment of ultranarrow pulses. It is able to distinguish closely positioned targets and to provide time-lapse information on targets. Moreover, UWB radar shows good performance in wall penetration because the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features including respiration, swinging arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements like walking. The radar gains the reflections from each human body part and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools to analyze and extract the micro-Doppler effects caused by the periodic movements in the reflected radar signal, such as the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. By bypassing the Hilbert transform, the LMD has no demodulation error coming from the window effect and involves no negative frequency without physical sense. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD) because LMD uses smoothed local

  11. The evolution of transmission mode

    Science.gov (United States)

    Forbes, Mark R.; Hauffe, Heidi C.; Kallio, Eva R.; Okamura, Beth; Sait, Steven M.

    2017-01-01

    This article reviews research on the evolutionary mechanisms leading to different transmission modes. Such modes are often under genetic control of the host or the pathogen, and often in conflict with each other via trade-offs. Transmission modes may vary among pathogen strains and among host populations. Evolutionary changes in transmission mode have been inferred through experimental and phylogenetic studies, including changes in transmission associated with host shifts and with evolution of the unusually complex life cycles of many parasites. Understanding the forces that determine the evolution of particular transmission modes presents a fascinating medley of problems for which there is a lack of good data and often a lack of conceptual understanding or appropriate methodologies. Our best information comes from studies that have been focused on the vertical versus horizontal transmission dichotomy. With other kinds of transitions, theoretical approaches combining epidemiology and population genetics are providing guidelines for determining when and how rapidly new transmission modes may evolve, but these are still in need of empirical investigation and application to particular cases. Obtaining such knowledge is a matter of urgency in relation to extant disease threats. This article is part of the themed issue ‘Opening the black box: re-examining the ecology and evolution of parasite transmission’. PMID:28289251

  12. Audit mode change, corporate governance

    Directory of Open Access Journals (Sweden)

    Limei Cao

    2015-12-01

    This study investigates changes in audit strategy in China following the introduction of risk-based auditing standards rather than an internal control-based audit mode. Specifically, we examine whether auditors are implementing the risk-based audit mode to evaluate corporate governance before distributing audit resources. The results show that under the internal control-based audit mode, the relationship between audit effort and corporate governance was weak. However, implementation of the risk-based mode required by the new auditing standards has significantly enhanced the relationship between audit effort and corporate governance. Since the change in audit mode, the Big Ten have demonstrated a significantly better grasp of governance risk and allocated their audit effort accordingly, relative to smaller firms. The empirical evidence indicates that auditors have adjusted their audit strategy to meet the regulations, risk-based auditing is being achieved to a degree, reasonable and effective corporate governance helps to optimize audit resource allocation, and smaller auditing firms in particular should urgently strengthen their risk-based auditing capability. Overall, our findings imply that the mandatory switch to risk-based auditing has optimized audit effort in China.

  13. Multiple calibration decomposition analysis: Energy use and carbon dioxide emissions in the Japanese economy, 1970-1995

    International Nuclear Information System (INIS)

    Okushima, Shinichiro; Tamura, Makoto

    2007-01-01

    The purpose of this paper is to present a new approach to evaluating structural change of the economy in a multisector general equilibrium framework. The multiple calibration technique is applied to an ex post decomposition analysis of structural change between periods, enabling the distinction between price substitution and technological change to be made for each sector. This approach has the advantage of sounder microtheoretical underpinnings when compared with conventional decomposition methods. The proposed technique is empirically applied to changes in energy use and carbon dioxide (CO2) emissions in the Japanese economy from 1970 to 1995. The results show that technological change is of great importance for curtailing energy use and CO2 emissions in Japan. Total CO2 emissions increased during this period primarily because of economic growth, which is represented by final demand effects. On the other hand, the effects such as technological change for labor or energy mitigated the increase in CO2 emissions.

  14. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of analysis of decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%

  15. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    Science.gov (United States)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate the modes explaining the internal variability. The method we present is aimed at overcoming both these problems. It is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes into account external forcing signals. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used for extracting the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity and volcanic activity. The structure of the revealed teleconnection patterns as well as their forecast under different CO2 emission scenarios are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016

  16. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage, or extent, of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  17. Analyzing nonstationary financial time series via hilbert-huang transform (HHT)

    Science.gov (United States)

    Huang, Norden E. (Inventor)

    2008-01-01

    An apparatus, computer program product and method of analyzing non-stationary time varying phenomena. A representation of a non-stationary time varying phenomenon is recursively sifted using Empirical Mode Decomposition (EMD) to extract intrinsic mode functions (IMFs). The representation is filtered to extract intrinsic trends by combining a number of IMFs. The intrinsic trend is inherent in the data and identifies an IMF indicating the variability of the phenomena. The trend also may be used to detrend the data.
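
    As a concrete illustration of the sifting procedure mentioned above, the following toy implementation extracts IMFs from a synthetic two-tone signal with a trend. It is not the patented algorithm: envelope handling and stopping criteria are deliberately simplistic, and all parameters are illustrative assumptions.

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_once(x, t):
            """One sifting pass: subtract the mean of the upper and lower spline envelopes."""
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                return None                          # too few extrema: treat x as the residual trend
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - 0.5 * (upper + lower)

        def emd(x, t, max_imfs=6, n_sift=10):
            """Return a list of intrinsic mode functions (IMFs) plus the final residual."""
            imfs, residual = [], x.copy()
            for _ in range(max_imfs):
                h = residual.copy()
                for _ in range(n_sift):              # fixed number of sifts for simplicity
                    h_new = sift_once(h, t)
                    if h_new is None:
                        return imfs + [residual]
                    h = h_new
                imfs.append(h)
                residual = residual - h
            return imfs + [residual]

        t = np.linspace(0, 1, 2000)
        signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + 0.3 * t
        components = emd(signal, t)
        print(len(components), "components; IMF 1 should carry the 40 Hz oscillation")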

  18. Component mode synthesis methods applied to 3D heterogeneous core calculations, using the mixed dual finite element solver MINOS

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P.; Baudron, A. M.; Lautard, J. J. [Commissariat a l' Energie Atomique, DEN/DANS/DM2S/SERMA/LENR, CEA Saclay, 91191 Gif sur Yvette (France)

    2006-07-01

    This paper describes a new technique for determining the pin power in heterogeneous core calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions: in the first one (Component Mode Synthesis method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (Factorized Component Mode Synthesis method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher order Eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well-fitted for heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher order angular approximations - particularly easily to a SPN approximation - the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with UOX and MOX assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)
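
    The component mode synthesis idea can likewise be sketched on a 1D model problem. The toy below computes local eigenmodes of two overlapping sub-domains of a discrete Laplacian, extends them by zero, and uses them as a Galerkin basis for the global eigenvalue problem. Local Dirichlet conditions are used here purely for brevity (the method described above uses periodic local conditions), and the sub-domain split and number of local modes are arbitrary assumptions.

        import numpy as np
        from scipy.linalg import eigh

        n = 400                                    # interior grid points on (0, 1)
        h = 1.0 / (n + 1)
        main = 2.0 / h**2 * np.ones(n)
        off = -1.0 / h**2 * np.ones(n - 1)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # global stiffness matrix

        def local_modes(idx, n_modes):
            """First eigenvectors of the sub-domain problem, extended by zero to the global grid."""
            sub = A[np.ix_(idx, idx)]
            _, vecs = eigh(sub)
            basis = np.zeros((n, n_modes))
            basis[idx, :] = vecs[:, :n_modes]
            return basis

        dom1 = np.arange(0, int(0.6 * n))          # overlapping index sets
        dom2 = np.arange(int(0.4 * n), n)
        Phi = np.hstack([local_modes(dom1, 8), local_modes(dom2, 8)])

        # Reduced (Galerkin) generalized eigenproblem: (Phi^T A Phi) c = lambda (Phi^T Phi) c
        vals, _ = eigh(Phi.T @ A @ Phi, Phi.T @ Phi)
        print("CMS estimate of the fundamental eigenvalue:", vals[0], " continuum value:", np.pi**2)

    Because the interface region is covered by both sets of local modes, the coarse space can represent a smooth global mode across the sub-domain boundary, which is the essential point of the overlapping construction.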

  19. Component mode synthesis methods applied to 3D heterogeneous core calculations, using the mixed dual finite element solver MINOS

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2006-01-01

    This paper describes a new technique for determining the pin power in heterogeneous core calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions: in the first one (Component Mode Synthesis method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (Factorized Component Mode Synthesis method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher order Eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well-fitted for heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher order angular approximations - particularly easily to a SPN approximation - the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with UOX and MOX assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)

  20. Time Series Decomposition into Oscillation Components and Phase Estimation.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes' method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
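
    The oscillator model can be made concrete with a small sketch: a two-dimensional state is rotated by the oscillation frequency and damped at each step, the observation is its first coordinate, and a Kalman filter recovers the state and hence the instantaneous phase. The parameter values, and the use of a plain filter instead of the empirical Bayes fitting and AIC model selection described above, are illustrative assumptions.

        import numpy as np

        dt, f, a, q, r = 0.01, 6.0, 0.98, 0.05, 0.2      # step, frequency [Hz], damping, noise levels
        theta = 2 * np.pi * f * dt
        F = a * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])     # rotation plus damping
        H = np.array([[1.0, 0.0]])                               # only the first coordinate is observed
        Q, R = q**2 * np.eye(2), np.array([[r**2]])

        rng = np.random.default_rng(1)
        T = 1000
        x = np.zeros(2)
        ys = np.zeros(T)
        for t in range(T):                                       # simulate one oscillation component
            x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
            ys[t] = (H @ x)[0] + rng.normal(0.0, r)

        m, P = np.zeros(2), np.eye(2)                            # Kalman filter: state and phase estimates
        phase = np.zeros(T)
        for t in range(T):
            m, P = F @ m, F @ P @ F.T + Q                        # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                       # update
            m = m + (K @ (ys[t] - H @ m)).ravel()
            P = (np.eye(2) - K @ H) @ P
            phase[t] = np.arctan2(m[1], m[0])                    # instantaneous phase estimate
        print("estimated phase at the final step:", phase[-1])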

  1. Default-mode network and deep gray-matter analysis in neuromyelitis optica patients.

    Science.gov (United States)

    Rueda-Lopes, Fernanda C; Pessôa, Fernanda M C; Tukamoto, Gustavo; Malfetano, Fabíola Rachid; Scherpenhuijzen, Simone Batista; Alves-Leon, Soniza; Gasparetto, Emerson L

    2018-02-20

    The aim of our study was to detect functional changes in the default-mode network of neuromyelitis optica (NMO) patients using resting-state functional magnetic resonance images, together with an evaluation of subcortical gray-matter structure volumes. NMO patients (n=28) and control subjects (n=19) were enrolled. We used the integrated registration and segmentation tool, part of FMRIB's Software Library (FSL), to segment subcortical structures including the thalamus, caudate nucleus, putamen, hippocampus and amygdalae. Resting-state functional magnetic resonance images were post-processed using the Multivariate Exploratory Linear Optimized Decomposition into Independent Components, also part of FSL. Average Z-values extracted from the default-mode network were compared between patients and controls using t-tests. Patients showed increased activity in the default-mode network compared to controls, notably in the precuneus and right hippocampus. The hyperactivity of certain default-mode network areas may reflect cortical compensation for subtle structural damage in NMO patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  2. Changes in the Amplitude and Phase of the Annual Cycle: quantifying from surface wind series in China

    Science.gov (United States)

    Feng, Tao

    2013-04-01

    Climate change is not only reflected in the changes in annual means of climate variables but also in the changes in their annual cycles (seasonality), especially in the regions outside the tropics. Changes in the timing of seasons, especially the wind season, have gained much attention worldwide in the recent decade or so. We introduce long-range correlated surrogate data into the Ensemble Empirical Mode Decomposition method; such surrogates represent the statistical characteristics of the data better than white noise. We name the new method Ensemble Empirical Mode Decomposition with Long-range Correlated noise (EEMD-LRC) and apply it to 600 station wind speed records, investigating the trend in the amplitude of the annual cycle of China's daily mean surface wind speed for the period 1971-2005. The amplitude of the seasonal variation decreased significantly over China in the past half century, which can be well explained by the annual cycle component from EEMD-LRC. Furthermore, the phase change of the annual cycle leads to a strong shortening of the wind season in spring, corresponding with a change in the frequency of strong windy days over Northern China.

  3. Decomposition and forecasting analysis of China's energy efficiency: An application of three-dimensional decomposition and small-sample hybrid models

    International Nuclear Information System (INIS)

    Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei

    2015-01-01

    The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions which are useful for the central government to adjust its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of the small-scale private industrial enterprises; (b) Except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) With regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) By 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; as well as (e) Three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted

  4. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (lambda=254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochrome to the extent of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  5. Multivariate EMD-Based Modeling and Forecasting of Crude Oil Price

    Directory of Open Access Journals (Sweden)

    Kaijian He

    2016-04-01

    Recent empirical studies reveal evidence of the co-existence of heterogeneous data characteristics, distinguishable by time scale, in the movement of crude oil prices. In this paper we propose a new multivariate Empirical Mode Decomposition (EMD)-based model to take advantage of these heterogeneous characteristics of the price movement and to model them in the crude oil markets. Empirical studies in benchmark crude oil markets confirm that more diverse heterogeneous data characteristics can be revealed and modeled in the projected time-delayed domain. The proposed model demonstrates superior performance compared to the benchmark models.

  6. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5) χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles

  7. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products

  8. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography.

    Science.gov (United States)

    Leblond, Frederic; Tichauer, Kenneth M; Pogue, Brian W

    2010-11-29

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions.

  9. Identification of flow structures in fully developed canonical and wavy channels by means of modal decomposition techniques

    Science.gov (United States)

    Ghebali, Sacha; Garicano-Mena, Jesús; Ferrer, Esteban; Valero, Eusebio

    2018-04-01

    A Dynamic Mode Decomposition (DMD) of Direct Numerical Simulations (DNS) of fully developed channel flows is undertaken in order to study the main differences in flow features between a plane-channel flow and a passively “controlled” flow wherein the mean friction was reduced relative to the baseline by modifying the geometry in order to generate a streamwise-periodic spanwise pressure gradient, as is the case for an oblique wavy wall. The present analysis reports POD and DMD modes for the plane channel, jointly with the application of a sparsity-promoting method, as well as a reconstruction of the Reynolds shear stress with the dynamic modes. Additionally, a dynamic link between the streamwise velocity fluctuations and the friction on the wall is sought by means of a composite approach both in the plane and wavy cases. One of the DMD modes associated with the wavy-wall friction exhibits a meandering motion which was hardly identifiable on the instantaneous friction fluctuations.
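
    For readers unfamiliar with the algorithm, the sketch below is the standard exact-DMD recipe applied to a random placeholder snapshot matrix; the rank, time step and data are assumptions, not values from the channel-flow DNS.

        import numpy as np

        rng = np.random.default_rng(2)
        n_space, n_time, r = 2000, 120, 10                 # state size, snapshots, truncation rank
        X = rng.standard_normal((n_space, n_time))         # placeholder snapshot sequence x_0..x_{T-1}

        X1, X2 = X[:, :-1], X[:, 1:]                       # snapshot pairs with x_{k+1} ~ A x_k
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)
        Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

        Atilde = Ur.T @ X2 @ Vr @ np.linalg.inv(Sr)        # low-rank representation of the operator A
        eigvals, W = np.linalg.eig(Atilde)
        modes = X2 @ Vr @ np.linalg.inv(Sr) @ W            # exact DMD modes in the full space

        dt = 0.01                                          # time step between snapshots (assumed)
        growth_rates = np.log(np.abs(eigvals)) / dt
        frequencies = np.angle(eigvals) / (2 * np.pi * dt)
        print("frequency of the least-damped mode:", frequencies[np.argmax(np.abs(eigvals))])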

  10. Three-dimensional decomposition models for carbon productivity

    International Nuclear Information System (INIS)

    Meng, Ming; Niu, Dongxiao

    2012-01-01

    This paper presents decomposition models for the change in carbon productivity, which is considered a key indicator that reflects the contributions to the control of greenhouse gases. Carbon productivity differential was used to indicate the beginning of decomposition. After integrating the differential equation and designing the Log Mean Divisia Index equations, a three-dimensional absolute decomposition model for carbon productivity was derived. Using this model, the absolute change of carbon productivity was decomposed into a summation of the absolute quantitative influences of each industrial sector, for each influence factor (technological innovation and industrial structure adjustment) in each year. Furthermore, the relative decomposition model was built using a similar process. Finally, these models were applied to demonstrate the decomposition process in China. The decomposition results reveal several important conclusions: (a) technological innovation plays a far more important role than industrial structure adjustment; (b) industry and export trade exhibit great influence; (c) assigning the responsibility for CO2 emission control to local governments, optimizing the structure of exports, and eliminating backward industrial capacity are highly essential to further increase China's carbon productivity. -- Highlights: ► Using the change of carbon productivity to measure a country's contribution. ► Absolute and relative decomposition models for carbon productivity are built. ► The change is decomposed into quantitative influences in three dimensions. ► Decomposition results can be used for improving a country's carbon productivity.

  11. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate

  12. Entry Mode and Performance of Nordic Firms

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This study investigates whether the relationship between mode of international market entry and non-location bound international experience is weaker for firms that are large or have a high foreign to total sales ratio, labeled multinational experience. Empirical evidence based on 250 foreign market entries made by Norwegian, Danish and Swedish firms suggests that the association between equity mode choice and non-location bound international experience diminishes in the presence of higher levels of multinational experience. Furthermore, firms whose entry mode choice is predicted by the model including the proposed moderating effect, on average, yield higher post-entry performance. This study sheds light on inconsistent results found in previous research investigating the impact of international experience and has practical implications for managerial decision-making.

  13. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl obtained by methods described in the text was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  14. Adjuvanted vaccines: Aspects of immunosafety and modes of action

    NARCIS (Netherlands)

    Aalst, Susan van

    2017-01-01

    New developments in vaccine design shift towards safe, though sometimes less immunogenic, subunit and synthetic antigens. Therefore, the majority of current vaccines require adjuvants to increase immunogenicity. Most adjuvants available were developed empirically and their mode of action is only

  15. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor product.
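
    For orientation, in the classical case where R is a field k (so that k[X] is a principal ideal domain) the primary decomposition referred to above takes the familiar form

        M \;=\; \bigoplus_{p} M_{(p)}, \qquad M_{(p)} \;=\; \{\, m \in M \;:\; p(X)^{n} m = 0 \ \text{for some } n \ge 1 \,\},

    the sum running over the irreducible polynomials p occurring in M; the linear-algebra fact cited in the abstract is the special case in which M is a finite-dimensional vector space, the operator is diagonalizable, and the M_{(p)} are its eigenspaces for the linear factors p(X) = X - lambda.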

  16. THE EFFECT OF DECOMPOSITION METHOD AS DATA PREPROCESSING ON NEURAL NETWORKS MODEL FOR FORECASTING TREND AND SEASONAL TIME SERIES

    Directory of Open Access Journals (Sweden)

    Subanar Subanar

    2006-01-01

    Recently, one of the central topics for the neural networks (NN) community is the issue of data preprocessing for the use of NN. In this paper, we investigate this topic, particularly the effect of the Decomposition method as data preprocessing and the use of NN for effectively modeling time series with both trend and seasonal patterns. Limited empirical studies on seasonal time series forecasting with neural networks show that some find neural networks are able to model seasonality directly and prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study in particular the effectiveness of data preprocessing, including detrending and deseasonalization by applying the Decomposition method, on NN modeling and forecasting performance. We use two kinds of data, simulated and real. The simulated data are examined for multiplicative trend and seasonality patterns. The results are compared to those obtained from the classical time series model. Our results show that a combination of detrending and deseasonalization by applying the Decomposition method is the effective data preprocessing for the use of NN in forecasting trend and seasonal time series.
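
    A minimal sketch of this kind of preprocessing, assuming monthly data and the statsmodels library (not the software used in the paper), is shown below: the series is split into trend, seasonal and irregular parts, and the detrended, deseasonalized remainder is what a neural network would then be trained on. The synthetic series is illustrative only.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import seasonal_decompose

        t = np.arange(120)                                   # ten years of monthly observations
        series = pd.Series((50 + 0.5 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 12))
                           + np.random.default_rng(5).normal(0, 1, 120),
                           index=pd.date_range("2010-01", periods=120, freq="MS"))

        decomp = seasonal_decompose(series, model="multiplicative", period=12)
        preprocessed = series / decomp.seasonal / decomp.trend   # deseasonalized and detrended
        # `preprocessed` (NaN at the ends where the centered trend is undefined) would be the
        # network's training input; forecasts are re-seasonalized and re-trended afterwards.
        print(preprocessed.dropna().describe())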

  17. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    Science.gov (United States)

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  18. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    Science.gov (United States)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, it is necessary to adjust their empirical parameters to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. Also, we optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
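
    To make the learning step concrete, the sketch below runs a structured perceptron with exact Viterbi decoding on a toy chain-structured labeling problem. The real method operates on a 3D voxel graph and uses dual decomposition for inference, so the chain structure, the features and the data here are all simplifying assumptions.

        import numpy as np

        n_labels, n_bins = 3, 4

        def decode(unary, pairwise):
            """Exact Viterbi decoding of the best label sequence on a chain."""
            T = unary.shape[0]
            score = unary[0].copy()
            back = np.zeros((T, n_labels), dtype=int)
            for t in range(1, T):
                cand = score[:, None] + pairwise + unary[t][None, :]   # (previous label, current label)
                back[t] = np.argmax(cand, axis=0)
                score = np.max(cand, axis=0)
            labels = [int(np.argmax(score))]
            for t in range(T - 1, 0, -1):
                labels.append(int(back[t, labels[-1]]))
            return np.array(labels[::-1])

        def features(x, y):
            """Joint feature vector: (observation bin, label) counts and label-transition counts."""
            f_unary = np.zeros((n_bins, n_labels))
            f_pair = np.zeros((n_labels, n_labels))
            for t in range(len(x)):
                f_unary[x[t], y[t]] += 1
                if t > 0:
                    f_pair[y[t - 1], y[t]] += 1
            return np.concatenate([f_unary.ravel(), f_pair.ravel()])

        rng = np.random.default_rng(3)
        data = []                                       # toy sequences: observation bins loosely follow the labels
        for _ in range(30):
            y = np.sort(rng.integers(0, n_labels, size=20))
            x = np.clip(y + rng.integers(0, 2, size=20), 0, n_bins - 1)
            data.append((x, y))

        w = np.zeros(n_bins * n_labels + n_labels * n_labels)
        for epoch in range(10):
            for x, y in data:
                w_unary = w[:n_bins * n_labels].reshape(n_bins, n_labels)
                w_pair = w[n_bins * n_labels:].reshape(n_labels, n_labels)
                y_hat = decode(w_unary[x], w_pair)      # best labeling under the current weights
                if not np.array_equal(y_hat, y):
                    w += features(x, y) - features(x, y_hat)   # structured perceptron update
        print("weight vector norm after training:", np.linalg.norm(w))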

  19. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    Science.gov (United States)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose organic matter they release greenhouse gasses such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  20. Modes of winter precipitation variability in the North Atlantic

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik; Saenz, J.; Fernandez, J.; Zubillaga, J. [Bilbao Univ. (Spain)

    2001-07-01

    The modes of variability of winter precipitation in the North Atlantic sector are identified by Empirical Orthogonal Functions Analysis in the NCEP/NCAR global reanalysis data sets. These modes are also present in a gridded precipitation data set over the Western Europe. The large-scale fields of atmospheric seasonal mean circulation, baroclinic activity, evaporation and humidity transport that are connected to the rainfall modes have been also analyzed in order to investigate the physical mechanisms that are causally linked to the rainfall modes. The results indicate that the leading rainfall mode is associated to the North Atlantic oscillation and represents a meridional redistribution of precipitation in the North Atlantic through displacements of the storm tracks. The second mode is related to evaporation anomalies in the Eastern Atlantic that precipitate almost entirely in the Western Atlantic. The third mode seems to be associated to meridional transport of water vapor from the Tropical Atlantic. (orig.)

  1. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news" which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  2. Theory of a beam-induced electromagnetic mode in a magnetized plasma

    International Nuclear Information System (INIS)

    Baumgaertel, K.; Sauer, K.

    1985-01-01

    The theory of a recently discovered plasma wave mode is presented. The new electromagnetic mode was detected in a non-Maxwellian high-beta plasma containing a group of energetic field-aligned electrons. The theory uses the standard method for derivation of the dispersion relation, allowing non-Maxwellian electron distributions and right-hand polarization. The theoretical dispersion relation is compared with the empirical data. This comparison confirms the existence of a right-hand circularly polarized mode propagating parallel to the external magnetic field. (D.Gy.)

  3. Multi-Mode Cavity Accelerator Structure

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yong [Yale Univ., New Haven, CT (United States); Hirshfield, Jay Leonard [Omega-P R& D, Inc., New Haven, CT (United States)

    2016-11-10

    This project aimed to develop a prototype for a novel accelerator structure comprising coupled cavities that are tuned to support modes with harmonically-related eigenfrequencies, with the goal of reaching an acceleration gradient >200 MeV/m and a breakdown rate <10^-7/pulse/meter. Phase I involved computations, design, and preliminary engineering of a prototype multi-harmonic cavity accelerator structure; plus tests of a bimodal cavity. A computational procedure was used to design an optimized profile for a bimodal cavity with high shunt impedance and low surface fields to maximize the reduction in temperature rise ΔT. This cavity supports the TM010 mode and its 2nd harmonic TM011 mode. Its fundamental frequency is at 12 GHz, to benchmark against the empirical criteria proposed within the worldwide High Gradient collaboration for X-band copper structures; namely, a surface electric field E_sur^max < 260 MV/m and pulsed surface heating ΔT^max < 56 °K. With optimized geometry, amplitude and relative phase of the two modes, reductions are found in surface pulsed heating, modified Poynting vector, and total RF power, as compared with operation at the same acceleration gradient using only the fundamental mode.

  4. Multi-Mode Cavity Accelerator Structure

    International Nuclear Information System (INIS)

    Jiang, Yong; Hirshfield, Jay Leonard

    2016-01-01

    This project aimed to develop a prototype for a novel accelerator structure comprising coupled cavities that are tuned to support modes with harmonically-related eigenfrequencies, with the goal of reaching an acceleration gradient >200 MeV/m and a breakdown rate <10^-7/pulse/meter. Phase I involved computations, design, and preliminary engineering of a prototype multi-harmonic cavity accelerator structure; plus tests of a bimodal cavity. A computational procedure was used to design an optimized profile for a bimodal cavity with high shunt impedance and low surface fields to maximize the reduction in temperature rise ΔT. This cavity supports the TM010 mode and its 2nd harmonic TM011 mode. Its fundamental frequency is at 12 GHz, to benchmark against the empirical criteria proposed within the worldwide High Gradient collaboration for X-band copper structures; namely, a surface electric field E_sur^max < 260 MV/m and pulsed surface heating ΔT^max < 56 °K. With optimized geometry, amplitude and relative phase of the two modes, reductions are found in surface pulsed heating, modified Poynting vector, and total RF power - as compared with operation at the same acceleration gradient using only the fundamental mode.

  5. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem; Methodes de decomposition de domaine pour la formulation mixte duale du probleme critique de la diffusion des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P

    2007-12-15

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  6. Fundamental and higher two-dimensional resonance modes of an Alpine valley

    Science.gov (United States)

    Ermert, Laura; Poggi, Valerio; Burjánek, Jan; Fäh, Donat

    2014-08-01

    We investigated the sequence of 2-D resonance modes of the sediment fill of Rhône Valley, Southern Swiss Alps, a strongly overdeepened, glacially carved basin with a sediment fill reaching a thickness of up to 900 m. From synchronous array recordings of ambient vibrations at six locations between Martigny and Sion we were able to identify several resonance modes, in particular, previously unmeasured higher modes. Data processing was performed with frequency domain decomposition of the cross-spectral density matrices of the recordings and with time-frequency dependent polarization analysis. 2-D finite element modal analysis was performed to support the interpretation of processing results and to investigate mode shapes at depth. In addition, several models of realistic bedrock geometries and velocity structures could be used to qualitatively assess the sensitivity of mode shape and particle motion dip angle to subsurface properties. The variability of modal characteristics due to subsurface properties makes an interpretation of the modes purely from surface observations challenging. We conclude that while a wealth of information on subsurface structure is contained in the modal characteristics, a careful strategy for their interpretation is needed to retrieve this information.

  7. A general solution strategy of modified power method for higher mode solutions

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigendecomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with a simple matrix eigendecomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: • Modified power method is applied to continuous energy Monte Carlo simulation. • A transfer matrix is introduced to generalize the modified power method. • All-mode-based population control is applied to obtain the higher eigenmodes. • Statistical fluctuations can be greatly reduced using accumulated tally results. • Fission source convergence is accelerated with higher mode solutions.
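
    As a hedged, purely deterministic analogue of the idea of extracting higher eigenmodes beyond the fundamental, the sketch below runs ordinary power iteration with eigenvector deflation on a small symmetric matrix. It is not the Monte Carlo modified power method of the record (no fission source, weight cancellation or transfer-matrix construction); it only illustrates that repeated dominant-mode extraction yields the higher modes one at a time.

```python
# Toy deterministic analogue: power iteration plus deflation recovers the first few
# eigenmodes of a small symmetric positive semi-definite matrix, one at a time.
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((6, 6))
A = B @ B.T                        # symmetric PSD toy "transfer" matrix

def power_iteration(M, iters=5000):
    v = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v @ M @ v, v            # Rayleigh quotient and eigenvector estimate

eigenvalues = []
M = A.copy()
for _ in range(3):                 # fundamental mode plus two higher modes
    lam, v = power_iteration(M)
    eigenvalues.append(lam)
    M = M - lam * np.outer(v, v)   # deflate the converged mode

print("deflated power iteration :", np.round(eigenvalues, 6))
print("numpy eigendecomposition :", np.round(np.sort(np.linalg.eigvalsh(A))[::-1][:3], 6))
```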

  8. An investigation on thermal decomposition of DNTF-CMDB propellants

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)

    2007-12-15

    The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  9. Relationship between mode choice and the location of supermarkets – empirical analysis in Austria

    Directory of Open Access Journals (Sweden)

    Roman KLEMENTSCHITZ

    2014-03-01

    Full Text Available The main goal of the study is to gain data about shopping and mobility behaviour at small local supermarkets with a sales floor space of less than 1,000 m². Four location types were defined and discussed: rural peripheral, rural central, urban central and urban peripheral. At each location, 200 shoppers were interviewed at the exit of the supermarket, giving a total of 800 interviews carried out across all times of day and all working days of the supermarket. As expected, mode choice depends strongly on the location of the supermarket. In car-oriented settlements, which are typically found at rural peripheral locations, nearly all shoppers accessed the supermarket by car. If the expenditure per visit is weighted by the frequency of visits, the average expenditure per month and mode can be derived. The average purchase per month is more or less balanced between the modes; the difference in behaviour is that cyclists and pedestrians go shopping more frequently but spend less per visit. Additionally, the results of this study indicate the existence of a potential mode shift, especially if there is better land-use planning for supermarket locations. Furthermore, considering the given situation and a threshold of less than 5 kilograms of purchased goods, more than fifty percent of all shoppers could use non-motorised modes with an insignificant loss of travel quality. Combined with short travel distances to the next shop (the average distance is 4.9 km), a change to alternative means of transport would be relatively easy for a significant number of shoppers.

  10. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    Science.gov (United States)

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.

  11. Thermal decomposition process of silver behenate

    International Nuclear Information System (INIS)

    Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang

    2006-01-01

    The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 deg. C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 deg. C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 deg. C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles

  12. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    Science.gov (United States)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

    This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
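
    The POD step referred to above amounts to a singular value decomposition of a snapshot matrix; the hedged sketch below extracts a reduced POD basis and modal amplitudes from synthetic 1D "flow" snapshots. The Volterra kernels, the structural coupling and the aeroelastic time integration of the record are not reproduced.

```python
# Minimal sketch of proper orthogonal decomposition (POD) of a snapshot matrix via SVD.
# The snapshots here are a toy travelling wave, purely for illustration.
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 10.0, 120)
snapshots = np.array([np.sin(x - 0.8 * ti) + 0.3 * np.sin(3 * (x - 0.2 * ti))
                      for ti in t]).T              # shape (n_space, n_time)

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow

# POD modes are the left singular vectors; singular values rank their energy content.
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
energy = s**2 / np.sum(s**2)

r = np.searchsorted(np.cumsum(energy), 0.99) + 1   # modes needed for 99% of the energy
basis = U[:, :r]                                   # reduced POD basis
amplitudes = basis.T @ fluctuations                # modal amplitudes vs. time

print(f"{r} POD modes capture 99% of the fluctuation energy")
print("relative reconstruction error:",
      np.linalg.norm(basis @ amplitudes - fluctuations) / np.linalg.norm(fluctuations))
```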

  13. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.

  14. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.

    2007-12-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the efficiency in parallel of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  15. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  16. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  17. Complexity Reduction of Multiphase Flows in Heterogeneous Porous Media

    KAUST Repository

    Ghommem, Mehdi

    2015-04-22

    In this paper, we apply mode decomposition and interpolatory projection methods to speed up simulations of two-phase flows in heterogeneous porous media. We propose intrusive and nonintrusive model-reduction approaches that enable a significant reduction in the size of the subsurface flow problem while capturing the behavior of the fully resolved solutions. In one approach, we use the dynamic mode decomposition. This approach does not require any modification of the reservoir simulation code but rather post-processes a set of global snapshots to identify the dynamically relevant structures associated with the flow behavior. In the second approach, we project the governing equations of the velocity and the pressure fields on the subspace spanned by their proper-orthogonal-decomposition modes. Furthermore, we use the discrete empirical interpolation method to approximate the mobility-related term in the global-system assembly and then reduce the online computational cost and make it independent of the fine grid. To show the effectiveness and usefulness of the aforementioned approaches, we consider the SPE-10 benchmark permeability field, and present a numerical example in two-phase flow. One can efficiently use the proposed model-reduction methods in the context of uncertainty quantification and production optimization.
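
    Of the two approaches above, the dynamic mode decomposition is the purely post-processing one; the sketch below shows the standard exact-DMD recipe (SVD of the first snapshot matrix, projected one-step operator, its eigendecomposition) on synthetic data. It is a generic illustration under assumed toy dynamics, not the reservoir-simulation workflow of the record.

```python
# Bare-bones dynamic mode decomposition (DMD). Real saturation/pressure snapshots from
# a reservoir simulator would replace the synthetic data generated here.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 40
A_true = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # stable toy dynamics
X = np.empty((n, m))
X[:, 0] = rng.standard_normal(n)
for k in range(1, m):
    X[:, k] = A_true @ X[:, k - 1]                             # x_{k+1} = A_true x_k

X1, X2 = X[:, :-1], X[:, 1:]              # shifted snapshot pairs

# DMD: project the one-step map onto the leading POD subspace of X1.
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 10                                     # truncation rank (problem-dependent choice)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T
A_tilde = Ur.T @ X2 @ Vr @ np.linalg.inv(Sr)

eigvals, W = np.linalg.eig(A_tilde)        # DMD eigenvalues (discrete-time)
modes = X2 @ Vr @ np.linalg.inv(Sr) @ W    # exact DMD modes

print("leading DMD eigenvalue magnitudes:", np.round(np.abs(eigvals)[:5], 3))
```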

  18. Spontaneous Lorentz and diffeomorphism violation, massive modes, and gravity

    International Nuclear Information System (INIS)

    Bluhm, Robert; Fung Shuhong; Kostelecky, V. Alan

    2008-01-01

    Theories with spontaneous local Lorentz and diffeomorphism violation contain massless Nambu-Goldstone modes, which arise as field excitations in the minimum of the symmetry-breaking potential. If the shape of the potential also allows excitations above the minimum, then an alternative gravitational Higgs mechanism can occur in which massive modes involving the metric appear. The origin and basic properties of the massive modes are addressed in the general context involving an arbitrary tensor vacuum value. Special attention is given to the case of bumblebee models, which are gravitationally coupled vector theories with spontaneous local Lorentz and diffeomorphism violation. Mode expansions are presented in both local and spacetime frames, revealing the Nambu-Goldstone and massive modes via decomposition of the metric and bumblebee fields, and the associated symmetry properties and gauge fixing are discussed. The class of bumblebee models with kinetic terms of the Maxwell form is used as a focus for more detailed study. The nature of the associated conservation laws and the interpretation as a candidate alternative to Einstein-Maxwell theory are investigated. Explicit examples involving smooth and Lagrange-multiplier potentials are studied to illustrate features of the massive modes, including their origin, nature, dispersion laws, and effects on gravitational interactions. In the weak static limit, the massive mode and Lagrange-multiplier fields are found to modify the Newton and Coulomb potentials. The nature and implications of these modifications are examined.

  19. A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).

    Science.gov (United States)

    Chen, Yuehua; Jin, Guoyong; Liu, Zhigang

    2017-05-01

    This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.

  20. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

    The aim of this work is to understand the mechanisms involved during the decomposition of glasses by water and the consequences on the morphology of the decomposition layer, in particular in the case of a nuclear glass: the R7T7. The chemical composition of this glass being very complicated, it is difficult to know the influence of the different elements on the decomposition kinetics and on the resulting morphology, because several atoms have the same behaviour. Glasses with a simplified composition (only 5 elements) have therefore been synthesized. The morphological and structural characteristics of these glasses are given. They have then been decomposed by water. The leaching curves do not reflect the decomposition kinetics but the solubility of the different elements at every moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) heavy elements solubilization. Two decomposition layer types have also been revealed, according to the heavy-element content of the glass. (O.M.)

  1. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)₆) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)₆ complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)₆ and W(CO)₆ are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)₆ is suggested, which is sensitive enough to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.

  2. Quantitative material decomposition using spectral computed tomography with an energy-resolved photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2014-01-01

    Dual-energy computed tomography (CT) techniques have been used to decompose materials and characterize tissues according to their physical and chemical compositions. However, these techniques are hampered by the limitations of conventional x-ray detectors operated in charge integrating mode. Energy-resolved photon-counting detectors provide spectral information from polychromatic x-rays using multiple energy thresholds. These detectors allow simultaneous acquisition of data in different energy ranges without spectral overlap, resulting in more efficient material decomposition and quantification for dual-energy CT. In this study, a pre-reconstruction dual-energy CT technique based on volume conservation was proposed for three-material decomposition. The technique was combined with iterative reconstruction algorithms by using a ray-driven projector in order to improve the quality of decomposition images and reduce radiation dose. A spectral CT system equipped with a CZT-based photon-counting detector was used to implement the proposed dual-energy CT technique. We obtained dual-energy images of calibration and three-material phantoms consisting of low atomic number materials from the optimal energy bins determined by Monte Carlo simulations. The material decomposition process was accomplished by both the proposed and post-reconstruction dual-energy CT techniques. Linear regression and normalized root-mean-square error (NRMSE) analyses were performed to evaluate the quantitative accuracy of decomposition images. The calibration accuracy of the proposed dual-energy CT technique was higher than that of the post-reconstruction dual-energy CT technique, with fitted slopes of 0.97–1.01 and NRMSEs of 0.20–4.50% for all basis materials. In the three-material phantom study, the proposed dual-energy CT technique decreased the NRMSEs of measured volume fractions by factors of 0.17–0.28 compared to the post-reconstruction dual-energy CT technique. It was concluded that the
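
    The core of a volume-conservation-based three-material decomposition is a small per-voxel linear solve: two energy-bin attenuation measurements plus the constraint that the volume fractions sum to one give three equations for three unknowns. The sketch below shows only that algebra; the attenuation coefficients are hypothetical placeholders rather than calibrated values, and the iterative ray-driven reconstruction of the record is not included.

```python
# Sketch of three-material decomposition with a volume-conservation constraint.
# The basis attenuation coefficients below are placeholders, not calibrated values.
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of the basis materials in a
# low- and a high-energy bin: columns = [water, lipid, protein-like material].
mu_low  = np.array([0.227, 0.200, 0.250])
mu_high = np.array([0.184, 0.165, 0.205])

def decompose(mu_meas_low, mu_meas_high):
    """Solve for three volume fractions from two attenuation measurements + sum-to-one."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])     # 3x3 linear system
    b = np.array([mu_meas_low, mu_meas_high, 1.0])   # last row: volume conservation
    return np.linalg.solve(A, b)

# Forward-simulate a voxel that is 60% water, 30% lipid, 10% protein, then invert.
f_true = np.array([0.6, 0.3, 0.1])
f_est = decompose(mu_low @ f_true, mu_high @ f_true)
print("recovered fractions:", np.round(f_est, 3))    # -> [0.6, 0.3, 0.1]
```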

  3. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)

  4. Explaining Student Interaction and Satisfaction: An Empirical Investigation of Delivery Mode Influence

    Science.gov (United States)

    Johnson, Zachary S.; Cascio, Robert; Massiah, Carolyn A.

    2014-01-01

    How interpersonal interactions within a course affect student satisfaction differently between face-to-face and online modes is an important research question to answer with confidence. Using students from a marketing course delivered face-to-face and online concurrently, our first study demonstrates that student-to-professor and…

  5. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    Science.gov (United States)

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  6. Microbiological decomposition of bagasse after radiation pasteurization

    International Nuclear Information System (INIS)

    Ito, Hitoshi; Ishigaki, Isao

    1987-01-01

    Microbiological decomposition of bagasse was studied for upgrading it to animal feed after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulolytic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used to compare the decomposition of crude fibers. In alkali-untreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by the Pleurotus species and C. cinereus. Other species of mushrooms such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms. (author)

  7. Microbiological decomposition of bagasse after radiation pasteurization

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Hitoshi; Ishigaki, Isao

    1987-11-01

    Microbiological decomposition of bagasse was studied for upgrading it to animal feed after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulolytic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used to compare the decomposition of crude fibers. In alkali-untreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by the Pleurotus species and C. cinereus. Other species of mushrooms such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms.

  8. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selective thresholding method based on multiple iterations, which is essentially a development of EMD interval thresholding (EMD-IT): it randomly alters the samples in the noisy parts of all the corrupted intrinsic mode functions so that repeated iterations produce a better denoising result. Simulations on both synthetic signals and LIDAR signals from the real world support this method.
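
    A hedged sketch of the underlying EMD-thresholding idea is given below: decompose the noisy signal into IMFs, estimate a noise threshold for the finest IMFs, zero the sub-threshold samples and re-sum. It assumes the third-party PyEMD package provides the decomposition (any EMD routine returning an array of IMFs would do) and applies a plain hard threshold rather than the multiple-iteration interval scheme proposed in the record.

```python
# Simplified EMD-based denoising by thresholding IMF samples (not the record's method).
import numpy as np
from PyEMD import EMD   # assumed third-party dependency (package "EMD-signal")

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)       # toy "LIDAR-like" return
noisy = clean + 0.3 * rng.standard_normal(t.size)

imfs = EMD()(noisy)            # rows = IMFs, finest first (low-frequency trend last)

denoised = np.zeros_like(noisy)
for k, imf in enumerate(imfs):
    sigma = np.median(np.abs(imf)) / 0.6745               # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(imf.size))         # universal threshold
    if k < 2:
        # finest IMFs carry most of the white noise: keep only samples above threshold
        imf = np.where(np.abs(imf) > thr, imf, 0.0)
    denoised += imf

print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```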

  9. Self-decomposition of radiochemicals. Principles, control, observations and effects

    International Nuclear Information System (INIS)

    Evans, E.A.

    1976-01-01

    The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)

  10. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react
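
    The goal/decomposition structure described above maps naturally onto a small recursive data structure: each goal carries an activation condition and an ordered list of decompositions, each guarded by a gating condition, and the first decomposition whose gate holds is executed. The hedged Python sketch below shows that skeleton only; all goal names and conditions are invented for illustration, and servo loops, finite state machines, parallel execution and semaphore-based coordination are omitted.

```python
# Minimal sketch of a reactive goal decomposition hierarchy; names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

State = dict  # system/environment state passed down the hierarchy


@dataclass
class Decomposition:
    gate: Callable[[State], bool]            # gating condition
    subgoals: List["Goal"] = field(default_factory=list)
    action: Callable[[State], bool] = None   # leaf-level executable step


@dataclass
class Goal:
    name: str
    activation: Callable[[State], bool]      # global condition for this goal
    decompositions: List[Decomposition] = field(default_factory=list)

    def achieve(self, state: State) -> bool:
        if not self.activation(state):
            return False                     # report failure upward
        for dec in self.decompositions:      # evaluate gating conditions in order
            if dec.gate(state):
                if dec.action is not None:
                    return dec.action(state)
                return all(g.achieve(state) for g in dec.subgoals)
        return False


# Toy usage: point an instrument if power allows, otherwise safe the payload.
point = Goal("point_instrument", lambda s: s["power"] > 20,
             [Decomposition(gate=lambda s: True,
                            action=lambda s: s.update(pointing="target") or True)])
safe = Goal("safe_payload", lambda s: True,
            [Decomposition(gate=lambda s: True,
                           action=lambda s: s.update(pointing="sun") or True)])
observe = Goal("observe", lambda s: True,
               [Decomposition(gate=lambda s: s["power"] > 20, subgoals=[point]),
                Decomposition(gate=lambda s: True, subgoals=[safe])])

state = {"power": 35}
print(observe.achieve(state), state)   # -> True {'power': 35, 'pointing': 'target'}
```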

  11. GATS Mode 4 Negotiation and Policy Options

    Directory of Open Access Journals (Sweden)

    Kil-Sang Yoo

    2004-06-01

    Full Text Available This study reviews the characteristics and issues of GATS Mode 4 and estimates the effects of Mode 4 liberalization on the Korean economy and labor market in order to suggest policy options for Korea. Mode 4 negotiation started from a trade perspective; however, since Mode 4 involves international labor migration, it also has a migration perspective. Thus developed countries, which are competitive in the service sector, are interested in the free movement of skilled workers such as intra-company transferees and business visitors. Developing countries, which have little competitiveness in the service sector, are instead interested in the free movement of low-skilled workers. Empirical studies predict that the benefits of Mode 4 liberalization will accrue mainly to developed rather than developing countries; the latter may suffer from brain drain and a reduction in labor supply. Nevertheless, developed countries are reluctant about Mode 4 negotiation because they can already draw on skilled workers from developing countries through their own temporary visa programs. They are interested in Mode 4 as related to Mode 3, in order to ease direct investment and the movement of natural persons to developing countries. Regardless of the direction of a single undertaking of the Mode 4 negotiation, the net effects of Mode 4 liberalization on the Korean economy and labor market may be negative. The Korean initial offer on Mode 4 is the same as the UR offer. Since the Korean position on Mode 4 is the most defensive, it is hard to expect that it will be accepted as the single undertaking of the Mode 4 negotiation. Thus Korea has to prepare strategic package measures to minimize the costs of Mode 4 liberalization and to improve the competitiveness of its service sector.

  12. From the doctor's workshop to the iron cage? Evolving modes of physician control in US health systems.

    Science.gov (United States)

    Kitchener, Martin; Caronna, Carol A; Shortell, Stephen M

    2005-03-01

    As national health systems pursue the common goals of containing expenditure growth and improving quality, many have sought to replace autonomous modes (systems) of physician control that rely on initial professional training and subsequent peer review. A common approach has involved extending bureaucratic modes of physician control that employ techniques such as hierarchical coordination and salaried positions. This paper applies concepts from studies of professional work to frame an empirical analysis of emergent bureaucratic modes of physician control in US hospital-based systems. Conceptually, we draw from recent studies to update Scott's (Health Services Res. 17(3) (1982) 213) typology to specify three bureaucratic modes of physician control: heteronomous, conjoint, and custodial. Empirically, we use case study evidence from eight US hospital-based systems to illustrate the heterogeneity of bureaucratic modes of physician control that span each of the ideal types. The findings indicate that some influential analysts perpetuate a caricature of bureaucratic organization which underplays its capacity to provide multiple modes of physician control that maintain professional autonomy over the content of work, and present opportunities for aligning practice with social goals.

  13. Complex mode indication function and its applications to spatial domain parameter estimation

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of the complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple reference data is applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data such as data from spatial sine testing can be used. A second-stage procedure for accurate damped natural frequency and damping estimation as well as mode shape scaling is also discussed in this paper.
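
    Computationally, the CMIF is just the set of squared singular values of the FRF matrix at every spectral line; peaks of these curves indicate modes. The sketch below builds a synthetic two-mode, multi-reference FRF matrix and evaluates the CMIF with a batched SVD. The FRF model and the crude peak picking are toy choices, not the second-stage estimation procedure of the paper.

```python
# Sketch of the Complex Mode Indication Function on synthetic 3-output / 2-reference FRFs.
import numpy as np

omega = np.linspace(1.0, 100.0, 800)                   # rad/s, spectral lines
poles = [(-0.5 + 20j), (-0.8 + 55j)]                   # two lightly damped modes
phi = np.array([[1.0, 0.4], [0.7, -0.9], [0.3, 1.1]])  # mode shapes (3 outputs)
L = np.array([[0.8, 0.2], [0.1, 1.0]])                 # modal participation (2 refs)

# FRF matrix H(jw): sum over modes of residue/(jw - pole) plus the conjugate term.
H = np.zeros((omega.size, 3, 2), dtype=complex)
for r, p in enumerate(poles):
    residue = np.outer(phi[:, r], L[r])
    H += residue[None] / (1j * omega[:, None, None] - p)
    H += residue.conj()[None] / (1j * omega[:, None, None] - np.conj(p))

# CMIF: squared singular values of H at each spectral line (= eigenvalues of H^H H).
cmif = np.linalg.svd(H, compute_uv=False) ** 2          # shape (n_lines, 2)

peak_lines = cmif[:, 0].argsort()[-5:]                  # crude peak picking
print("frequencies (rad/s) near the largest primary-CMIF values:",
      np.round(np.sort(omega[peak_lines]), 1))
```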

  14. Thermal decomposition of ammonium uranate; X-ray study

    International Nuclear Information System (INIS)

    El-Fekey, S.A.; Rofail, N.H.; Khilla, M.A.

    1984-01-01

    Ammonium uranate was precipitated from a nuclear-pure uranyl nitrate solution using gaseous ammonia. Thermal decomposition of the obtained uranate, at different calcining temperatures, resulted in the formation of amorphous (A-)UO₃, β-UO₃, UO₂.₉, U₃O₈(H) and U₃O₈(O). The influence of ammonia content, occluded nitrate ions and rate of heating on the formation of these phases was studied using X-ray powder diffraction analysis. The results indicated that ammonium uranate, UO₂(OH)₂₋ₓ(ONH₄)ₓ·yH₂O, is a continuous non-stoichiometric system with no intermediate stoichiometric compounds, and that its composition varies according to the mode of preparation. The results also indicated that the rate of heating and the formation of hydrates are important factors for both UO₂.₉ and U₃O₈(O) formation. (orig.)

  15. A novel hybrid ensemble learning paradigm for tourism forecasting

    Science.gov (United States)

    Shabri, Ani

    2015-02-01

    In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. This methodology first decomposes the original visitor arrival series into several Intrinsic Mode Function (IMF) components and one residual component by the EMD technique. Then, the IMF components and the residual component are forecast separately using GMDH models whose input variables are selected by using the Partial Autocorrelation Function (PACF). The final forecast for the tourism series is produced by aggregating all the component forecasts. To evaluate the performance of the proposed EMD-GMDH methodology, the monthly data of tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.
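
    The decompose-forecast-aggregate idea can be sketched compactly; in the hedged example below the PyEMD package (an assumption) supplies the EMD, and a plain autoregressive model from statsmodels stands in for the GMDH networks, with a fixed lag order instead of PACF-based selection. Only the overall workflow is illustrated, on synthetic monthly data.

```python
# Sketch of decompose-forecast-aggregate: EMD splits the series into components,
# each component is forecast separately, and the forecasts are summed.
import numpy as np
from PyEMD import EMD                          # assumed third-party dependency
from statsmodels.tsa.ar_model import AutoReg   # AR model stands in for GMDH here

rng = np.random.default_rng(0)
n, horizon = 240, 12                           # 20 years of monthly data, 1-year horizon
t = np.arange(n)
series = 100 + 0.5 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, n)

train, test = series[:-horizon], series[-horizon:]

components = EMD()(train)                      # IMFs (and residue) of the training data

forecast = np.zeros(horizon)
for comp in components:
    model = AutoReg(comp, lags=12).fit()       # fixed lag order; PACF selection omitted
    forecast += np.asarray(model.predict(start=len(comp), end=len(comp) + horizon - 1))

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"out-of-sample MAPE of the EMD + AR ensemble: {mape:.2f}%")
```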

  16. Distinguishing zero-group-velocity modes in photonic crystals

    International Nuclear Information System (INIS)

    Ghebrebrhan, M.; Ibanescu, M.; Johnson, Steven G.; Soljacic, M.; Joannopoulos, J. D.

    2007-01-01

    We examine differences between various zero-group-velocity modes in photonic crystals, including those that arise from Bragg diffraction, anticrossings, and band repulsion. Zero group velocity occurs at points where the group velocity changes sign, and is therefore conceptually related to 'left-handed' media, in which the group velocity is opposite to the phase velocity. We consider this relationship more quantitatively in terms of the Fourier decomposition of the modes, by defining a measure of how much the "average" phase velocity is parallel to the group velocity: an anomalous region is one in which they are mostly antiparallel. We find that this quantity can be used to qualitatively distinguish different zero-group-velocity points. In one dimension, such anomalous regions are found never to occur. In higher dimensions, they are exhibited around certain zero-group-velocity points, and lead to unusual enhanced confinement behavior in microcavities.

  17. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  18. Peculiarities of formation of zirconium aluminides in hydride cycle mode

    International Nuclear Information System (INIS)

    Muradyan, G.N.

    2016-01-01

    Zirconium aluminides are promising structural materials in aerospace, mechanical engineering, the chemical industry, etc. They are promising for the manufacture of heat-resistant wires, which would improve the reliability and efficiency of electrical networks. In the present work, the results of a study of zirconium aluminide formation in the Hydride Cycle (HC) mode, developed in the Laboratory of High-Temperature Synthesis of the Institute of Chemical Physics of NAS RA, are described. The formation of zirconium aluminides in the HC proceeded according to the reaction xZrH₂ + (1-x)Al → ZrₓAl₁₋ₓ alloy + H₂↑. The samples were characterized using: chemical analysis to determine the hydrogen content (pyrolysis method); differential thermal analysis (DTA, derivatograph Q-1500, T_heating = 1000°C, rate 20°C/min); and X-ray diffraction analysis (XRD, diffractometer DRON-0.5). The influence of the ZrH₂/Al powder ratio in the reaction mixture, the compacting pressure, the temperature and the heating rate on the characteristics of the synthesized aluminides was determined. In the HC, solid solutions of Al in Zr, the single-phase aluminides ZrAl₂ and ZrAl₃, and the hydride Zr₃AlH₄.₄₉ were synthesized. The formation of aluminides in the HC mode took place by a solid-phase mechanism, without melting of the aluminum. During processing, heating of the initial charge up to 540°C resulted in the decomposition of zirconium hydride (ZrH₂) to HCC ZrH₁.₅, which interacted with aluminum at 630°C forming FCC alumohydride of zirconium. A further increase of the temperature up to 800°C led to complete decomposition of the formed alumohydride of zirconium. The final formation of the zirconium aluminide occurred at 1000-1100°C at the end of the HC process. Conclusion: in the synthesis of zirconium aluminides, the HC mode has several significant advantages over the conventional modes: lower operating temperatures (1000°C instead of 1800°C); shorter duration (1.5-2 hours instead of tens of hours); the availability of

  19. Kinetics of thermal decomposition of aluminium hydride: I-non-isothermal decomposition under vacuum and in inert atmosphere (argon)

    International Nuclear Information System (INIS)

    Ismail, I.M.K.; Hawkins, T.

    2005-01-01

    Recently, interest in aluminium hydride (alane) as a rocket propulsion ingredient has been renewed due to improvements in its manufacturing process and an increase in thermal stability. When alane is added to solid propellant formulations, rocket performance is enhanced and the specific impulse increases. Preliminary work was performed at AFRL on the characterization and evaluation of two alane samples. Decomposition kinetics were determined from gravimetric TGA data and volumetric vacuum thermal stability (VTS) results. Chemical analysis showed the samples had 88.30% (by weight) aluminium and 9.96% hydrogen. The average density, as measured by helium pycnometry, was 1.486 g/cc. Scanning electron microscopy showed that the particles were mostly composed of sharp-edged crystallographic polyhedra such as simple cubes, cubic octahedra and hexagonal prisms. Thermogravimetric analysis was utilized to investigate the decomposition kinetics of alane in an argon atmosphere and to shed light on the mechanism of alane decomposition. Two kinetic models were successfully developed and used to propose a mechanism for the complete decomposition of alane and to predict its shelf-life during storage. Alane decomposes in two steps. The slowest (rate-determining) step is solely controlled by solid-state nucleation of aluminium crystals; the fastest step is due to growth of the crystals. Thus, during decomposition, hydrogen gas is liberated and the initial polyhedral AlH₃ crystals yield a final mix of amorphous aluminium and aluminium crystals. After establishing the kinetic model, prediction calculations indicated that alane can be stored in an inert atmosphere at temperatures below 10 deg. C for long periods of time (e.g., 15 years) without significant decomposition. After 15 years of storage, the kinetic model predicts ∼0.1% decomposition, but storage at higher temperatures (e.g. 30 deg. C) is not recommended
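
    To illustrate how such a kinetic model translates into a shelf-life estimate, the sketch below converts an Arrhenius rate constant into the fraction decomposed after a given storage time. The pre-exponential factor and activation energy are hypothetical placeholders (chosen only so that the 10 °C / 15-year figure lands near the ~0.1% quoted above), and a simple first-order rate law replaces the two-step nucleation-growth model actually fitted in the study.

```python
# Illustration only: Arrhenius rate constant -> long-term decomposition estimate.
# A and E are hypothetical placeholders, not the fitted parameters of the record.
import numpy as np

R = 8.314        # J/(mol*K)
A = 3.0e10       # 1/s, hypothetical pre-exponential factor
E = 120.0e3      # J/mol, hypothetical activation energy

def fraction_decomposed(T_celsius, years):
    k = A * np.exp(-E / (R * (T_celsius + 273.15)))    # Arrhenius rate constant
    t = years * 365.25 * 24 * 3600                     # storage time in seconds
    return 1.0 - np.exp(-k * t)                        # simple first-order conversion

for T in (10, 30, 50):
    print(f"{T:2d} degC, 15 yr -> {100 * fraction_decomposed(T, 15):.3f}% decomposed")
```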

  20. An integrated condition-monitoring method for a milling process using reduced decomposition features

    International Nuclear Information System (INIS)

    Liu, Jie; Wu, Bo; Hu, Youmin; Wang, Yan

    2017-01-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification. (paper)
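
    The feature side of such a pipeline is easy to sketch: compute the Shannon entropy of the normalised power spectrum of each decomposed mode, stack the entropies into a feature matrix, and reduce it with PCA. In the hedged example below, synthetic narrow-band components stand in for the variational-mode-decomposition output and scikit-learn supplies the PCA; the VMD itself and the probabilistic neural network classifier are not reproduced.

```python
# Sketch of the feature pipeline only: spectral entropy per mode, then PCA reduction.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fs = 5000.0                                   # sampling rate, Hz (hypothetical)

def fake_modes(chatter):
    """Stand-in for VMD output: three narrow-band components, noisier under 'chatter'."""
    t = np.arange(0, 0.5, 1 / fs)
    noise = 0.8 if chatter else 0.1
    return [np.sin(2 * np.pi * f0 * t) + noise * rng.standard_normal(t.size)
            for f0 in (120.0, 600.0, 1500.0)]

def spectral_entropy(x):
    """Shannon entropy of the normalised Welch power spectrum."""
    _, pxx = welch(x, fs=fs, nperseg=256)
    p = pxx / pxx.sum()
    return -np.sum(p * np.log2(p + 1e-12))

# Feature matrix: one row per signal, one spectral entropy per decomposed mode.
labels = [0] * 20 + [1] * 20                  # 0 = stable, 1 = chatter (toy labels)
features = np.array([[spectral_entropy(m) for m in fake_modes(bool(y))] for y in labels])

reduced = PCA(n_components=2).fit_transform(features)   # dimensionality reduction
print("reduced feature matrix shape:", reduced.shape)
print("class means in PCA space:", reduced[:20].mean(axis=0), reduced[20:].mean(axis=0))
```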

  1. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the attractive gluon force.

  2. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the attractive gluon force. (orig.)

  3. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, the fine-scale variation in microclimate (and hence water availability) resulting from slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22), where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
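
    Decomposition rates of the kind mentioned at the end of the abstract are usually obtained by fitting a single-pool negative exponential model M(t) = M0·exp(-k·t) to the litter-bag mass-loss data. The sketch below does exactly that with made-up mass fractions at the record's collection times; it illustrates the fitting step, not the study's data.

```python
# Fit a single-pool negative exponential decay to remaining litter mass fractions.
import numpy as np
from scipy.optimize import curve_fit

months = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 12.0])                 # collection times
mass_remaining = np.array([1.00, 0.93, 0.88, 0.78, 0.66, 0.52])    # fractions, toy data

def single_pool(t, k):
    # M0 is fixed at 1 because mass is expressed as a fraction of the initial mass.
    return np.exp(-k * t)

(k_fit,), cov = curve_fit(single_pool, months, mass_remaining, p0=[0.05])
print(f"k = {k_fit:.3f} per month (annual k = {12 * k_fit:.2f})")
```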

  4. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available A hybrid of Ensemble Empirical Mode Decomposition (EEMD) and Least Squares Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. EEMD is first utilized to decompose the original wind speed time series into a set of subseries. Then LSSVM models are established to forecast these subseries. The partial autocorrelation function is adopted to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM model for each subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries to give the final wind speed prediction. The performance of the hybrid model is evaluated based on six metrics. Compared with the LSSVM, Back Propagation Neural Network (BP), Auto-Regressive Integrated Moving Average (ARIMA), hybrid Empirical Mode Decomposition (EMD) with LSSVM, and hybrid EEMD with ARIMA models, the wind speed forecasting results show that the proposed hybrid model outperforms these models in terms of the six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.

  5. Wavelet Entropy-Based Traction Inverter Open Switch Fault Diagnosis in High-Speed Railways

    Directory of Open Access Journals (Sweden)

    Keting Hu

    2016-03-01

    Full Text Available In this paper, a diagnosis plan is proposed to settle the detection and isolation problem of open switch faults in the traction inverters of high-speed railway traction systems. Five entropy forms are discussed and compared with the traditional fault detection methods, namely, the discrete wavelet transform and the discrete wavelet packet transform. The traditional fault detection methods cannot efficiently detect the open switch faults in traction inverters because of the low resolution or the sudden change of the current. The performances of Wavelet Packet Energy Shannon Entropy (WPESE), Wavelet Packet Energy Tsallis Entropy (WPETE) with different non-extensive parameters, Wavelet Packet Energy Shannon Entropy with a specific sub-band (WPESE3,6), Empirical Mode Decomposition Shannon Entropy (EMDESE), and Empirical Mode Decomposition Tsallis Entropy (EMDETE) with non-extensive parameters in detecting the open switch fault are evaluated by an evaluation parameter. Comparison experiments are carried out to select the best entropy form for traction inverter open switch fault detection. In addition, the DC component is adopted to isolate the faulty Insulated Gate Bipolar Transistor (IGBT). The simulation experiments show that the proposed plan can diagnose single and simultaneous open switch faults correctly and in a timely manner.
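
    As a hedged illustration of the WPESE feature used above, the sketch below decomposes a synthetic phase current into terminal wavelet-packet nodes with PyWavelets, forms the normalised node-energy distribution, and takes its Shannon entropy; a crude half-wave dropout stands in for an open-switch fault. The wavelet, decomposition level and signal are arbitrary choices, not those of the record.

```python
# Wavelet Packet Energy Shannon Entropy (WPESE) of a synthetic phase current.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 10000.0
t = np.arange(0, 0.2, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)

def wpese(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    # Energy of each terminal node, ordered by frequency band.
    energies = np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])
    p = energies / energies.sum()              # normalised energy distribution
    return -np.sum(p * np.log2(p + 1e-12))     # Shannon entropy of the distribution

healthy = wpese(current)
# Crude stand-in for an open-switch fault: one half-wave of the current is lost.
faulty_current = np.where(current < 0, 0.0, current)
print("WPESE healthy:", round(healthy, 3), " faulty:", round(wpese(faulty_current), 3))
```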

  6. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  7. Thermal decomposition of uranyl nitrate hexahydrate. Study of intermediate reaction products; Decomposition thermique du nitrate d'uranyle hexahydrate etude des intermediaires de cette decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Chottard, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1970-07-01

    The thermal decomposition of uranyl nitrate hexahydrate has been carried out at constant pressure and constant rate of reaction. The following intermediary products have been shown to exist and have been isolated: UO{sub 2}(NO{sub 3}){sub 2}.3H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2}. 2H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2}. H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2} and UO{sub 3}. These products, together with the hexahydrate UO{sub 2} (NO{sub 3}){sub 2}.6H{sub 2}O, have been studied by: - X-ray diffraction, using the Debye-Scherrer method. - infra-red spectrography: determination of the type of bonding for the water and the nitrate groups. - nuclear magnetic resonance: study of the mobility of water molecules. The main results concern: - the water molecule bonds in the series of hydrates with 6, 3 and 2 H{sub 2}O. - isolation and characterization of uranyl nitrate monohydrate, together with the determination of its molecular structure. - the mobility of the water molecules in the series of the hydrates with 6, 3 and 2 H{sub 2}O. An analysis is made of the complementary results given by infra-red spectroscopy and nuclear magnetic resonance; they are interpreted for the whole of the hydrate series. [French] La decomposition thermique du nitrate d'uranyle hexahydrate a ete effectuee en operant a pression et vitesse de decomposition constantes. Les produits intermediaires suivants ont ete mis en evidence et isoles: UO{sub 2}(NO{sub 3}){sub 2}, 3H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2}, 2H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2},H{sub 2}O; UO{sub 2}(NO{sub 3}){sub 2} et UO{sub 3}. Ces composes, ainsi que l'hexahydrate UO{sub 2}(NO{sub 3} ){sub 2}, 6H{sub 2}O ont ete etudies par: - diffraction des rayons X, selon la methode Debye-Scherrer. - spectrographie infra-rouge: determination des modes de liaison de l'eau et des groupements nitrate. - resonance magnetique nucleaire: etude de la mobilite des molecules d'eau. Les principaux resultats portent sur: - les liaisons des molecules d'eau dans la

  8. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been carried out. The decomposition was performed at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. Over the temperature range 400-700°C, the reaction rate constant, and hence the extent of decomposition, increases with temperature. The optimum NaOH/monazite ratio was 1.5 and the optimum time was 3 hours. The relation between the NaOH/monazite ratio (x) and the conversion (y) follows the polynomial equation y = 0.1579x^2 - 0.2855x + 0.8301. The decomposition reaction of monazite with NaOH is second order; the relationship between temperature (T) and the reaction rate constant (k) is k = 448.541 e^(-1006.8/T), i.e. ln k = -1006.8/T + 6.106, with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
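
    As a quick consistency check of the figures quoted above, the reported Arrhenius fit and conversion polynomial can be evaluated directly. The sketch below (Python/NumPy; function names are mine and the units of k follow the original study, which does not restate them) simply re-computes those quantities and is not part of the original work.

```python
import numpy as np

def rate_constant(T_kelvin):
    """Arrhenius rate constant from the reported fit: ln k = 6.106 - 1006.8/T."""
    return np.exp(6.106 - 1006.8 / T_kelvin)

def conversion_from_ratio(naoh_to_monazite):
    """Reported polynomial relating the NaOH/monazite ratio (x) to conversion (y)."""
    x = naoh_to_monazite
    return 0.1579 * x**2 - 0.2855 * x + 0.8301

for T in (673.0, 873.0, 973.0):       # roughly 400, 600 and 700 degrees C
    print(f"T = {T:.0f} K  ->  k = {rate_constant(T):.1f}")
print("conversion at the optimum ratio 1.5:", round(conversion_from_ratio(1.5), 3))
```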

  9. Fuel-coolant interactions in a jet contact mode

    International Nuclear Information System (INIS)

    Konishi, K.; Isozaki, M.; Imahori, S.; Kondo, S.; Furutani, A.; Brear, D.J.

    1994-01-01

    Molten fuel-coolant interactions in a jet contact mode were studied with respect to the safety of liquid-metal-cooled fast reactors (LMFRs). From a series of molten Wood's metal (melting point: 79 deg. C, density: ~8400 kg/m 3 ) jet-water interaction experiments, several distinct modes of interaction behavior were observed for various combinations of initial temperature conditions of the two fluids. A semi-empirical model for a minimum film boiling temperature criterion was developed and used to reasonably explain the different interaction modes. It was concluded that energetic jet-water interactions are only possible under relatively narrow initial thermal conditions. Preliminary extrapolation of the present results to an oxide fuel-sodium system suggests that mild interactions with short breakup length and coolable debris formation should be most likely in LMFRs. (author)

  10. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. Combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  11. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. Combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  12. Rock Fracture Toughness Study Under Mixed Mode I/III Loading

    Science.gov (United States)

    Aliha, M. R. M.; Bahmani, A.

    2017-07-01

    Fracture growth in underground rock structures occurs under complex stress states, which typically include the in- and out-of-plane sliding deformation of jointed rock masses before catastrophic failure. However, the lack of a comprehensive theoretical and experimental fracture toughness study for rocks under contributions of out-of-plane deformation (i.e. mode III) is one of the shortcomings of this field. Therefore, in this research the mixed mode I/III fracture toughness of a typical rock material is investigated experimentally by means of a novel cracked disc specimen subjected to bend loading. It was shown that the specimen can provide full combinations of modes I and III, and consequently a complete set of mixed mode I/III fracture toughness data was determined for the tested marble rock. Moving from pure mode I towards pure mode III, the fracture load increased; however, the corresponding fracture toughness value became smaller. The obtained experimental fracture toughness results were finally predicted using theoretical and empirical fracture models.

  13. Automatic vibration mode selection and excitation; combining modal filtering with autoresonance

    Science.gov (United States)

    Davis, Solomon; Bucher, Izhak

    2018-02-01

    Autoresonance is a well-known nonlinear feedback method used for automatically exciting a system at its natural frequency. Though highly effective in exciting single degree of freedom systems, in its simplest form it lacks a mechanism for choosing the mode of excitation when more than one is present. In this case a single mode will be automatically excited, but this mode cannot be chosen or changed. In this paper a new method for automatically exciting a general second-order system at any desired natural frequency using Autoresonance is proposed. The article begins by deriving a concise expression for the frequency of the limit cycle induced by an Autoresonance feedback loop enclosed on the system. The expression is based on modal decomposition, and provides valuable insight into the behavior of a system controlled in this way. With this expression, a method for selecting and exciting a desired mode naturally follows by combining Autoresonance with Modal Filtering. By taking various linear combinations of the sensor signals, by orthogonality one can "filter out" all the unwanted modes effectively. The desired mode's natural frequency is then automatically reflected in the limit cycle. In experiment the technique has proven extremely robust, even if the amplitude of the desired mode is significantly smaller than the others and the modal filters are greatly inaccurate.
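
    The modal filtering step described above boils down to applying a left inverse of the mode-shape matrix to the sensor signals, so each output channel carries a single modal coordinate that the autoresonance loop can then lock onto. A small self-contained sketch (Python/NumPy) with an invented three-sensor, three-mode example, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-sensor, 3-mode example: columns of Phi are the mode shapes.
Phi = np.array([[ 0.33,  0.74,  0.59],
                [ 0.59,  0.33, -0.74],
                [ 0.74, -0.59,  0.33]])

t = np.linspace(0, 1, 2000)
q = np.vstack([np.sin(2 * np.pi * 12 * t),        # modal coordinates at 12, 45, 80 Hz
               0.2 * np.sin(2 * np.pi * 45 * t),
               0.1 * np.sin(2 * np.pi * 80 * t)])
y = Phi @ q                                        # simulated sensor signals (3 x N)

W = np.linalg.pinv(Phi)                            # modal filter: one row per target mode
q_hat = W @ y                                      # recovered modal coordinates
print(np.allclose(q_hat, q, atol=1e-10))           # True: the unwanted modes are filtered out
```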

  14. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  15. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
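
    For concreteness, the generic Shapley (Siegel) decomposition of the change in a purely multiplicative aggregate can be written in a few lines; the sketch below (Python; the function and the three-factor example are mine) computes one additive contribution per factor, which is the quantity that the tables referred to above encode for r = 2, 3 or 4 factors.

```python
from itertools import combinations
from math import factorial

def shapley_decomposition(x0, x1):
    """Shapley/Siegel decomposition of the change in a product of factors.

    x0, x1: base-period and end-period values of each factor.
    Returns one additive contribution per factor; they sum to prod(x1) - prod(x0).
    """
    n = len(x0)
    def value(subset):
        # aggregate with the factors in `subset` at end-period values, the rest at base period
        v = 1.0
        for i in range(n):
            v *= x1[i] if i in subset else x0[i]
        return v
    contrib = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        c = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                c += w * (value(set(s) | {i}) - value(set(s)))
        contrib.append(c)
    return contrib

# Example with r = 3 factors (e.g. activity, structure, intensity in energy IDA)
base, end = (100.0, 0.40, 2.0), (120.0, 0.35, 1.8)
parts = shapley_decomposition(base, end)
print(parts, sum(parts), 120 * 0.35 * 1.8 - 100 * 0.40 * 2.0)   # contributions add up exactly
```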

  16. Component mode synthesis methods for 3-D heterogeneous core calculations applied to the mixed-dual finite element solver MINOS

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A.M.; Lautard, J.J.; Van Criekingen, S.

    2007-01-01

    This paper describes a new technique for determining the pin power in heterogeneous three-dimensional calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis (CMS) technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions. In the first one (the CMS method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (factorized CMS method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher-order eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well suited to heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher-order angular approximations (particularly easily to an SPN approximation), the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with uranium dioxide and mixed oxide assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)

  17. Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.

    Science.gov (United States)

    Vera, J Fernando; Macías, Rodrigo

    2017-06-01

    One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode [Formula: see text] dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
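
    The within-block and between-block dispersions mentioned above can be computed directly from the dissimilarity matrix using the standard scatter identity, with no coordinates required. A minimal sketch follows (Python/NumPy; the exact normalisation and criteria used in the paper may differ).

```python
import numpy as np

def within_between_dispersion(D, labels):
    """Within- and between-cluster dispersion from a one-mode dissimilarity matrix.

    Uses the identity  sum_i ||x_i - c_k||^2 = (1/(2 n_k)) * sum_{i,j in C_k} d_ij^2,
    so only the (squared) dissimilarities are needed, never the coordinates.
    """
    D = np.asarray(D, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    total = (D ** 2).sum() / (2 * n)              # analogue of the total point scatter
    within = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        within += (D[np.ix_(idx, idx)] ** 2).sum() / (2 * len(idx))
    return within, total - within

# Tiny example: two well-separated pairs of objects
D = np.array([[0, 1, 9, 9],
              [1, 0, 9, 9],
              [9, 9, 0, 1],
              [9, 9, 1, 0]], dtype=float)
print(within_between_dispersion(D, [0, 0, 1, 1]))  # small within, large between
```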

  18. Ghost microscope imaging system from the perspective of coherent-mode representation

    Science.gov (United States)

    Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan

    2018-03-01

    The coherent-mode representation theory of partially coherent fields is first used to analyze a two-arm ghost microscope imaging system. It is shown that the imaging quality of the generated images depends crucially on the distribution of the decomposition coefficients of the imaged object when the light source is fixed. This theory is also suitable for demonstrating how the distance the object is moved away from the original plane affects imaging quality. Our results are verified theoretically and experimentally.

  19. Mode of action of the phenylpyrrole fungicide fenpiclonil in Fusarium sulphureum

    NARCIS (Netherlands)

    Jespers, A.B.K.

    1994-01-01

    In the last few decades, plant disease control has become heavily dependent on fungicides. Most modern fungicides were discovered by random synthesis and empirical optimization of lead structures. In general, these fungicides have specific modes of action and meet modern environmental

  20. Explaining the power-law distribution of human mobility through transportation modality decomposition

    Science.gov (United States)

    Zhao, Kai; Musolesi, Mirco; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu

    2015-03-01

    Human mobility has been empirically observed to exhibit Lévy flight characteristics and behaviour with power-law distributed jump sizes. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we propose to explain the Lévy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation for the emergence of the Lévy walk patterns that characterize human mobility.
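
    The central claim, that a weighted mixture of per-mode lognormal jump-size distributions can masquerade as a power law, is easy to reproduce numerically. The sketch below (Python/NumPy) samples such a mixture and prints a few points of its upper tail; the mode parameters and weights are invented for illustration and are not taken from the paper's GPS datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-mode jump-size distributions (lognormal), jump sizes in metres
modes = {"walk": (5.0, 0.8, 0.55),    # (mu, sigma, mixture weight)
         "bike": (7.0, 0.7, 0.25),
         "car":  (9.0, 0.9, 0.20)}

n = 200_000
choices = rng.choice(len(modes), size=n, p=[m[2] for m in modes.values()])
params = np.array([(m[0], m[1]) for m in modes.values()])
jumps = rng.lognormal(params[choices, 0], params[choices, 1])

# Empirical complementary CDF: over the upper tail it looks roughly linear on
# log-log axes, i.e. the mixture masquerades as a power law even though each
# component is lognormal.
x = np.sort(jumps)
ccdf = 1.0 - np.arange(1, n + 1) / n
for q in (0.9, 0.99, 0.999):
    i = int(q * n)
    print(f"P(jump > {x[i]:9.0f} m) = {ccdf[i]:.4f}")
```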

  1. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    Science.gov (United States)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
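
    Assuming the common gamma-distributed rate-constant form of the reactivity continuum model, the fraction of initial mass remaining is g(t) = (a/(a+t))^v, and the two parameters can be fitted to a litter-bag time series in a few lines. The sketch below uses SciPy on entirely hypothetical data and is meant only to illustrate the model form, not the paper's parameterisation.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_mass_remaining(t, a, v):
    """Reactivity continuum (gamma-distribution) model: fraction of initial mass left."""
    return (a / (a + t)) ** v

# Hypothetical litter-bag data: time in years, fraction of initial mass remaining
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 10.0])
m_obs = np.array([1.00, 0.78, 0.66, 0.52, 0.40, 0.34, 0.27])

(a_hat, v_hat), _ = curve_fit(rc_mass_remaining, t_obs, m_obs,
                              p0=(1.0, 1.0), bounds=(0, np.inf))
print(f"a = {a_hat:.2f} yr, v = {v_hat:.2f}")
# The apparent first-order rate at time t is k(t) = v / (a + t): fast early, slow later.
```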

  2. The Coupling Structure Features Between (2,1) NTM and (1,1) Internal Mode in EAST Tokamak

    International Nuclear Information System (INIS)

    Shi Tonghui; Wan Baonian; Sun Youwen; Shen Biao; Qian Jinping; Hu Liqun; Chen Kaiyun; Liu Yong

    2015-01-01

    In the discharge of EAST tokamak, it is observed that the (2,1) neoclassical tearing mode (NTM) is triggered by mode coupling with a (1,1) internal mode. Using the singular value decomposition (SVD) method for soft X-ray emission and for electron cyclotron emission (ECE), the coupling spatial structures and the coupling process between these two modes are analyzed in detail. The results of SVD for ECE reveal that the phase difference between these two modes equals zero. This is consistent with the perfect coupling condition. Finally, performing statistical analysis of r 1/1, ξ 1/1 and w 2/1, we find that r 1/1 more accurately represents the coupling strength than ξ 1/1, and r 1/1 is also strongly related to the (2,1) NTM triggering, where r 1/1 is the width of the (1,1) internal mode, ξ 1/1 is the perturbed amplitude of the (1,1) internal mode, and w 2/1 denotes the magnetic island width of the (2,1) NTM. (paper)
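
    The SVD used above is the standard biorthogonal decomposition of a multichannel diagnostic record into paired spatial structures ("topos") and time traces ("chronos"). A self-contained sketch on synthetic data follows (Python/NumPy; the chord geometry, mode shapes and frequencies are invented and are not EAST measurements).

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.05, 5000)                      # 50 ms record
chan_pos = np.linspace(-1, 1, 16)                   # 16 hypothetical viewing chords

# Two coherent modes with different spatial structures and frequencies, plus noise
spatial_1 = np.exp(-((chan_pos - 0.3) / 0.2) ** 2)  # core-localised, (1,1)-like
spatial_2 = np.sin(np.pi * chan_pos)                # odd structure, (2,1)-like
signal = (np.outer(spatial_1, np.sin(2 * np.pi * 3e3 * t)) +
          0.5 * np.outer(spatial_2, np.sin(2 * np.pi * 5e3 * t)) +
          0.05 * rng.standard_normal((chan_pos.size, t.size)))

U, s, Vt = np.linalg.svd(signal, full_matrices=False)
print("leading singular values:", np.round(s[:4], 2))
# U[:, k] are the spatial structures ("topos") and Vt[k] the time traces ("chronos");
# comparing the phases of the dominant chronos pairs is what reveals mode coupling.
```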

  3. Modes of ocean variability in the tropical Pacific as derived from GEOSAT altimetry

    International Nuclear Information System (INIS)

    Zou Jiansheng

    1993-01-01

    Satellite-derived (GEOSAT) sea surface height anomalies for the period November 1986 to August 1989 were investigated in order to extract the dominant modes of climate variability in the tropical Pacific. Four modes are identified by applying the POP technique. The first mode has a time scale of about 3 months and can be identified with the first baroclinic equatorial Kelvin wave mode. The second mode has a time scale of about six months and describes the semi-annual cycle in tropical Pacific sea level. Equatorial wave propagation is also crucial for this mode. The third mode is the annual cycle which is dominated by Ekman dynamics. Wave propagation or reflection are found to be unimportant. The fourth mode is associated with the El Nino/Southern Oscillation (ENSO) phenomenon. The ENSO mode is found to be consistent with the 'delayed action oscillator' scenario. The results are substantiated by a companion analysis of the sea surface height variability simulated with an oceanic general circulation model (OGCM) forced by observed wind stresses for the period 1961 to 1989. The modal decomposition of the sea level variability is found to be similar to that derived from the GEOSAT data. The high consistency between the satellite and the model data indicates the high potential value of satellite altimetry for climate modeling and forecasting. (orig.)

  4. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals which form a stable lithium phase in binary alloys on the formation of intermetallic species in ternary amalgams, and their effect on thermal decomposition in contact with water, is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Cd(Hg) binary amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of the Li-Cd(Hg) ternary amalgam in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and these results are compared with the decomposition of the binary amalgam Li(Hg). The decomposition rate is constant during one hour for the binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of the ternary systems. A reaction mechanism that considers an intermetallic species participating in the activated complex is proposed and a kinetic law is suggested. (author)

  5. Gear Fault Detection Based on Teager-Huang Transform

    Directory of Open Access Journals (Sweden)

    Hui Li

    2010-01-01

    Full Text Available Gear fault detection based on Empirical Mode Decomposition (EMD) and the Teager Kaiser Energy Operator (TKEO) technique is presented. This novel method is named the Teager-Huang transform (THT). EMD can adaptively decompose the vibration signal into a series of zero-mean Intrinsic Mode Functions (IMFs). TKEO can track the instantaneous amplitude and instantaneous frequency of the Intrinsic Mode Functions at any instant. The experimental results provide effective evidence that the Teager-Huang transform has better resolution than the Hilbert-Huang transform. The Teager-Huang transform can effectively diagnose gear faults, thus providing a viable processing tool for gearbox defect detection and diagnosis.
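
    The Teager-Kaiser operator at the heart of the method is a three-point formula, so applying it to every IMF produced by EMD is cheap. A minimal sketch follows (Python/NumPy); the EMD step is assumed to come from a separate library and is omitted here, so the operator is shown on a plain test tone.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]       # simple end-point padding
    return psi

# For a pure tone A*sin(2*pi*f*n/fs), psi is approximately A^2 * (2*pi*f/fs)^2,
# so amplitude and frequency changes show up immediately in the operator output.
fs, f, A = 10_000, 200.0, 1.5
n = np.arange(2000)
x = A * np.sin(2 * np.pi * f * n / fs)
print(teager_kaiser(x)[100], (A * 2 * np.pi * f / fs) ** 2)
```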

  6. Crop residue decomposition in Minnesota biochar amended plots

    OpenAIRE

    S. L. Weyers; K. A. Spokas

    2014-01-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with ...

  7. Wind turbine blades condition assessment based on vibration measurements and the level of an empirically decomposed feature

    International Nuclear Information System (INIS)

    Abouhnik, Abdelnasser; Albarbar, Alhussein

    2012-01-01

    Highlights: ► We used finite element method to model wind turbine induced vibration characteristics. ► We developed a technique for eliminating wind turbine’s vibration modulation problems. ► We use empirical mode decomposition to decompose the vibration into its fundamental elements. ► We show the area under shaft speed is a good indicator for assessing wind blades condition. ► We validate the technique under different wind turbine speeds and blade (cracks) conditions. - Abstract: Vibration based monitoring techniques are well understood and widely adopted for monitoring the condition of rotating machinery. However, in the case of wind turbines the measured vibration is complex due to the high number of vibration sources and modulation phenomenon. Therefore, extracting condition related information of a specific element e.g. blade condition is very difficult. In the work presented in this paper wind turbine vibration sources are outlined and then a three bladed wind turbine vibration was simulated by building its model in the ANSYS finite element program. Dynamic analysis was performed and the fundamental vibration characteristics were extracted under two healthy blades and one blade with one of four cracks introduced. The cracks were of length (10 mm, 20 mm, 30 mm and 40 mm), all had a consistent 3 mm width and 2 mm depth. The tests were carried out for three rotation speeds; 150, 250 and 360 r/min. The effects of the seeded faults were revealed by using a novel approach called empirically decomposed feature intensity level (EDFIL). The developed EDFIL algorithm is based on decomposing the measured vibration into its fundamental components and then determines the shaft rotational speed amplitude. A real model of the simulated wind turbine was constructed and the simulation outcomes were compared with real-time vibration measurements. The cracks were seeded sequentially in one of the blades and their presence and severity were determined by decomposing

  8. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm 2 , the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition results in two parts: one which is organic and volatile, and another which is inorganic and remains, forming an ever-thickening screen to light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect

  9. 1.7. Acid decomposition of kaolin clays of Ziddi Deposit. 1.7.1. The hydrochloric acid decomposition of kaolin clays and siallites

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Mirzoev, D.Kh.; Boboev, Kh.E.

    2016-01-01

    This part of the book is devoted to the hydrochloric acid decomposition of kaolin clays and siallites. The chemical composition of kaolin clays and siallites was determined. The influence of temperature, process duration and acid concentration on the hydrochloric acid decomposition of kaolin clays and siallites was studied. The optimal conditions of hydrochloric acid decomposition of kaolin clays and siallites were determined.

  10. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    Science.gov (United States)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main ideas for gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on the dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD on gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for the tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  11. A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting

    International Nuclear Information System (INIS)

    Tang, Ling; Yu, Lean; Wang, Shuai; Li, Jianping; Wang, Shouyang

    2012-01-01

    Highlights: ► A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. ► The hybrid ensemble method is useful to predict time series with high volatility. ► The ensemble method can be used for both one-step and multi-step ahead forecasting. - Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of “decomposition and ensemble”. This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool to predict complex time series with high volatility and irregularity.
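
    The "decomposition and ensemble" principle can be sketched compactly: decompose the series, forecast each component with its own regressor, and recombine. In the sketch below (Python), the EEMD step assumes the third-party PyEMD package, scikit-learn's standard SVR stands in for LSSVR, the series is synthetic, and the final aggregation is a plain sum rather than the second LSSVR used in the paper.

```python
import numpy as np
from PyEMD import EEMD              # assumed third-party package (EMD-signal)
from sklearn.svm import SVR         # plain SVR used here as a stand-in for LSSVR

def lagged(x, p=3):
    """Build (X, y) pairs where y[t] is predicted from the previous p values."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 300))          # hypothetical "consumption" series

imfs = EEMD().eemd(y)                             # step 1: decompose into IMFs (+ residue)
component_forecasts = []
for imf in imfs:                                  # step 2: one model per (easier) component
    X_tr, y_tr = lagged(imf)
    model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    component_forecasts.append(model.predict(imf[-3:][None, :])[0])   # one-step-ahead

print("one-step-ahead forecast:", sum(component_forecasts))           # step 3: ensemble
```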

  12. The study of Thai stock market across the 2008 financial crisis

    Science.gov (United States)

    Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik

    2016-11-01

    The cohomology theory for financial market can allow us to deform Kolmogorov space of time series data over time period with the explicit definition of eight market states in grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in case of a financial market crash acts like gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrix and modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by using a degree of cohomology group of sphere over tensor field in correlation matrix over all possible dominated stocks underlying Thai SET50 Index Futures. The empirical analysis of financial tensor network was performed with the help of empirical mode decomposition and intrinsic time scale decomposition of correlation matrix and the calculation of closeness centrality of planar graph.

  13. Multi hollow needle to plate plasmachemical reactor for pollutant decomposition

    International Nuclear Information System (INIS)

    Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.

    2001-01-01

    Modification of the classical multipin to plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor a mixture of air and pollutant flows through the needles, contrary to the classical reactor where a mixture of air and pollutant flows around the pins or through the channel plus through the hollow needles. We give the results of comparison of toluene decomposition efficiency for (a) a reactor with the main stream of a mixture through the channel around the needles and a small flow rate through the needles and (b) a modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency of toluene was increased more than six times in the modified reactor. This new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons from gasoline distillation range. An average efficiency of VOC decomposition of about 25% was reached. However, significant differences in the decomposition of various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl-tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the quality of VOC decomposition. (author)

  14. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank when generally less than 5,000 kg of waste solids remain, and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration) after nearing

  15. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank when generally less than 5,000 kg of waste solids remain, and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration

  16. A handbook of decomposition methods in analytical chemistry

    International Nuclear Information System (INIS)

    Bok, R.

    1984-01-01

    Decomposition methods for metals, alloys, fluxes, slags, calcine, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials are described from the viewpoint of sample preparation for analysis. The methods are systematized according to the decomposition principle: thermal (with the use of electricity), irradiation, and dissolution with or without chemical reactions. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references

  17. Driving of the solar p-modes by radiative pumping in the upper photosphere

    Science.gov (United States)

    Fontenla, Juan M.; Emslie, A. G.; Moore, Ronald L.

    1989-01-01

    It is shown that one viable driver of the solar p-modes is radiative pumping in the upper photosphere where the opacity is dominated by the negative hydrogen ion. This new option is suggested by the similar magnitudes of two energy flows that have been evaluated by independent empirical methods. The similarity indicates that the p-modes are radiatively pumped in the upper photosphere and therefore provide the required nonradiative cooling.

  18. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  19. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in waste water, and the effects of water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, were studied. The results show that anionic surfactants like sodium dodecylsulfate (SDS) improve the radiation decomposition yield of ortho-chlorophenol, while cationic surfactants like cetyl trimethylammonium chloride (CTAC) improve the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to those studied. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols. However, the radiation decomposition yield increased when surfactant concentrations were above the CMC. Decomposition of aromatic alcohols was more marked than that of linear alcohols. In a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals and other reactive species formed during water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbon atoms were not important factors in the radiation decomposition. When an alcohol like ortho-chlorophenol contained an additional chlorine atom, the decomposition of this compound was almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. The results of this study are useful for wastewater treatment plants based on the oxidizing effect of the hydroxyl radical, as in advanced oxidation processes, or in combined treatments such as

  20. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    Full Text Available The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD-like decomposition) of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute the eigenvalues of structured matrices such as the Hamiltonian matrix JAA^T.

  1. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    Science.gov (United States)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The models simulations were forced with observationally-based estimates of annual

  2. Models for Predicting Boundary Conditions in L-Mode Tokamak Plasma

    International Nuclear Information System (INIS)

    Siriwitpreecha, A.; Onjun, T.; Suwanna, S.; Poolyarat, N.; Picha, R.

    2009-07-01

    Full text: Models for predicting the temperature and density of ions and electrons at the plasma boundary in L-mode tokamak plasma are developed using an empirical approach and optimized against the experimental data obtained from the latest public version of the International Pedestal Database (version 3.2). It is assumed that the temperature and density at the boundary of the L-mode plasma are functions of engineering parameters such as plasma current, toroidal magnetic field, total heating power, line-averaged density, hydrogenic particle mass (A H ), major radius, minor radius, and elongation at the separatrix. Multiple regression analysis is carried out for these parameters with 86 L-mode data points from AUG (61) and JT60U (25). The RMSE of the temperature and density at the boundary of the L-mode plasma are found to be 24.41% and 18.81%, respectively. These boundary models are implemented in the BALDUR code, which will be used to simulate the L-mode plasma in the tokamak

  3. Complexity reduction of multi-phase flows in heterogeneous porous media

    KAUST Repository

    Ghommem, Mehdi

    2013-01-01

    In this paper, we apply mode decomposition and interpolatory projection methods to speed up simulations of two-phase flows in highly heterogeneous porous media. We propose intrusive and non-intrusive model reduction approaches that enable a significant reduction in the dimension of the flow problem size while capturing the behavior of the fully-resolved solutions. In one approach, we employ the dynamic mode decomposition (DMD) and the discrete empirical interpolation method (DEIM). This approach does not require any modification of the reservoir simulation code but rather postprocesses a set of global snapshots to identify the dynamically-relevant structures associated with the flow behavior. In a second approach, we project the governing equations of the velocity and the pressure fields on the subspace spanned by their proper orthogonal decomposition (POD) modes. Furthermore, we use DEIM to approximate the mobility related term in the global system assembly and then reduce the online computational cost and make it independent of the fine grid. To show the effectiveness and usefulness of the aforementioned approaches, we consider the SPE 10 benchmark permeability field and present a variety of numerical examples of two-phase flow and transport. The proposed model reduction methods can be efficiently used when performing uncertainty quantification or optimization studies and history matching.
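
    Of the ingredients listed above, the DMD step is the easiest to isolate: given a matrix of solution snapshots, the low-rank operator, its eigenvalues and the DMD modes all follow from one SVD. The code below (Python/NumPy) is a generic exact-DMD sketch run on a synthetic travelling-wave data set, not the authors' reservoir-simulation workflow.

```python
import numpy as np

def dmd(snapshots, r):
    """Exact dynamic mode decomposition of a snapshot matrix (states x times), rank r."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.conj().T @ Y @ Vt.conj().T / s             # projected low-rank operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W         # DMD modes in the full space
    return eigvals, modes

# Synthetic "saturation field" snapshots: two travelling structures plus noise
x = np.linspace(0, 1, 200)[:, None]
t = np.arange(60)[None, :]
data = (np.sin(2 * np.pi * (x - 0.01 * t)) + 0.3 * np.sin(6 * np.pi * (x + 0.02 * t))
        + 0.01 * np.random.default_rng(2).standard_normal((200, 60)))
eigvals, modes = dmd(data, r=6)
print("DMD eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
```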

  4. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  5. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The

  6. Carbon material formation on SBA-15 and Ni-SBA-15 and residue constituents during acetylene decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Hung-Lung, E-mail: hlchiang@mail.cmu.edu.tw [Department of Risk Management, China Medical University, Taichung 40402, Taiwan (China); Wu, Trong-Neng [Department of Public Health, China Medical University, Taichung 40402, Taiwan (China); Ho, Yung-Shou [Department of Applied Chemistry and Materials Science, Fooyin University, Kaohsiung 831, Taiwan (China); Zeng, Li-Xuan [Department of Risk Management, China Medical University, Taichung 40402, Taiwan (China)

    2014-07-15

    Highlights: • Acetylene was decomposed on SBA-15 and Ni-SBA-15 at 650–850 °C. • Carbon spheres and filaments were formed after acetylene decomposition. • PAHs were determined in tar and residues. • Exhaust constituents include CO{sub 2}, H{sub 2}, NO{sub x} and hydrocarbon species. - Abstract: Carbon materials including carbon spheres and nanotubes were formed from acetylene decomposition on hydrogen-reduced SBA-15 and Ni-SBA-15 at 650–850 °C. The physicochemical characteristics of SBA-15, Ni-SBA-15 and carbon materials were analyzed by field emission scanning electronic microscopy (FE-SEM), Raman spectrometry, and energy dispersive spectrometry (EDS). In addition, the contents of polyaromatic hydrocarbons (PAHs) in the tar and residue and volatile organic compounds (VOCs) in the exhaust were determined during acetylene decomposition on SBA-15 and Ni-SBA-15. Spherical carbon materials were observed on SBA-15 during acetylene decomposition at 750 and 850 °C. Carbon filaments and ball spheres were formed on Ni-SBA-15 at 650–850 °C. Raman spectroscopy revealed peaks at 1290 (D-band, disorder mode, amorphous carbon) and 1590 (G-band, graphite sp{sup 2} structure) cm{sup −1}. Naphthalene (2 rings), pyrene (4 rings), phenanthrene (3 rings), and fluoranthene (4 rings) were major PAHs in tar and residues. Exhaust constituents of hydrocarbon (as propane), H{sub 2}, and C{sub 2}H{sub 2} were 3.9–2.6/2.7–1.5, 1.4–2.8/2.6–4.3, 4.2–2.4/3.2–1.7% when acetylene was decomposed on SBA-15/Ni-SBA-15, respectively, corresponding to temperatures ranging from 650 to 850 °C. The concentrations of 52 VOCs ranged from 9359 to 5658 and 2488 to 1104 ppm for SBA-15 and Ni-SBA-15 respectively, at acetylene decomposition temperatures from 650 to 850 °C, and the aromatics contributed more than 87% fraction of VOC concentrations.

  7. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. The authors have introduced some new techniques, based on symmetry considerations, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here we utilize these techniques to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli polynomials and Euler polynomials along with the appropriate dual states

  8. Decomposition of forest products buried in landfills

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.

    2013-01-01

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g −1 dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  9. Decomposition of forest products buried in landfills

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)

    2013-11-15

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
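
    As an illustration of the two indices named in these records, here is a minimal Python sketch of how a holocellulose decomposition index and a carbon storage factor could be computed from initial and excavated sample compositions. The formulas follow the common lignin-normalized definition used in landfill decomposition studies (an assumption on my part; the records above do not give the exact equations), and all numbers are placeholders.

      def hod(c0, h0, l0, c1, h1, l1):
          """Holocellulose decomposition index: fractional loss of cellulose plus
          hemicellulose, using lignin as a conserved (recalcitrant) tracer."""
          initial_ratio = (c0 + h0) / l0   # (C + H)/L before burial
          final_ratio = (c1 + h1) / l1     # (C + H)/L after excavation
          return 1.0 - final_ratio / initial_ratio

      def csf(organic_carbon_remaining_g, dry_mass_initial_g):
          """Carbon storage factor: g of organic carbon remaining per g of dry
          material initially buried."""
          return organic_carbon_remaining_g / dry_mass_initial_g

      # Placeholder values for a hypothetical newsprint sample (percent of dry mass).
      print(hod(c0=48.0, h0=10.0, l0=24.0, c1=10.0, h1=2.0, l1=24.0))       # ~0.79
      print(csf(organic_carbon_remaining_g=0.22, dry_mass_initial_g=1.0))   # 0.22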

  10. Exposure assessment of one-year-old child to 3G tablet in uplink mode and to 3G femtocell in downlink mode using polynomial chaos decomposition

    International Nuclear Information System (INIS)

    Liorni, I; Parazzini, M; Ravazzani, P; Varsier, N; Hadjem, A; Wiart, J

    2016-01-01

    So far, the assessment of the exposure of children aged 0–2 years to relatively new radio-frequency (RF) technologies, such as tablets and femtocells, remains an open issue. This study aims to analyse the exposure of a one-year-old child to these two sources, tablets and femtocells, operating in uplink (tablet) and downlink (femtocell) modes, respectively. In detail, a realistic model of an infant has been used to model separately the exposures due to (i) a 3G tablet emitting at the frequency of 1940 MHz (uplink mode) placed close to the body and (ii) a 3G femtocell emitting at 2100 MHz (downlink mode) placed at a distance of at least 1 m from the infant body. For both RF sources, the input power was set to 250 mW. The variability of the exposure due to the variation of the position of the RF sources with respect to the infant body has been studied by stochastic dosimetry, based on polynomial chaos to build surrogate models of both whole-body and tissue specific absorption rate (SAR), which makes it easy and quick to investigate the exposure in a full range of possible positions of the sources. The major outcomes of the study are: (1) the maximum values of the whole-body SAR (WB SAR) have been found to be 9.5 mW kg⁻¹ in uplink mode and 65 μW kg⁻¹ in downlink mode, i.e. within the limits of the ICNIRP 1998 Guidelines; (2) in both uplink and downlink mode the highest SAR values were approximately found in the same tissues, i.e. in the skin, eye and penis for the whole-tissue SAR and in the bone, skin and muscle for the peak SAR; (3) the change in the position of both the 3G tablet and the 3G femtocell significantly influences the infant exposure. (paper)

  11. Exposure assessment of one-year-old child to 3G tablet in uplink mode and to 3G femtocell in downlink mode using polynomial chaos decomposition

    Science.gov (United States)

    Liorni, I.; Parazzini, M.; Varsier, N.; Hadjem, A.; Ravazzani, P.; Wiart, J.

    2016-04-01

    So far, the assessment of the exposure of children aged 0-2 years to relatively new radio-frequency (RF) technologies, such as tablets and femtocells, remains an open issue. This study aims to analyse the exposure of a one-year-old child to these two sources, tablets and femtocells, operating in uplink (tablet) and downlink (femtocell) modes, respectively. In detail, a realistic model of an infant has been used to model separately the exposures due to (i) a 3G tablet emitting at the frequency of 1940 MHz (uplink mode) placed close to the body and (ii) a 3G femtocell emitting at 2100 MHz (downlink mode) placed at a distance of at least 1 m from the infant body. For both RF sources, the input power was set to 250 mW. The variability of the exposure due to the variation of the position of the RF sources with respect to the infant body has been studied by stochastic dosimetry, based on polynomial chaos to build surrogate models of both whole-body and tissue specific absorption rate (SAR), which makes it easy and quick to investigate the exposure in a full range of possible positions of the sources. The major outcomes of the study are: (1) the maximum values of the whole-body SAR (WB SAR) have been found to be 9.5 mW kg⁻¹ in uplink mode and 65 μW kg⁻¹ in downlink mode, i.e. within the limits of the ICNIRP 1998 Guidelines; (2) in both uplink and downlink mode the highest SAR values were approximately found in the same tissues, i.e. in the skin, eye and penis for the whole-tissue SAR and in the bone, skin and muscle for the peak SAR; (3) the change in the position of both the 3G tablet and the 3G femtocell significantly influences the infant exposure.
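
    To make the stochastic-dosimetry step above concrete, here is a minimal, non-intrusive polynomial chaos sketch in Python: a tensor Legendre basis is fit by least squares to SAR values computed at sampled source positions, giving a cheap surrogate that can then be evaluated over the full range of positions. The function evaluate_sar stands in for the full-wave dosimetry solver and is purely hypothetical; the basis degree and number of input dimensions are illustrative, not the study's actual settings.

      import numpy as np
      from itertools import product
      from numpy.polynomial import legendre

      def legendre_design(xi, degree):
          """Design matrix of tensor-product Legendre polynomials (total-degree cap).
          xi: (n_samples, n_dims) inputs scaled to [-1, 1]."""
          n, d = xi.shape
          multi_idx = [m for m in product(range(degree + 1), repeat=d) if sum(m) <= degree]
          A = np.ones((n, len(multi_idx)))
          for j, m in enumerate(multi_idx):
              for k, deg in enumerate(m):
                  coeffs = np.zeros(deg + 1)
                  coeffs[deg] = 1.0
                  A[:, j] *= legendre.legval(xi[:, k], coeffs)
          return A

      def evaluate_sar(positions):
          """Hypothetical stand-in for the dosimetry solver: SAR vs. source position."""
          x, y = positions[:, 0], positions[:, 1]
          return 9.5e-3 * np.exp(-0.5 * (x**2 + y**2)) * (1.0 + 0.1 * x * y)

      rng = np.random.default_rng(0)
      xi_train = rng.uniform(-1.0, 1.0, size=(200, 2))     # scaled source positions
      sar_train = evaluate_sar(xi_train)                   # the expensive step in practice

      A = legendre_design(xi_train, degree=3)
      coef, *_ = np.linalg.lstsq(A, sar_train, rcond=None) # PCE coefficients

      # Surrogate evaluation over any set of positions is now cheap.
      xi_test = rng.uniform(-1.0, 1.0, size=(5, 2))
      print(legendre_design(xi_test, degree=3) @ coef)     # surrogate predictions
      print(evaluate_sar(xi_test))                         # reference values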

  12. Parallel processing for pitch splitting decomposition

    Science.gov (United States)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
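
    The geometric distribution described above can be sketched very simply: polygon operations whose influence is bounded by some interaction range are run independently on overlapping tiles, and the results are merged afterwards. The sketch below (Python, with hypothetical tile_bounds and process_tile helpers defined in place) illustrates only the tiling-with-halo idea; real DPT decomposition and the coloring step involve far more machinery.

      from concurrent.futures import ProcessPoolExecutor

      INFLUENCE_RANGE = 0.5  # assumed maximum range of a local decomposition rule (um)

      def tile_bounds(xmin, ymin, xmax, ymax, tile, halo):
          """Split a layout bounding box into tiles, each padded by a halo equal to
          the interaction range so local operations see all relevant neighbors."""
          tiles = []
          x = xmin
          while x < xmax:
              y = ymin
              while y < ymax:
                  tiles.append((x - halo, y - halo,
                                min(x + tile, xmax) + halo, min(y + tile, ymax) + halo))
                  y += tile
              x += tile
          return tiles

      def process_tile(bounds):
          """Placeholder for the local polygon work (spacing checks, splitting, ...)."""
          xlo, ylo, xhi, yhi = bounds
          return {"bounds": bounds, "area": (xhi - xlo) * (yhi - ylo)}

      if __name__ == "__main__":
          tiles = tile_bounds(0.0, 0.0, 100.0, 100.0, tile=25.0, halo=INFLUENCE_RANGE)
          with ProcessPoolExecutor() as pool:
              results = list(pool.map(process_tile, tiles))
          print(len(results), "tiles processed")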

  13. Statistical study of TCV disruptivity and H-mode accessibility

    International Nuclear Information System (INIS)

    Martin, Y.; Deschenaux, C.; Lister, J.B.; Pochelon, A.

    1997-01-01

    Optimising tokamak operation consists of finding a path, in a multidimensional parameter space, which leads to the desired plasma characteristics and avoids hazardous regions. Typically the desirable regions are the domain where an L-mode to H-mode transition can occur and then, in the H-mode, where ELMs and the required high density can be maintained. The regions to avoid are those with a high rate of disruptivity. On TCV, learning the safe and successful paths is achieved empirically. This will no longer be possible in a machine like ITER, since only a small percentage of disrupted discharges will be tolerable. A priori knowledge of the hazardous regions in ITER is therefore mandatory. This paper presents the results of a statistical analysis of the occurrence of disruptions in TCV. (author) 4 figs

  14. TFTR L mode energy confinement related to deuterium influx

    International Nuclear Information System (INIS)

    Strachan, J.D.

    1999-01-01

    Tokamak energy confinement scaling in TFTR L mode and supershot regimes is discussed. The main result is that TFTR L mode plasmas fit the supershot scaling law for energy confinement. In both regimes, plasma transport coefficients increased with increased edge deuterium influx. The common L mode confinement scaling law on TFTR is also inversely proportional to the volume of wall material that is heated to a high temperature, possibly the temperature at which the deuterium sorbed in the material becomes detrapped and highly mobile. The deuterium influx is increased by: (a) increased beam power due to a deeper heated depth in the edge components and (b) decreased plasma current due to an increased wetted area as governed by the empirically observed dependence of the SOL width upon plasma current. (author). Letter-to-the-editor

  15. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  16. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2–5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  17. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    Science.gov (United States)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method for the deployment of technical characteristics based on Reliability Function Deployment (RFD), motivated by an analysis of the advantages and shortcomings of related work on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlation between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss contributed by each failure mode to the whole part was defined, and a calculation and analysis model for this reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of this deployment of the reliability index value, the inverse reliability method was employed to obtain the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.
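
    A minimal sketch of the allocation step described above, under an assumed scheme (not taken from the paper): if each failure mode's share of the total reliability loss is used as a weight, the part-level reliability target can be apportioned so that the allocated per-mode reliabilities multiply back to the target. All names and numbers below are illustrative only.

      def allocate_reliability(target_reliability, reliability_losses):
          """Allocate a part-level reliability target across failure modes.

          Each mode receives an exponent equal to its normalized reliability loss,
          so the product of the allocated reliabilities equals the target:
              R_i = R_target ** w_i, with sum(w_i) = 1.
          """
          total_loss = sum(reliability_losses.values())
          weights = {mode: loss / total_loss for mode, loss in reliability_losses.items()}
          return {mode: target_reliability ** w for mode, w in weights.items()}

      # Hypothetical failure modes of a transmission part and their reliability losses.
      losses = {"gear pitting": 0.020, "shaft fatigue": 0.012, "bearing wear": 0.008}
      allocated = allocate_reliability(target_reliability=0.95, reliability_losses=losses)
      print(allocated)

      # Check: the allocated reliabilities multiply back to the part-level target.
      product = 1.0
      for r in allocated.values():
          product *= r
      print(round(product, 6))  # 0.95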

  18. Formation of volatile decomposition products by self-radiolysis of tritiated thymidine

    International Nuclear Information System (INIS)

    Shiba, Kazuhiro; Mori, Hirofumi

    1997-01-01

    In order to estimate the internal exposure dose in an experiment using tritiated thymidine, the rate of volatile ³H-decomposition of several tritiated thymidine samples was measured. The decomposition rate of (methyl-³H)thymidine in water was over 80% in less than one year after initial analysis. (methyl-³H)thymidine was decomposed into volatile and non-volatile ³H-decomposition products. The proportion of volatile ³H-decomposition products increased with the rate of decomposition of (methyl-³H)thymidine. The volatile ³H-decomposition products consisted of two components, of which the main component was tritiated water. The internal exposure dose caused by the inhalation of such volatile ³H-decomposition products of (methyl-³H)thymidine was assumed to be several μSv. (author)

  19. Thermal decomposition studies on tri-iso-amyl phosphate in n-dodecane-nitric acid system

    International Nuclear Information System (INIS)

    Chandran, K.; Sreenivasalu, B.; Suresh, A.; Sivaraman, N.; Anthonysamy, S.

    2014-01-01

    Tri-iso-amyl phosphate (TiAP) is a promising alternative solvent to TBP, with similar extraction behaviour and physical properties but lower aqueous-phase solubility, and it does not form a third phase during the extraction of Pu(IV). In addition to the solubilised extractant, inadvertent entrainment of the extractant into the aqueous stream is a concern during the evaporation operation, as the extractant comes into contact with higher nitric acid concentrations and metal nitrates. Hence the thermal decomposition behaviour of TiAP–HNO₃ systems has been studied using an adiabatic calorimeter in a closed air atmosphere under heat-wait-search (H-W-S) mode.

  20. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...