WorldWideScience

Sample records for gini decomposition approach

  1. Gini coefficient as a life table function

    Directory of Open Access Journals (Sweden)

    2003-06-01

    Full Text Available This paper presents a toolkit for measuring and analyzing inter-individual inequality in length of life using the Gini coefficient. The Gini coefficient and four other inequality measures are defined on the length-of-life distribution. The properties of these measures, and their empirical testing on mortality data, suggest that different measures can lead to different judgements about the direction of change in the degree of inequality. A new computational procedure for estimating the Gini coefficient from life tables is developed and tested on about four hundred real life tables. The estimates are precise enough even for abridged life tables with a final age group of 85+. New formulae are developed for decomposing differences between Gini coefficients by age and cause of death, and a new method decomposes the age components into the effects of mortality and of the group composition of the population. Temporal changes in the effects of eliminating causes of death on the Gini coefficient are also analyzed. Numerous empirical examples are given: Lorenz curves for Sweden, Russia and Bangladesh in 1995; proportional changes in the Gini coefficient and four other measures of inequality for the USA in 1950-1995 and for Russia in 1959-2000; errors in estimates of the Gini coefficient computed from various types of mortality data for France, Japan, Sweden and the USA in 1900-95; decompositions of the USA-UK difference in life expectancies and Gini coefficients by age and cause of death in 1997; effects of eliminating major causes of death in the UK in 1951-96 on the Gini coefficient; age-specific effects of mortality and of the educational composition of the Russian population on changes in life expectancy and the Gini coefficient between 1979 and 1989; and variations in life expectancy and the Gini coefficient across 32 countries in 1996-1999 and associated changes in life expectancy and Gini
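The record's Gini-of-longevity measure can be illustrated with a minimal numpy sketch for grouped data (a generic grouped-data estimator, not the paper's exact life-table procedure; function and variable names are illustrative):

```python
import numpy as np

def gini_from_life_table(ages, deaths):
    """Gini coefficient of the age-at-death distribution.

    ages   -- representative age at death for each group
    deaths -- life-table deaths in each group (the d_x column), any scale
    """
    x = np.asarray(ages, dtype=float)
    w = np.asarray(deaths, dtype=float)
    w = w / w.sum()                      # normalise to a probability distribution
    mu = np.sum(w * x)                   # mean length of life
    # mean absolute difference between two randomly drawn lengths of life
    mad = np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return mad / (2.0 * mu)

# Everyone dying at the same age gives perfect equality:
print(gini_from_life_table([80.0, 80.0], [50.0, 50.0]))  # → 0.0
```

The estimator uses the standard identity G = (mean absolute difference) / (2 × mean); the paper's procedure for abridged life tables additionally handles within-group variation.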

  2. Dealing with equality and benefit for water allocation in a lake watershed: A Gini-coefficient based stochastic optimization approach

    Science.gov (United States)

    Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.

    2018-06-01

    A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives of water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved with the non-dominated sorting Genetic Algorithm II (NSGA-II) after the parameter uncertainties of the hydrological model had been quantified into a probability distribution of runoff as input to the CCP model and the chance constraints had been converted to their deterministic equivalents. The proposed model was applied to identify Pareto-optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto-front results reflect the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (q) and drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a more intense drought scenario corresponds to less available water; both lead to a lower system benefit and a less equitable allocation scheme. The proposed framework thus helps obtain Pareto-optimal schemes under complexity and ensures that the proposed water allocation solutions remain effective under drought conditions, with a proper tradeoff between system benefit and water allocation equity.
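The conversion of a chance constraint to its deterministic equivalent, on which the CCP component relies, can be sketched with an empirical quantile (the function name and sample runoff values are illustrative, not from the study):

```python
import numpy as np

def deterministic_water_limit(runoff_samples, q):
    """Deterministic equivalent of the chance constraint
        Pr{ total allocation <= runoff W } >= 1 - q :
    total allocation must not exceed the q-quantile of W,
    here estimated from simulated runoff samples."""
    return np.quantile(np.asarray(runoff_samples, dtype=float), q)

# A lower significance level q commits less water, i.e. a lower risk
# of violating the availability constraint:
samples = [90.0, 100.0, 110.0, 120.0]
print(deterministic_water_limit(samples, 0.05) <= deterministic_water_limit(samples, 0.50))  # → True
```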

  3. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  4. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000, some 15 years ago. After giving an update and presenting the key trends of the last 15 years, this study focuses on implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods, in view of their dominance in IDA in recent years. Eight LMDI models are presented, and their origins, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing the LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments in index decomposition analysis are briefly reviewed.
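A two-factor additive LMDI-I decomposition, the simplest member of the family the guide covers, can be sketched as follows (the sector data are illustrative; real studies use multi-factor identities):

```python
import math

def logmean(a, b):
    """Logarithmic mean, the weighting function at the heart of LMDI."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(q0, qT, i0, iT):
    """Additive LMDI-I decomposition of the change in E = Q * I
    (activity Q times energy intensity I), summed over sectors."""
    d_act, d_int = 0.0, 0.0
    for Q0, QT, I0, IT in zip(q0, qT, i0, iT):
        L = logmean(QT * IT, Q0 * I0)
        d_act += L * math.log(QT / Q0)   # activity effect
        d_int += L * math.log(IT / I0)   # intensity effect
    return d_act, d_int

# The two effects sum exactly to the total change in E ("perfect decomposition").
q0, qT = [100.0, 50.0], [120.0, 60.0]
i0, iT = [0.5, 0.8], [0.45, 0.7]
d_act, d_int = lmdi_additive(q0, qT, i0, iT)
total = sum(Q * I for Q, I in zip(qT, iT)) - sum(Q * I for Q, I in zip(q0, i0))
print(abs(d_act + d_int - total) < 1e-9)  # → True
```

The residual-free property follows from the logarithmic-mean identity L(a, b)·ln(a/b) = a − b, which is one reason LMDI dominates recent IDA practice.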

  5. Inequalities and Duality in Gene Coexpression Networks of HIV-1 Infection Revealed by the Combination of the Double-Connectivity Approach and the Gini's Method

    Directory of Open Access Journals (Sweden)

    Chuang Ma

    2011-01-01

    Full Text Available Symbiosis (Sym) and pathogenesis (Pat) form a duality problem of microbial infection, including HIV/AIDS. Statistical analysis of inequalities and duality in gene coexpression networks (GCNs) of HIV-1 infection may yield novel insights into AIDS. In this study, we analyzed GCNs of uninfected subjects and of HIV-1-infected patients at three different stages of viral infection, based on data deposited in the GEO database of NCBI. The inequalities and duality in these GCNs were analyzed by combining the double-connectivity (DC) approach with Gini's method. DC analysis reveals significant differences between positive and negative connectivity in HIV-1 stage-specific GCNs. The inequality measures of negative connectivity and edge weight change more markedly than those of positive connectivity and edge weight as the GCNs move from the HIV-1-uninfected to the AIDS stage. Using a permutation test, we identified a set of genes with significant changes in the inequality and duality measures of edge weight. Functional analysis shows that these genes are highly enriched for the immune system, which plays an essential role in the Sym-Pat duality (SPD) of microbial infections. Understanding the SPD problems of HIV-1 infection may provide novel intervention strategies for AIDS.

  6. Mean-Gini Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    C. Sherman Cheung

    2007-05-01

    Full Text Available It is well known in the finance literature that mean-variance analysis is inappropriate when asset returns are not normally distributed or investors' preferences over returns are not characterized by quadratic functions. The normality assumption has been widely rejected for emerging market equities and hedge funds. The mean-Gini framework is an attractive alternative, as it is consistent with stochastic dominance rules regardless of the probability distributions of asset returns. Applying mean-Gini in a portfolio setting involving multiple assets, however, has always been challenging for business students whose training in optimization is limited. This paper introduces a simple spreadsheet-based approach to mean-Gini portfolio optimization, allowing the mean-Gini concepts to be covered more effectively in finance courses such as portfolio theory and investment analysis.
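The mean-Gini statistics themselves are only a few lines of code; a numpy sketch rather than the paper's spreadsheet (names and return data are illustrative):

```python
import numpy as np

def gini_mean_difference(returns):
    """Sample Gini's mean difference: the mean absolute difference
    between two distinct observations of the return series."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    return np.abs(r[:, None] - r[None, :]).sum() / (n * (n - 1))

def portfolio_mean_gini(weights, return_matrix):
    """Mean return and Gini risk of a portfolio.
    Rows of return_matrix are periods, columns are assets."""
    port = np.asarray(return_matrix, dtype=float) @ np.asarray(weights, dtype=float)
    return port.mean(), gini_mean_difference(port)

mean, gini_risk = portfolio_mean_gini([0.5, 0.5],
                                      [[0.01, 0.03], [0.02, 0.00], [-0.01, 0.02]])
```

A mean-Gini optimizer would then trade `mean` off against `gini_risk` over the weight simplex, which is what the spreadsheet solver in the paper automates.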

  7. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate

  8. Measuring Resource Inequality: The Gini Coefficient

    Directory of Open Access Journals (Sweden)

    Michael T. Catalano

    2009-07-01

    Full Text Available This paper stems from work done by the authors at the Mathematics for Social Justice Workshop held in June 2007 at Middlebury College. We provide a description of the Gini coefficient and discuss how it can be used to promote quantitative literacy skills in mathematics courses. The Gini coefficient was introduced in 1921 by the Italian statistician Corrado Gini as a measure of inequality. It is defined as twice the area between two curves. The first, the Lorenz curve for a given population with respect to a given resource, plots the cumulative percentage of the resource held against the cumulative percentage of the population holding it. The second is the line y = x, which is the Lorenz curve for a population that shares the resource equally. The Gini coefficient can be interpreted as the percentage of inequality in the population with respect to the given resource. We propose that the Gini coefficient can be used to enhance students' understanding of calculus concepts and to give students practice in both calculus and quantitative literacy skills. Our examples are based mainly on the distribution of energy resources, using publicly available data from the United States Energy Information Administration. For energy resources within the United States, we find that by household the Gini coefficient is 0.346, while over the 51 data points represented by the states and Washington D.C. it is 0.158. When we consider the countries of the world as a population of 210, the Gini coefficient is 0.670. We close with ideas for questions that can be posed to students, and a discussion of the experiences two other mathematics instructors have had incorporating the Gini coefficient into pre-calculus-level mathematics classes.
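The "twice the area between the curves" definition translates directly into a trapezoid-rule exercise of the kind the paper proposes for students (a sketch; the shares below are illustrative, not the EIA figures):

```python
import numpy as np

def gini_from_shares(shares):
    """Gini coefficient of a resource split into `shares` (one value per
    population unit), via the trapezoid rule on the Lorenz curve:
    G = 1 - 2 * (area under the Lorenz curve)."""
    s = np.sort(np.asarray(shares, dtype=float))
    n = len(s)
    # Lorenz ordinates at population fractions p = 0, 1/n, ..., 1
    cum = np.insert(np.cumsum(s), 0, 0.0) / s.sum()
    area = (cum[1:] + cum[:-1]).sum() / (2.0 * n)   # trapezoids of width 1/n
    return 1.0 - 2.0 * area

print(gini_from_shares([1.0, 1.0, 1.0, 1.0]))  # → 0.0 (perfect equality)
```

With one person holding everything, the same function returns the grouped-data maximum (n − 1)/n rather than 1, a point worth raising with students.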

  9. Decomposition approaches to integration without a measure

    Czech Academy of Sciences Publication Activity Database

    Greco, S.; Mesiar, Radko; Rindone, F.; Sipeky, L.

    2016-01-01

    Vol. 287, No. 1 (2016), pp. 37-47, ISSN 0165-0114. Institutional support: RVO:67985556. Keywords: Choquet integral * Decision making * Decomposition integral. Subject RIV: BA - General Mathematics. Impact factor: 2.718, year: 2016. http://library.utia.cas.cz/separaty/2016/E/mesiar-0457408.pdf

  10. Income Inequality in Rural India: Decomposing the Gini by Income Sources

    OpenAIRE

    Mehtabul Azam; Abusaleh Shariff

    2011-01-01

    This paper examines income inequality in rural India in 1993 and 2005. Through a decomposition of the Gini coefficient, it ascertains the contribution of different income sources to overall income inequality, and the change in their relative importance between 1993 and 2005. The paper finds that income inequality increased between 1993 and 2005. Agricultural income remains the major contributor to total income and income inequality; however, its share in total income and total income ineq...
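A standard way to decompose the Gini by income sources, in the spirit of this record, is the Lerman-Yitzhaki rule G = Σ_k S_k·G_k·R_k; a numpy sketch using the covariance form of the Gini (one common estimator, not necessarily the authors' exact one):

```python
import numpy as np

def gini_cov(x, rank):
    """Gini via the covariance formula G = 2*cov(x, F(x)) / mean(x),
    with the distribution F estimated by ranks."""
    return 2.0 * np.cov(x, rank)[0, 1] / (len(x) * x.mean())

def gini_by_source(sources):
    """Contribution of each income source k to the overall Gini:
    S_k (income share) * G_k (source Gini) * R_k ('Gini correlation'
    of source k with total income). With no ties in the ranks, the
    contributions sum exactly to the Gini of total income."""
    X = np.asarray(sources, dtype=float)          # rows: sources, cols: households
    total = X.sum(axis=0)
    rt = total.argsort().argsort().astype(float)  # ranks of total income
    out = []
    for xk in X:
        rk = xk.argsort().argsort().astype(float)
        Sk = xk.mean() / total.mean()
        Gk = gini_cov(xk, rk)
        Rk = np.cov(xk, rt)[0, 1] / np.cov(xk, rk)[0, 1]
        out.append(Sk * Gk * Rk)
    return out
```

Comparing the per-source contributions across two survey years is exactly the kind of relative-importance analysis the abstract describes.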

  11. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
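For context, the ALS baseline the authors compare against can be sketched for a third-order tensor in plain numpy (a bare-bones version without normalization, convergence checks, or the paper's gradient-based alternative):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product; rows indexed by pairs (i, j)."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Alternating least squares for the 3-way CP decomposition
    T[i,j,k] ≈ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(iters):
        # each update is the exact least-squares solution for one factor
        # with the other two held fixed
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

The gradient-based methods in the record instead optimize all factors jointly, computing the full gradient at roughly the cost of one such ALS sweep.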

  12. The Determinants of Gini Coefficient in Iran Based on Bayesian Model Averaging

    Directory of Open Access Journals (Sweden)

    Mohsen Mehrara

    2015-03-01

    Full Text Available This paper applies the Bayesian model averaging (BMA) approach to investigate the important variables influencing the Gini coefficient in Iran over the period 1976-2010. The results indicate that GDP growth is the most important variable affecting the Gini coefficient, with a positive influence on it. The second and third most influential variables are, respectively, the ratio of government current expenditure to GDP and the ratio of oil revenue to GDP, both of which increase inequality. This result is consistent with rentier state theory for the Iranian economy. The injection of massive oil revenues into Iran's economy, and their high share of the state budget, lead to inefficient government spending and an increase in rent-seeking activities in the country. Economic growth in Iran is possibly itself a result of oil revenue, which has caused inequality in the distribution of income.

  13. Comparison of the Gini and Zenga Indexes using Some Theoretical Income Distributions

    Directory of Open Access Journals (Sweden)

    Katarzyna Ostasiewicz

    2013-01-01

    Full Text Available The most common measure of inequality used in scientific research is the Gini index. In 2007, Zenga proposed a new index of inequality that has all the appropriate properties of an inequality measure. In this paper, we compare the Gini and Zenga indexes, calculating both quantities for several distributions frequently used to approximate income distributions: the lognormal, gamma, inverse Gaussian, Weibull and Burr distributions. Within this limited examination, we observe three main differences. First, the Zenga index increases more rapidly for low values of the variation and decreases more slowly as the variation approaches intermediate values from above. Second, the Zenga index seems to be better predicted by the variation. Third, although the Zenga index is always higher than the Gini index, the ordering of some pairs of cases may be inverted.
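For one of the distributions mentioned, the lognormal, the Gini index has a convenient closed form, G = 2Φ(σ/√2) − 1, which simplifies to erf(σ/2) and makes such comparisons easy to tabulate:

```python
from math import erf

def gini_lognormal(sigma):
    """Gini index of a lognormal distribution with log-scale sigma:
    G = 2*Phi(sigma/sqrt(2)) - 1 = erf(sigma/2)."""
    return erf(sigma / 2.0)

print(round(gini_lognormal(1.0), 3))  # → 0.52
```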

  14. Linear decomposition approach for a class of nonconvex programming problems.

    Science.gov (United States)

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems, obtained by dividing the input space into polynomially many grids. It shows that, under certain assumptions, the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it offers a different approach that solves the problem in reduced running time.

  15. The Bias of the Gini Coefficient due to Grouping

    NARCIS (Netherlands)

    T.G.M. van Ourti (Tom); Ph. Clarke (Philip)

    2008-01-01

    We propose a first-order bias correction term for the Gini index to reduce the bias due to grouping. The correction term is obtained by studying the estimator of the Gini index within a measurement error framework. In addition, it reveals an intuitive formula for the

  16. Using the Gini Coefficient for Bug Prediction in Eclipse

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    The Gini coefficient is a prominent measure to quantify the inequality of a distribution. It is often used in the field of economy to describe how goods, e.g., wealth or farmland, are distributed among people. We use the Gini coefficient to measure code ownership by investigating how changes made to

  17. Foreign exchange predictability and the carry trade: a decomposition approach

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Gospodinov, N.; Jamali, I.; Liu, X.

    2017-01-01

    Vol. 42, June (2017), pp. 199-211, ISSN 0927-5398. Institutional support: RVO:67985998. Keywords: exchange rate forecasting * carry trade * return decomposition. Subject RIV: AH - Economics. OBOR OECD: Finance. Impact factor: 0.979, year: 2016

  18. Multidimensional Decomposition of the Sen Index: Some Further Thoughts

    OpenAIRE

    Stéphane Mussard; Kuan Xu

    2006-01-01

    Given the multiplicative decomposition of the Sen index into three commonly used poverty statistics – the poverty rate (poverty incidence), poverty gap ratio (poverty depth) and 1 plus the Gini index of poverty gap ratios of the poor (inequality of poverty) – the index becomes much easier to use and to interpret for economists, policy analysts and decision makers. Based on the recent findings on simultaneous subgroup and source decomposition of the Gini index, we examine possible further deco...

  19. Foreign exchange predictability and the carry trade: a decomposition approach

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Gospodinov, N.; Jamali, I.; Liu, X.

    2017-01-01

    Vol. 42, June (2017), pp. 199-211, ISSN 0927-5398. Institutional support: Progres-Q24. Keywords: exchange rate forecasting * carry trade * return decomposition. Subject RIV: AH - Economics. OBOR OECD: Finance. Impact factor: 0.979, year: 2016

  20. CORRADO GINI AND THE SCIENTIFIC BASIS OF FASCIST RACISM.

    Science.gov (United States)

    Macuglia, Daniele

    2014-01-01

    It is controversial whether the development of Fascist racism was influenced by earlier Italian eugenic research. Before the First International Eugenics Congress, held in London in 1912, Italian eugenics was not characterized by a clear program of scientific research. With the advent of Fascism, however, the equation "number = strength" became the foundation of its program. This idea, according to which the improvement of a nation relies on the size of its population, was conceived by the statistician Corrado Gini (1884-1965) as early as 1912. Focusing on the problem of the degeneration of the Italian race, Gini had a tremendous influence on Benito Mussolini's (1883-1945) political campaign, and shaped Italian social sciences for almost two decades. He was also a committed racist, as documented by a series of indisputable statements in the primary literature. All these findings position Gini as a link among early Italian eugenics, Fascism and official state racism.

  1. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear-sky index K_c data, namely Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal spanning different time scales. We applied classical forecasting models based on a linear method (autoregressive, AR) and a nonlinear method (neural network, NN), with the forecasting method chosen adaptively according to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting with the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method yields additional improvement. For example, in terms of RMSE, the error of the classical NN model is about 25.86%; this decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in a tropical climate with many cloud formation processes. • Clear-sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
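The linear branch of such a hybrid scheme, an AR model fitted to one decomposition component, can be sketched with a least-squares fit (a generic AR(p), not the authors' exact configuration):

```python
import numpy as np

def ar_forecast(series, p=2):
    """Fit an AR(p) model by least squares and return the
    one-step-ahead forecast for one decomposition component."""
    y = np.asarray(series, dtype=float)
    # design matrix: row for time t holds the lags y[t-1], ..., y[t-p]
    X = np.column_stack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return float(y[-1:-p - 1:-1] @ coef)

# Example: a geometric series y_t = 0.5 * y_{t-1} is forecast exactly.
print(round(ar_forecast([1.0, 0.5, 0.25, 0.125, 0.0625], p=1), 6))  # → 0.03125
```

In the hybrid model each EMD/EEMD/wavelet component gets its own predictor (AR or NN), and the component forecasts are summed to reconstruct the K_c forecast.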

  2. What drives credit rating changes? : a return decomposition approach

    OpenAIRE

    Cho, Hyungjin; Choi, Sun Hwa

    2015-01-01

    This paper examines the relative importance of a shock to expected cash flows (i.e., cash-flow news) and a shock to expected discount rates (i.e., discount-rate news) in credit rating changes. Specifically, we use a Vector Autoregressive model to implement the return decomposition of Campbell and Shiller (Review of Financial Studies, 1, 1988, 195) and Vuolteenaho (Journal of Finance, 57, 2002, 233) to extract cash-flow news and discount-rate news from stock returns at the firm-level. We find ...

  3. Simplified approaches to some nonoverlapping domain decomposition methods

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method; other simple technical tools include "local-global" and "global-local" techniques, the former for constructing a subspace preconditioner from a preconditioner on the whole space, and the latter for constructing a preconditioner on the whole space from a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods are presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.

  4. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation of the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method compared to the original EMD algorithmic version was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed; despite some effort, these 2-D versions appear to perform poorly and are very time-consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore extensively described. The approach has been applied to both signal and image decomposition, and the obtained results, including those provided for image decomposition, confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  5. THE IMPACT OF TAXES MEASURED BY GINI INDEX IN MACEDONIA

    Directory of Open Access Journals (Sweden)

    Sasho Kozuharov

    2015-06-01

    Full Text Available Over the past decades, the problem of income inequality and welfare segregation has presented itself as one of the biggest faults of modern economic systems. The Republic of Macedonia, as a developing country, faces a serious challenge in decreasing income inequality, which according to the GINI index has risen by an average of 4% annually over the past 15 years. The problem stretches further, as the country ranks among the highest in income inequality in comparison with the South-East European countries. To determine the impact of different types of taxes on income inequality in the Republic of Macedonia, as measured by the GINI index, an econometric model of regression and correlation was applied. For the observation period, the personal income tax had the greatest impact on income inequality as measured by the GINI index.

  6. An Improved Dynamic Programming Decomposition Approach for Network Revenue Management

    OpenAIRE

    Dan Zhang

    2011-01-01

    We consider a nonlinear nonseparable functional approximation to the value function of a dynamic programming formulation for the network revenue management (RM) problem with customer choice. We propose a simultaneous dynamic programming approach to solve the resulting problem, which is a nonlinear optimization problem with nonlinear constraints. We show that our approximation leads to a tighter upper bound on optimal expected revenue than some known bounds in the literature. Our approach can ...

  7. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, on several molecules, and on clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.

  8. The Hierarchical Database Decomposition Approach to Database Concurrency Control.

    Science.gov (United States)

    1984-12-01

    approach, we postulate a model of transaction behavior under two-phase locking as shown in Figure 39(a) and a model of that under multiversion ...transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request

  9. A new approach for the beryl mineral decomposition: elemental characterisation using ICP-AES and FAAS

    International Nuclear Information System (INIS)

    Nathan, Usha; Premadas, A.

    2013-01-01

    A new approach to beryl mineral sample decomposition and solution preparation, suitable for elemental analysis using ICP-AES and FAAS, is described. For complete sample decomposition, four different procedures are employed: (i) ammonium bifluoride alone; (ii) a mixture of ammonium bifluoride and ammonium sulphate; (iii) a powdered mixture of NaF and KHF2 in a 1:3 ratio; and (iv) acid digestion with a hydrofluoric and nitric acid mixture, with the residue fused with a powdered NaF/KHF2 mixture. Be, Al, Fe, Mn, Ti, Cr, Ca, Mg and Nb are determined by ICP-AES, and Na, K, Rb and Cs by FAAS. Fusion with 2 g of ammonium bifluoride flux alone is sufficient for the complete decomposition of a 0.400 g sample. The values obtained by this decomposition procedure agree well with the reported method. The accuracy of the proposed method was checked by analyzing synthetic samples prepared in the laboratory by mixing high-purity oxides with a chemical composition similar to natural beryl. The results indicate very good accuracy, with reproducibility characterized by an RSD of 1-4% for the elements studied. (author)

  10. Simultaneously Exploiting Two Formulations: an Exact Benders Decomposition Approach

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Gamst, Mette; Spoorendonk, Simon

    When modelling a given problem using linear programming techniques several possibilities often exist, and each results in a different mathematical formulation of the problem. Usually, advantages and disadvantages can be identified in any single formulation. In this paper we consider mixed integer...... to the standard branch-and-price approach from the literature, the method shows promising performance and appears to be an attractive alternative....

  11. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions, which avoids direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. A 1-D spline interpolation then transforms these decompositions to the high-resolution grid. Finally, an efficient decomposition of the full correlation matrix is obtained by computing the Kronecker product of the 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. Its effectiveness, and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation, are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
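
    The separability that such an alternating-directions construction exploits can be checked on a tiny 2-D analogue (Gaussian correlation function and grid sizes invented for the demo): a correlation that factors per direction makes the full matrix a Kronecker product of cheap 1-D matrices, and the spectrum of the big matrix follows from the 1-D spectra:

```python
import numpy as np

def corr_1d(n, L):
    """1-D Gaussian correlation matrix on a unit-spaced grid."""
    d = np.subtract.outer(np.arange(n), np.arange(n))
    return np.exp(-((d / L) ** 2))

nx, ny, L = 8, 6, 3.0
Cx, Cy = corr_1d(nx, L), corr_1d(ny, L)

# Full 2-D correlation matrix built directly from squared grid distances.
ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
pts = np.column_stack([ix.ravel(), iy.ravel()]).astype(float)
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
C_full = np.exp(-D2 / L**2)

# It equals the Kronecker product of the 1-D factors, so the eigenvalues of
# the 48x48 matrix are just products of the 8 + 6 one-dimensional eigenvalues.
C_kron = np.kron(Cx, Cy)
w_1d = np.sort(np.outer(np.linalg.eigvalsh(Cx), np.linalg.eigvalsh(Cy)).ravel())
print(np.allclose(C_full, C_kron))  # True
```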

  12. Design of tailor-made chemical blend using a decomposition-based computer-aided approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.

    2011-01-01

    Computer aided techniques form an efficient approach to solve chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product...... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to types of chemicals...... and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition...

  13. Introducing the Improved Heaviside Approach to Partial Fraction Decomposition to Undergraduate Students: Results and Implications from a Pilot Study

    Science.gov (United States)

    Man, Yiu-Kwong

    2012-01-01

    Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…

  14. Correlation between the Gini index and the observed prosperity

    Science.gov (United States)

    Mazin, Igor

    2006-03-01

    It has been well established by computer simulations that a free, unregulated market economy (in the simplest model of a yard-sale economy) is unstable and collapses to a singular wealth distribution. It is now a common procedure in computer simulations to stabilize a model by favoring the poorer partner in each transaction, or by redistributing the wealth in the society in favor of the poorer part of the population. Such measures stabilize the economy and create a stationary state with a finite Gini index G. We plotted the average income (at purchasing power parity, PPP) for all countries in the world against their Gini indices, and found that they all (with only 2 outliers) fall into one of two groups: "wealthy" countries with PPP>10,000/year, and the rest. The former are characterized by G=0.29±0.07, and the latter by a uniform distribution of all possible Gs. This means that an enforced wealth redistribution is not a moral act of social consciousness, but a necessary precondition for a sustainable economy. The existence of an optimal G is illustrated through a simple model of a yard-sale economy with taxation.
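
    The closing claim can be reproduced in a few lines (all parameters invented for the demo): a yard-sale exchange rule alone drives the Gini index toward 1, while a small flat redistribution ("tax") applied each step stabilizes it at a finite value:

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(w):
    """Gini index of a non-negative wealth vector (0 = perfect equality)."""
    w = np.sort(np.asarray(w, float))
    n = len(w)
    return (2.0 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

n, steps, stake_frac, tax = 200, 50_000, 0.1, 0.01
wealth = np.ones(n)
for _ in range(steps):
    i, j = rng.integers(n, size=2)
    stake = stake_frac * min(wealth[i], wealth[j])   # yard-sale rule
    win = 1.0 if rng.random() < 0.5 else -1.0
    wealth[i] += win * stake
    wealth[j] -= win * stake
    # flat redistribution: everyone pays `tax`, the pot is shared equally
    pot = tax * wealth.sum() / n
    wealth *= 1.0 - tax
    wealth += pot

g = gini(wealth)
print(f"Gini with taxation: {g:.3f}")
```

    Setting `tax = 0` in the same loop lets the wealth condense onto a few agents and the Gini index creep toward 1.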

  15. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1], where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed up the simulations. The speed-up is due to inexpensive, yet sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.

  17. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular the slowly-decaying eigenmodes corresponding to the largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.
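
    A minimal exact-DMD sketch (synthetic traveling-wave data standing in for the porous-media flows; all sizes invented): the eigenvalues of the reduced operator recover the underlying temporal frequencies, which is precisely the long-time information the abstract credits DMD with extracting:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit Y ≈ A X with a rank-r operator; return its eigenpairs."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.conj().T @ Y @ Vt.conj().T / s      # r x r reduced operator
    evals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T / s @ W / evals         # exact DMD modes
    return evals, modes

# two noiseless traveling waves -> exactly rank-4 linear dynamics
dt = 0.1
t = np.arange(60) * dt
xgrid = np.linspace(0.0, 1.0, 32)[:, None]
data = np.sin(2 * np.pi * xgrid - 3 * t) + 0.5 * np.sin(4 * np.pi * xgrid - 7 * t)

evals, modes = dmd(data[:, :-1], data[:, 1:], r=4)
freqs = np.sort(np.abs(np.angle(evals))) / dt      # recovered frequencies
print(freqs)  # ≈ [3, 3, 7, 7]
```

    On neutrally stable data like this, all eigenvalues sit on the unit circle; decaying modes would fall inside it, which is how the slowly-decaying eigenmodes mentioned above are identified.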

  18. A Benders decomposition approach for a combined heat and power economic dispatch

    International Nuclear Information System (INIS)

    Abdolmohammadi, Hamid Reza; Kazemi, Ahad

    2013-01-01

    Highlights: • Benders decomposition algorithm to solve combined heat and power economic dispatch. • Decomposing the CHPED problem into a master problem and a subproblem. • Considering the non-convex heat-power feasible region efficiently. • Solving 4-unit and 5-unit systems with 2 and 3 co-generation units, respectively. • Obtaining better or comparable results in terms of objective values. - Abstract: Recently, cogeneration units have played an increasingly important role in the utility industry. Therefore, the optimal utilization of multiple combined heat and power (CHP) systems is an important optimization task in power system operation. Unlike power economic dispatch, which has a single equality constraint, two equality constraints must be met in the combined heat and power economic dispatch (CHPED) problem. Moreover, in cogeneration units the power capacity limits are functions of the unit heat productions and the heat capacity limits are functions of the unit power generations. Thus, CHPED is a complicated optimization problem. In this paper, an algorithm based on Benders decomposition (BD) is proposed to solve the economic dispatch (ED) problem for cogeneration systems. In the proposed method, the combined heat and power economic dispatch problem is decomposed into a master problem and a subproblem. The subproblem generates the Benders cuts, and the master problem uses them as new inequality constraints added to the previous constraints. The iterative process continues until the upper and lower bounds of the optimal objective value are close enough and a converged optimal solution is found. The Benders decomposition based approach provides a good framework to consider the non-convex feasible operation regions of cogeneration units efficiently. In this paper, a four-unit system with two cogeneration units and a five-unit system with three cogeneration units are analyzed to exhibit the effectiveness of the proposed approach. In all cases, the

  19. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water/bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, transforming the joint MAP estimation into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  20. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  1. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of the filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Application results for the spectral distributions emitted by a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
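
    The core of the approach, keeping only the singular components that rise above the noise, can be sketched on an invented exponential-attenuation matrix and a hypothetical Gaussian spectrum (none of the numbers come from the paper, and a fixed threshold stands in for the paper's criterion function):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response matrix: row i is the attenuation of filtration i
# across the energy grid, giving a severely ill-conditioned system.
n_meas, n_energy = 20, 40
E = np.linspace(0.1, 1.0, n_energy)
mu = np.linspace(0.5, 10.0, n_meas)[:, None]
A = np.exp(-mu * E)

spectrum = np.exp(-(((E - 0.5) / 0.15) ** 2))       # hypothetical spectrum
y = A @ spectrum + 1e-4 * rng.standard_normal(n_meas)

# Truncated-SVD estimate: discard components drowned by the noise.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))                    # crude truncation criterion
x_hat = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

resid = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
print(f"retained components: {k}, relative residual: {resid:.1e}")
```

    Using all components instead would divide by the tiny trailing singular values and amplify the measurement noise enormously; that trade-off is what the criterion function in the paper formalizes.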

  2. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • The crisscross optimization algorithm is applied to train the extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility, so the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) the performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition; (b) the CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize the extreme learning machine; (c) the proposed approach has a great advantage over other previous hybrid models in terms of prediction accuracy.
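
    The forecasting stage can be sketched with a bare-bones extreme learning machine (random hidden layer, least-squares output weights). Everything below is invented for the demo, and ordinary least squares stands in for the authors' crisscross optimization:

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random tanh features + least-squares readout."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy "wind power" series: predict the next value from the last 4 samples
k = np.arange(400)
series = np.sin(0.1 * k) + 0.1 * np.sin(0.9 * k)
X = np.column_stack([series[i:i - 4] for i in range(4)])
y = series[4:]
W, b, beta = elm_fit(X[:300], y[:300])
rmse = np.sqrt(np.mean((elm_predict(X[300:], W, b, beta) - y[300:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

    In the hybrid scheme described above, one such model would be fitted per decomposed sub-series and the forecasts summed.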

  3. A domain decomposition approach for full-field measurements based identification of local elastic parameters

    KAUST Repository

    Lubineau, Gilles

    2015-03-01

    We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.

  4. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    A measure of the Earth's gravity contains contributions from the solid Earth as well as climate-related phenomena, which cannot be easily distinguished both in time and space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis of the time series. We propose an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in the GRACE literature shows a very good agreement for what concerns trends and periodic signals. The approach is used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant, and so time-variable gravity data in Africa can be directly used to calibrate hydrological models.

  5. Feeding ducks, bacterial chemotaxis, and the Gini index

    Science.gov (United States)

    Peaudecerf, François J.; Goldstein, Raymond E.

    2015-08-01

    Classic experiments on the distribution of ducks around separated food sources found consistency with the "ideal free" distribution in which the local population is proportional to the local supply rate. Motivated by this experiment and others, we examine the analogous problem in the microbial world: the distribution of chemotactic bacteria around multiple nearby food sources. In contrast to the optimization of uptake rate that may hold at the level of a single cell in a spatially varying nutrient field, nutrient consumption by a population of chemotactic cells will modify the nutrient field, and the uptake rate will generally vary throughout the population. Through a simple model we study the distribution of resource uptake in the presence of chemotaxis, consumption, and diffusion of both bacteria and nutrients. Borrowing from the field of theoretical economics, we explore how the Gini index can be used as a means to quantify the inequalities of uptake. The redistributive effect of chemotaxis can lead to a phenomenon we term "chemotactic levelling," and the influence of these results on population fitness is briefly considered.
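
    The Gini index borrowed here follows the standard Lorenz-curve construction; a small self-contained helper (variable names invented) makes the two limiting cases concrete:

```python
import numpy as np

def lorenz_gini(uptake):
    """Lorenz curve and Gini index of a non-negative uptake distribution."""
    u = np.sort(np.asarray(uptake, dtype=float))
    cum = np.concatenate([[0.0], np.cumsum(u)]) / u.sum()
    p = np.linspace(0.0, 1.0, len(cum))              # population fraction
    # Gini = twice the area between the equality line and the Lorenz curve
    area = np.sum(0.5 * (cum[1:] + cum[:-1]) * np.diff(p))
    return p, cum, 1.0 - 2.0 * area

_, _, g_equal = lorenz_gini(np.ones(100))             # equal uptake everywhere
_, _, g_single = lorenz_gini(np.r_[np.zeros(99), 1])  # one cell takes everything
print(g_equal, g_single)  # ≈ 0.0 and 0.99
```

    "Chemotactic levelling" would show up as the Gini index of the uptake distribution decreasing over time.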

  6. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  7. Dominant pole placement with fractional order PID controllers: D-decomposition approach.

    Science.gov (United States)

    Mandić, Petar D; Šekara, Tomislav B; Lazarević, Mihailo P; Bošković, Marko

    2017-03-01

    Dominant pole placement is a useful technique for controlling high-order or time-delay systems with a low-order controller such as the PID controller. This paper solves this problem using the D-decomposition method, whose straightforward analytic procedure makes it extremely powerful and easy to apply. The technique is applicable to a wide range of transfer functions: with or without time delay, rational and non-rational ones, and those describing distributed-parameter systems. In order to control as many different processes as possible, a fractional-order PID controller is introduced as a generalization of the classical PID controller; as a consequence, it provides additional parameters for better adjusting system performance. The design method presented in this paper tunes the parameters of the PID and fractional PID controllers to obtain a good load disturbance response with a constraint on the maximum sensitivity and the sensitivity to measurement noise. A good set-point response is also one of the design goals of this technique. Numerous examples taken from the process industry are given, and the D-decomposition approach is compared with other PID optimization methods to show its effectiveness. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A demodulating approach based on local mean decomposition and its applications in mechanical fault diagnosis

    International Nuclear Information System (INIS)

    Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang

    2011-01-01

    Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical sense has become a key issue. Local mean decomposition (LMD) is a new kind of time–frequency analysis approach which can decompose a signal adaptively into a set of product function (PF) components. In this paper, an LMD-based modulation feature extraction method is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time–frequency representation (TFR), and modulation features can be extracted from the spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and an extrema-based IF processing method are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise is added. As a result, recommended critical SNRs for PF decomposition and IF extraction are given for practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capacity for modulation signal processing and is very suitable for failure detection in rotating machinery.
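
    For readers wanting to check the IA/IF notions numerically, the analytic-signal (Hilbert) route below is the conventional comparison point; LMD itself obtains IA and IF without the Hilbert transform, and the AM-FM test signal here is invented:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)

# amplitude- and frequency-modulated test signal with known IA and IF
ia_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)
if_true = 50.0 + 10.0 * np.sin(2 * np.pi * 2 * t)
x = ia_true * np.cos(2 * np.pi * np.cumsum(if_true) / fs)

analytic = hilbert(x)
ia = np.abs(analytic)                         # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_f = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency
```

    Away from the signal edges both estimates track the true modulations closely, which is the behavior a PF's envelope and unwrapped FM phase are meant to reproduce.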

  9. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determining the knock characteristics in SI engines. By adding uniformly distributed, finite white Gaussian noise, the EEMD can preserve signal continuity at different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine, via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head, is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), Wigner–Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)
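
    The ensemble trick itself is easy to isolate from the EMD machinery: add independent white noise, decompose each noisy copy, and average. Below, a crude two-band smoothing split stands in for the actual EMD sifting (everything here is invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)

def decompose(x):
    """Stand-in decomposition: fast/slow split via a moving average.
    A real EEMD would run EMD sifting here instead."""
    low = np.convolve(x, np.ones(11) / 11, mode="same")
    return np.array([x - low, low])

def eemd(x, n_trials=200, noise_std=0.1):
    """Ensemble step of EEMD: decompose noisy copies, average the components."""
    acc = np.zeros((2, len(x)))
    for _ in range(n_trials):
        acc += decompose(x + noise_std * rng.standard_normal(len(x)))
    return acc / n_trials

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 3 * t)
comps = eemd(x)   # the added noise averages out; components still sum to ~x
```

    The injected noise populates all scales uniformly, which is what prevents disparate scales from ending up in the same mode in true EEMD.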

  10. A data-driven decomposition approach to model aerodynamic forces on flapping airfoils

    Science.gov (United States)

    Raiola, Marco; Discetti, Stefano; Ianiro, Andrea

    2017-11-01

    In this work, we exploit a data-driven decomposition of experimental data from a flapping airfoil experiment with the aim of isolating the main contributions to the aerodynamic force and obtaining a phenomenological model. Experiments are carried out on a NACA 0012 airfoil in forward flight with both heaving and pitching motion. Velocity measurements of the near field are carried out with planar PIV, while force measurements are performed with a load cell. The phase-averaged velocity fields are transformed into the wing-fixed reference frame, allowing for a description of the field in a domain with fixed boundaries. The decomposition of the flow field is performed by means of POD applied to the velocity fluctuations and then extended to the phase-averaged force data by means of the extended POD approach. This choice is justified by the simple consideration that aerodynamic forces determine the largest contributions to the energetic balance in the flow field. Only the first 6 modes have a relevant contribution to the force. A clear relationship can be drawn between the force and the flow field modes. Moreover, the force modes are closely related to (yet slightly different from) the contributions of the classic potential models in the literature, allowing for their correction. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P.
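
    The POD / extended-POD pipeline reduces to a few lines of linear algebra on synthetic snapshot data (spatial patterns and force signal invented; the real study uses PIV fields and load-cell data):

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic phase-averaged snapshots: two spatial patterns with known
# temporal coefficients over one flapping cycle
n_space, n_phase = 200, 64
phase = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
U = (np.outer(rng.standard_normal(n_space), np.cos(phase))
     + 0.3 * np.outer(rng.standard_normal(n_space), np.sin(2 * phase)))

# POD of the fluctuations via SVD of the snapshot matrix
Um = U - U.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(Um, full_matrices=False)
a = s[:, None] * Vt                      # temporal POD coefficients

# Extended POD: project the synchronized force signal onto the same
# temporal coefficients to get the force carried by each flow mode
force = 2.0 * np.cos(phase) + 0.1 * rng.standard_normal(n_phase)
force_modes = (a[:2] @ force) / (a[:2] ** 2).sum(axis=1)
print(force_modes)  # the first (cos-like) mode dominates the force
```

    Here the force was constructed to follow the first flow mode, so its extended-POD coefficient dominates; in the experiment this projection is what links each of the 6 relevant flow modes to its share of the measured force.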

  11. A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data

    Directory of Open Access Journals (Sweden)

    Himmelreich Uwe

    2009-07-01

    Background: Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high-dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for an explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees, which are based on orthogonal splits in feature space. Results: We propose to combine the best of both approaches, and evaluated the joint use of a feature selection based on a recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, a feature selection using the Gini feature importance with a regularized classification by discriminant partial least squares regression performed as well as or better than a filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, and the direct application of the regularized classifiers on the full set of features. Conclusion: The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but – on an optimal subset of features – the regularized classifiers might be preferable over the random forest classifier, in spite of their limitation to model linear dependencies only. A feature selection based on Gini importance may therefore precede a regularized linear classification to identify this optimal subset of features, and to earn a double benefit of both dimensionality reduction and the elimination of noise from the classification task.
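
    A minimal version of the Gini-importance feature ranking (synthetic "spectra" with two informative channels; scikit-learn is assumed available, and the recursive elimination is collapsed to a single ranking for brevity):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

# synthetic spectra: 100 channels, only channels 10 and 40 carry class signal
n, p = 300, 100
X = rng.standard_normal((n, p))
y = (X[:, 10] + X[:, 40] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]   # Gini importance ranking
selected = ranking[:10]                                # keep the top channels
print(selected[:5])
```

    The selected channels would then feed a regularized linear classifier such as discriminant PLS, as the study recommends.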

  12. Periodic oscillatory solution in delayed competitive-cooperative neural networks: A decomposition approach

    International Nuclear Information System (INIS)

    Yuan Kun; Cao Jinde

    2006-01-01

    In this paper, the problems of exponential convergence and the exponential stability of the periodic solution for a general class of non-autonomous competitive-cooperative neural networks are analyzed via the decomposition approach. The idea is to divide the connection weights into inhibitory or excitatory types and thereby to embed a competitive-cooperative delayed neural network into an augmented cooperative delay system through a symmetric transformation. Some simple necessary and sufficient conditions are derived to ensure the componentwise exponential convergence and the exponential stability of the periodic solution of the considered neural networks. These results generalize and improve the previous works, and they are easy to check and apply in practice.

  13. Measuring resource inequalities. The concepts and methodology for an area-based Gini coefficient

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Although inequalities in income and expenditure are relatively well researched, comparatively little attention has been paid, to date, to inequalities in resource use. This is clearly a shortcoming when it comes to developing informed policies for sustainable consumption and social justice. This paper describes an indicator of inequality in resource use called the AR-Gini. The AR-Gini is an area-based measure of resource inequality that estimates inequalities between neighbourhoods with regard to the consumption of specific consumer goods. It is also capable of estimating inequalities in the emissions resulting from resource use, such as carbon dioxide emissions from energy use, and solid waste arisings from material resource use. The indicator is designed to be used as a basis for broadening the discussion concerning 'food deserts' to inequalities in other types of resource use. By estimating the AR-Gini for a wide range of goods and services we aim to enhance our understanding of resource inequalities and their drivers, to identify which resources have the highest inequalities, and to explore trends in inequalities. The paper describes the concepts underlying the construction of the AR-Gini and its methodology. Its use is illustrated by pilot applications (specifically, men's and boys' clothing, carpets, refrigerators/freezers and clothes washer/driers). The results illustrate that different levels of inequality are associated with different commodities. The paper concludes with a brief discussion of some possible policy implications of the AR-Gini. (author)
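The AR-Gini's full methodology is not reproduced here, but the underlying Gini computation over per-area consumption values can be sketched with the standard mean-difference form of the coefficient (the neighbourhood figures below are hypothetical):

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative values (mean-difference form)."""
    x = np.sort(np.asarray(x, dtype=float))   # ascending order
    n = x.size
    i = np.arange(1, n + 1)
    # G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n on sorted data
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

# Hypothetical per-neighbourhood consumption of some commodity:
equal  = gini([10, 10, 10, 10])   # perfectly even -> 0.0
skewed = gini([0, 0, 0, 40])      # one area consumes everything -> 0.75
```

The same function applies unchanged whether the per-area quantity is expenditure, embodied emissions, or waste arisings.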

  14. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI

    Energy Technology Data Exchange (ETDEWEB)

    Viehweger, Adrian; Sorge, Ina; Hirsch, Wolfgang [University Hospital Leipzig, Department of Pediatric Radiology, Leipzig (Germany); Riffert, Till; Dhital, Bibek; Knoesche, Thomas R.; Anwander, Alfred [Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (Germany); Stepan, Holger [University Leipzig, Department of Obstetrics, Leipzig (Germany)

    2014-10-15

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. To demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by diffusion-weighted imaging (DWI) on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150, and 1,600 s/mm². Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of those three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (Adjusted R² = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state. (orig.)

  15. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI

    International Nuclear Information System (INIS)

    Viehweger, Adrian; Sorge, Ina; Hirsch, Wolfgang; Riffert, Till; Dhital, Bibek; Knoesche, Thomas R.; Anwander, Alfred; Stepan, Holger

    2014-01-01

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. To demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by diffusion-weighted imaging (DWI) on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150, and 1,600 s/mm². Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of those three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (Adjusted R² = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state. (orig.)

  16. Quantifying the effect of plant growth on litter decomposition using a novel, triple-isotope label approach

    Science.gov (United States)

    Ernakovich, J. G.; Baldock, J.; Carter, T.; Davis, R. A.; Kalbitz, K.; Sanderman, J.; Farrell, M.

    2017-12-01

    Microbial degradation of plant detritus is now accepted as a major stabilizing process of organic matter in soils. Most of our understanding of the dynamics of decomposition comes from laboratory litter decay studies in the absence of plants, despite the fact that litter decays in the presence of plants in many native and managed systems. There is growing evidence that living plants significantly impact the degradation and stabilization of litter carbon (C) due to changes in the chemical and physical nature of soils in the rhizosphere. For example, mechanistic studies have observed stimulatory effects of root exudates on litter decomposition, and greenhouse studies have shown that living plants accelerate detrital decay. Despite this, we lack a quantitative understanding of the contribution of living plants to litter decomposition and how interactions of these two sources of C build soil organic matter (SOM). We used a novel triple-isotope approach to determine the effect of living plants on litter decomposition and C cycling. In the first stage of the experiment, we grew a temperate grass commonly used for forage, Poa labillardieri, in a continuously-labelled atmosphere of 14CO2 fertilized with K15NO3, such that the grass biomass was uniformly labelled with 14C and 15N. In the second stage, we constructed litter decomposition mesocosms with and without a living plant to test for the effect of a growing plant on litter decomposition. The 14C/15N litter was decomposed in a sandy clay loam while a temperate forage grass, Lolium perenne, grew in an atmosphere of enriched 13CO2. The fate of the litter-14C/15N and plant-13C was traced into soil mineral fractions and dissolved organic matter (DOM) over the course of nine weeks using four destructive harvests of the mesocosms. Our preliminary results suggest that living plants play a major role in the degradation of plant litter, as litter decomposition was greater, both in rate and absolute amount, for soil mesocosms

  17. Multi-country comparisons of energy performance: The index decomposition analysis approach

    International Nuclear Information System (INIS)

    Ang, B.W.; Xu, X.Y.; Su, Bin

    2015-01-01

    Index decomposition analysis (IDA) is a popular tool for studying changes in energy consumption over time in a country or region. This specific application of IDA, which may be called temporal decomposition analysis, has been extended by researchers and analysts to study variations in energy consumption or energy efficiency between countries or regions, i.e. spatial decomposition analysis. In spatial decomposition analysis, the main objective is often to understand the relative contributions of overall activity level, activity structure, and energy intensity in explaining differences in total energy consumption between two countries or regions. We review the literature of spatial decomposition analysis, investigate the methodological issues, and propose a spatial decomposition analysis framework for multi-region comparisons. A key feature of the proposed framework is that it passes the circularity test and provides consistent results for multi-region comparisons. A case study in which 30 regions in China are compared and ranked based on their performance in energy consumption is presented. - Highlights: • We conducted cross-regional comparisons of energy consumption using IDA. • We proposed two criteria for IDA method selection in spatial decomposition analysis. • We proposed a new model for regional comparison that passes the circularity test. • Features of the new model are illustrated using the data of 30 regions in China
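As an illustration of the additive LMDI form commonly used in such decompositions (the record does not state which IDA variant its framework adopts, and the regional data below are hypothetical), the gap between two regions decomposes exactly into activity, structure, and intensity contributions:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, the LMDI weighting function."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

# Energy use per sector i: E_i = Q * S_i * I_i (activity, structure, intensity).
# Hypothetical data for two regions A and B, two sectors each:
Q_a, S_a, I_a = 100.0, np.array([0.6, 0.4]), np.array([2.0, 1.0])
Q_b, S_b, I_b = 120.0, np.array([0.5, 0.5]), np.array([1.8, 1.2])

E_a = Q_a * S_a * I_a
E_b = Q_b * S_b * I_b
w = np.array([logmean(x, y) for x, y in zip(E_a, E_b)])

# Additive LMDI decomposition of the A - B consumption gap:
d_act = np.sum(w * np.log(Q_a / Q_b))   # overall activity level
d_str = np.sum(w * np.log(S_a / S_b))   # activity structure
d_int = np.sum(w * np.log(I_a / I_b))   # energy intensity
total = E_a.sum() - E_b.sum()
# d_act + d_str + d_int reproduces total exactly (perfect decomposition).
```

The circularity requirement discussed in the abstract concerns chaining such pairwise comparisons consistently across many regions, which this two-region sketch does not address.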

  18. Synthesis and Characterization of Sb2S3 Nanorods via Complex Decomposition Approach

    Directory of Open Access Journals (Sweden)

    Abdolali Alemi

    2011-01-01

    Full Text Available Based on the complex decomposition approach, a simple hydrothermal method has been developed for the synthesis of Sb2S3 nanorods with high yield in 24 h at 150 °C. The powder X-ray diffraction pattern shows that the Sb2S3 crystals belong to the orthorhombic phase with calculated lattice parameters a=1.120 nm, b=1.128 nm, and c=0.383 nm. The quantification of energy dispersive X-ray spectrometric analysis peaks gives an atomic ratio of 2:3 for Sb:S. TEM and SEM studies reveal that the appearance of the as-prepared Sb2S3 is rod-like which is composed of nanorods with the typical width of 30–160 nm and length of up to 6 μm. High-resolution transmission electron microscopic (HRTEM studies reveal that the Sb2S3 is oriented in the [10-1] growth direction. The band gap calculated from the absorption spectra is found to be 3.29 eV, indicating a considerable blue shift relative to the bulk. The formation mechanism of Sb2S3 nanostructures is proposed.

  19. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    Energy Technology Data Exchange (ETDEWEB)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.

  20. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current works in FC have focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs which provides more accurate change-point detection and state summarization.
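The change-point step — a subspace distance between low-rank approximations at consecutive time points — can be sketched on toy "connectivity" matrices. This is a simplified two-state example with plain matrices, not the paper's four-mode tensor pipeline; sizes, ranks, and the switch location are illustrative:

```python
import numpy as np

def subspace_distance(A, B, rank=2):
    """Distance between the leading left singular subspaces of A and B."""
    Ua, _, _ = np.linalg.svd(A, full_matrices=False)
    Ub, _, _ = np.linalg.svd(B, full_matrices=False)
    Ua, Ub = Ua[:, :rank], Ub[:, :rank]
    # 0 when the subspaces coincide, up to `rank` when orthogonal.
    return rank - np.linalg.norm(Ua.T @ Ub, 'fro') ** 2

rng = np.random.default_rng(0)
# Toy symmetric "connectivity" matrices over time, with a state switch at t = 5.
state1 = rng.standard_normal((16, 16)); state1 = state1 @ state1.T
state2 = rng.standard_normal((16, 16)); state2 = state2 @ state2.T
series = [state1 + 0.01 * rng.standard_normal((16, 16)) for _ in range(5)] + \
         [state2 + 0.01 * rng.standard_normal((16, 16)) for _ in range(5)]

dists = [subspace_distance(series[t], series[t + 1]) for t in range(9)]
change_point = int(np.argmax(dists)) + 1   # the largest jump flags the switch
```

Between matrices drawn from the same state the leading subspaces barely move, so the distance spikes only at the transition.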

  1. Partial information decomposition as a unified approach to the specification of neural goal functions.

    Science.gov (United States)

    Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A

    2017-03-01

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows one to compare these goal functions in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that

  2. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    Science.gov (United States)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
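To make the factorization itself concrete, here is a minimal alternating-least-squares CP decomposition of a 3-way tensor in NumPy. The paper's algorithms are gradient-based and isolate the scaling matrix; this plain unconstrained ALS is only an illustrative baseline:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Minimal 3-way CP decomposition by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Unfold T along each mode and solve a linear LS for that factor.
        A = np.reshape(T, (I, J * K)) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = np.reshape(np.moveaxis(T, 1, 0), (J, I * K)) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = np.reshape(np.moveaxis(T, 2, 0), (K, I * J)) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Build an exact rank-2 tensor and check that ALS recovers it.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

In a DS-CDMA setting the three modes would correspond to, e.g., antennas, symbols, and spreading chips, and the recovered factors to the channel, symbol, and code matrices up to scaling and permutation.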

  3. A simple correction to remove the bias of the gini coefficient due to grouping

    NARCIS (Netherlands)

    T.G.M. van Ourti (Tom); Ph. Clarke (Philip)

    2011-01-01

    We propose a first-order bias correction term for the Gini index to reduce the bias due to grouping. It depends only on the number of individuals in each group and is derived from a measurement error framework. We also provide a formula for the remaining second-order bias. Both
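The downward bias from grouping is easy to reproduce numerically. The sketch below uses hypothetical lognormal incomes and demonstrates the bias itself, not the paper's correction term:

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative values (mean-difference form)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2 * np.sum(i * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Grouping into deciles and using group means biases the Gini downward,
# because all within-group inequality is discarded.
deciles = np.array_split(np.sort(incomes), 10)
grouped = np.repeat([d.mean() for d in deciles], [len(d) for d in deciles])

g_full, g_grouped = gini(incomes), gini(grouped)
# g_grouped < g_full: this gap is what the first-order correction targets.
```

With fewer, coarser groups the gap widens, which is why the correction depends on the group sizes.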

  4. Hydrophobicity diversity in globular and nonglobular proteins measured with the Gini index.

    Science.gov (United States)

    Carugo, Oliviero

    2017-12-01

    Amino acids and their properties are variably distributed in proteins and different compositions determine all protein features, ranging from solubility to stability and functionality. Gini index, a tool to estimate distribution uniformity, is widely used in macroeconomics and has numerous statistical applications. Here, Gini index is used to analyze the distribution of hydrophobicity in proteins and to compare hydrophobicity distribution in globular and intrinsically disordered proteins. Based on the analysis of carefully selected high-quality data sets of proteins extracted from the Protein Data Bank (http://www.rcsb.org) and from the DisProt database (http://www.disprot.org/), it is observed that hydrophobicity is distributed in a more diverse way in intrinsically disordered proteins than in folded and soluble globular proteins. This correlates with the observation that the amino acid composition deviates from uniformity (estimated with the Shannon and the Gini-Simpson indices) more in intrinsically disordered proteins than in globular and soluble proteins. Although statistical tools like the Gini index have received little attention in molecular biology, these results show that they allow one to estimate sequence diversity and that they are useful to delineate trends that can hardly be described, otherwise, in simple and concise ways. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
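The two composition-uniformity indices mentioned, Shannon and Gini-Simpson, are straightforward to compute from an amino acid sequence. The two toy sequences below are illustrative, not drawn from the study's data sets:

```python
from collections import Counter
from math import log

def shannon(seq):
    """Shannon entropy of the residue composition (natural log)."""
    n = len(seq)
    return -sum((c / n) * log(c / n) for c in Counter(seq).values())

def gini_simpson(seq):
    """Gini-Simpson index: probability two residues drawn at random differ."""
    n = len(seq)
    return 1 - sum((c / n) ** 2 for c in Counter(seq).values())

uniform = "ACDEFGHIKLMNPQRSTVWY"   # each of the 20 residues exactly once
biased  = "AAAAAAAAAAAAAAAAAAAL"   # 19 alanines and one leucine

# A uniform composition maximizes both indices; a biased one shrinks them,
# mirroring the globular vs. intrinsically disordered contrast in the record.
```

For the uniform 20-residue composition the Gini-Simpson index is 1 - 20·(1/20)² = 0.95 and the Shannon entropy is ln 20.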

  5. Improvement of kurtosis-guided-grams via Gini index for bearing fault feature identification

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing

    2017-12-01

    A group of kurtosis-guided-grams, such as Kurtogram, Protrugram and SKRgram, is designed to detect the resonance band excited by faults based on the sparsity index. However, a common issue associated with these methods is that they tend to choose the frequency band with individual impulses rather than the desired fault impulses. This may be attributed to the selection of the sparsity index, kurtosis, which is vulnerable to impulsive noise. In this paper, to solve the problem, a sparsity index, called the Gini index, is introduced as an alternative estimator for the selection of the resonance band. It has been found that the sparsity index is still able to provide guidelines for the selection of the fault band without prior information of the fault period. More importantly, the Gini index has unique performance in random-impulse resistance, which renders the improved methods using the index free from the random impulses caused by external knocks on the bearing housing, or electromagnetic interference. By virtue of these advantages, the improved methods using the Gini index not only overcome the shortcomings but are more effective under harsh working conditions, even in complex structures. Finally, the comparison between the kurtosis-guided-grams and the improved methods using the Gini index is made using simulated and experimental data. The results verify the effectiveness of the improvement with both fixed-axis and planetary bearing fault signals.
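The key claim — kurtosis is inflated far more by a single random impulse than by a train of fault impulses, while the Gini index is comparatively robust — can be checked on synthetic signals. The Gini sparsity form below follows the common Hurley-Rickard definition and the signal parameters are illustrative, so details may differ from the paper's implementation:

```python
import numpy as np

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

def gini_sparsity(x):
    """Gini index of |x| as a sparsity measure in [0, 1)."""
    a = np.sort(np.abs(x))
    n = a.size
    i = np.arange(1, n + 1)
    return 1 - 2 * np.sum(a * (n - i + 0.5)) / (n * a.sum())

rng = np.random.default_rng(0)
noise = 0.3 * rng.standard_normal(2048)

periodic = noise.copy(); periodic[::128] += 5.0   # repetitive fault impulses
knock = noise.copy();    knock[700] += 50.0       # one accidental knock

k_ratio = kurtosis(knock) / kurtosis(periodic)          # kurtosis blows up
g_ratio = gini_sparsity(knock) / gini_sparsity(periodic)  # Gini stays comparable
```

A band-selection gram ranking by kurtosis would therefore favor the knock-contaminated band, while a Gini-guided ranking treats the two far more evenly.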

  6. Lorenz curve and Gini coefficient reveal hot spots and hot moments for nitrous oxide emissions

    Science.gov (United States)

    Identifying hot spots and hot moments of N2O emissions in the landscape is critical for monitoring and mitigating the emission of this powerful greenhouse gas. We propose a novel use of the Lorenz curve and Gini coefficient (G) to quantify the heterogeneous distribution of N2O emissions from a lands...
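A sketch of the Lorenz-curve construction on hypothetical flux data, showing how a few sampling locations ("hot spots") can account for most of the emissions:

```python
import numpy as np

def lorenz(x):
    """Cumulative-share ordinates of the Lorenz curve for values x."""
    x = np.sort(np.asarray(x, float))      # ascending order
    cum = np.cumsum(x) / x.sum()
    return np.insert(cum, 0, 0.0)          # curve starts at (0, 0)

# Hypothetical chamber-level N2O fluxes: a few locations dominate.
flux = np.array([0.1, 0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 7.0])
L = lorenz(flux)
# Share of total emissions from the top 25% of locations (last 2 of 8):
top_share = 1.0 - L[6]
```

Here the top quarter of locations emits 80% of the total; the Gini coefficient summarizes the gap between this Lorenz curve and the diagonal of perfect evenness, and the same construction applies to "hot moments" when x indexes sampling times instead of locations.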

  7. Determinates of clustering across America's national parks: An application of the Gini coefficients

    Science.gov (United States)

    R. Geoffrey Lacher; Matthew T.J. Brownlee

    2012-01-01

    The changes in the clustering of visitation across National Park Service (NPS) sites have not been well documented or widely studied. This paper investigates the changes in the dispersion of visitation across NPS sites with the Gini coefficient, a popular measure of inequality used primarily in the field of economics. To calculate the degree of clustering nationally,...

  8. An interpretation of the Gini coefficient in a Stiglitz two-type optimal tax problem

    DEFF Research Database (Denmark)

    Rasmussen, Bo Sandemann

    2015-01-01

    In a two-type Stiglitz (1982) model of optimal non-linear taxation it is shown that when the utility function relating to consumption is logarithmic the shadow price of the incentive constraint relating to the optimal tax problem exactly equals the Gini coefficient of the second-best optimal income...

  9. A novel thermal decomposition approach for the synthesis of silica-iron oxide core–shell nanoparticles

    International Nuclear Information System (INIS)

    Kishore, P.N.R.; Jeevanandam, P.

    2012-01-01

    Highlights: ► Silica-iron oxide core–shell nanoparticles have been synthesized by a novel thermal decomposition approach. ► The silica-iron oxide core–shell nanoparticles are superparamagnetic at room temperature. ► The silica-iron oxide core–shell nanoparticles serve as good photocatalyst for the degradation of Rhodamine B. - Abstract: A simple thermal decomposition approach for the synthesis of magnetic nanoparticles consisting of silica as core and iron oxide nanoparticles as shell has been reported. The iron oxide nanoparticles were deposited on the silica spheres (mean diameter = 244 ± 13 nm) by the thermal decomposition of iron(III) acetylacetonate, in diphenyl ether, in the presence of SiO2. The core–shell nanoparticles were characterized by X-ray diffraction, infrared spectroscopy, field emission-scanning electron microscopy coupled with energy dispersive X-ray analysis, transmission electron microscopy, diffuse reflectance spectroscopy, and magnetic measurements. The results confirm the presence of iron oxide nanoparticles on the silica core. The core–shell nanoparticles are superparamagnetic at room temperature indicating the presence of iron oxide nanoparticles on silica. The core–shell nanoparticles have been demonstrated as good photocatalyst for the degradation of Rhodamine B.

  10. Decomposition of environmentally persistent perfluorooctanoic acid in water by photochemical approaches.

    Science.gov (United States)

    Hori, Hisao; Hayakawa, Etsuko; Einaga, Hisahiro; Kutsuna, Shuzo; Koike, Kazuhide; Ibusuki, Takashi; Kiatagawa, Hiroshi; Arakawa, Ryuichi

    2004-11-15

    The decomposition of persistent and bioaccumulative perfluorooctanoic acid (PFOA) in water by UV-visible light irradiation, by H2O2 with UV-visible light irradiation, and by a tungstic heteropolyacid photocatalyst was examined to develop a technique to counteract stationary sources of PFOA. Direct photolysis proceeded slowly to produce CO2, F-, and short-chain perfluorocarboxylic acids. Compared to the direct photolysis, H2O2 was less effective in PFOA decomposition. On the other hand, the heteropolyacid photocatalyst led to efficient PFOA decomposition and the production of F- ions and CO2. The photocatalyst also suppressed the accumulation of short-chain perfluorocarboxylic acids in the reaction solution. PFOA in the concentrations of 0.34-3.35 mM, typical of those in wastewaters after an emulsifying process in fluoropolymer manufacture, was completely decomposed by the catalyst within 24 h of irradiation from a 200-W xenon-mercury lamp, with no accompanying catalyst degradation, permitting the catalyst to be reused in consecutive runs. Gas chromatography/mass spectrometry (GC/MS) measurements showed no trace of environmentally undesirable species such as CF4, which has a very high global-warming potential. When the (initial PFOA)/(initial catalyst) molar ratio was 10:1, the turnover number for PFOA decomposition reached 4.33 over 24 h of irradiation.

  11. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  12. A 3D domain decomposition approach for the identification of spatially varying elastic material parameters

    KAUST Repository

    Moussawi, Ali

    2015-02-24

    Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.

  13. Decomposition of Changes in Earnings Inequality in China: A Distributional Approach

    OpenAIRE

    Chi, Wei; Li, Bo; Yu, Qiumei

    2007-01-01

    Using the nationwide household data, this study examines the changes in the Chinese urban income distributions from 1987 to 1996 and from 1996 to 2004, and investigates the causes of these changes. The Oaxaca-Blinder decomposition method is applied to decomposing the mean earnings increases, and the Firpo-Fortin-Lemieux method based upon a recentered influence function is used to decompose the changes in the income distribution and the inequality measures such as the variance and the 10-90 r...

  14. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    Science.gov (United States)

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  15. Mechanistic approach for the kinetics of the decomposition of nitrous oxide over calcined hydrotalcites

    Energy Technology Data Exchange (ETDEWEB)

    Dandl, H.; Emig, G. [Lehrstuhl fuer Technische Chemie I, Erlangen (Germany)

    1998-03-27

A highly active catalyst for the decomposition of N{sub 2}O was prepared by the thermal treatment of CoLaAl-hydrotalcite. For this catalyst the reaction rate was determined at various partial pressures of N{sub 2}O, O{sub 2} and H{sub 2}O in a temperature range from 573 K to 823 K. The kinetic simulation resulted in a mechanistic model. The activation energies and rate coefficients were estimated for the main steps of the reaction.

  16. Employing the Gini coefficient to measure participation inequality in treatment-focused Digital Health Social Networks.

    Science.gov (United States)

    van Mierlo, Trevor; Hyatt, Douglas; Ching, Andrew T

    2016-01-01

Digital Health Social Networks (DHSNs) are common; however, there are few metrics that can be used to identify participation inequality. The objective of this study was to investigate whether the Gini coefficient, an economic measure of statistical dispersion traditionally used to measure income inequality, could be employed to measure DHSN inequality. Quarterly Gini coefficients were derived from four long-standing DHSNs. The combined data set included 625,736 posts that were generated from 15,181 actors over 18,671 days. The range of actors (8-2323), posts (29-28,684), and Gini coefficients (0.15-0.37) varied. Pearson correlations indicated statistically significant associations between the number of actors and the number of posts (0.527-0.835). Gini coefficients were lower in the addiction networks (0.619 and 0.276) than in the other networks (t = -4.305 and -5.934), indicating differences in network engagement. Further, mixed-methods research investigating quantitative performance metrics is required.
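For readers unfamiliar with the metric, the Gini coefficient in this setting can be computed directly from per-actor post counts. A minimal sketch (the counts here are invented, not drawn from the four networks studied):

```python
import numpy as np

def gini(counts):
    """Gini coefficient of a non-negative array: 0 = perfect equality."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    # Mean-difference form: G = sum((2i - n - 1) * x_i) / (n * sum(x))
    return float(np.sum((2 * i - n - 1) * x) / (n * x.sum()))

posts = [1, 1, 2, 3, 50]          # invented posts-per-actor counts
print(round(gini(posts), 3))      # prints 0.702
print(gini([5, 5, 5]))            # equal participation -> 0.0
```

A handful of highly active actors dominating the post count pushes the coefficient toward 1, which is exactly the participation inequality the study measures.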

  17. Single step thermal decomposition approach to prepare supported γ-Fe2O3 nanoparticles

    International Nuclear Information System (INIS)

    Sharma, Geetu; Jeevanandam, P.

    2012-01-01

γ-Fe2O3 nanoparticles supported on MgO (macro-crystalline and nanocrystalline) were prepared by an easy single step thermal decomposition method. Thermal decomposition of iron acetylacetonate in diphenyl ether, in the presence of the supports followed by calcination, leads to iron oxide nanoparticles supported on MgO. The X-ray diffraction results indicate the stability of the γ-Fe2O3 phase on MgO (macro-crystalline and nanocrystalline) up to 1150 °C. The scanning electron microscopy images show that the supported iron oxide nanoparticles are agglomerated, while the energy dispersive X-ray analysis indicates the presence of iron, magnesium and oxygen in the samples. Transmission electron microscopy images indicate the presence of smaller γ-Fe2O3 nanoparticles on nanocrystalline MgO. The magnetic properties of the supported magnetic nanoparticles at various calcination temperatures (350-1150 °C) were studied using a superconducting quantum interference device, which indicates superparamagnetic behavior.

  18. Approaches to understanding the semi-stable phase of litter decomposition

    Science.gov (United States)

    Preston, C. M.; Trofymow, J. A.

    2012-12-01

The slowing or even apparent cessation of litter decomposition with time has been widely observed, but causes remain poorly understood. We examine the question in part through data from CIDET (the Canadian Intersite Decomposition Experiment) for 10 foliar litters at one site with a mean annual temperature (MAT) of 6.7 °C. The initial rapid C loss in the first year for some litters is followed by a second phase (1-7 y) with decay rates from 0.21-0.79/y, influenced by initial litter chemistry, especially the ratio AUR/N (acid-unhydrolyzable residue; negative). By contrast, 10-23% of the initial litter C mass entered the semi-stable decay phase (>7 y) with modeled decay rates of 0.0021-0.0035/y. The slowing and convergence of k values was similar to trends in chemical composition. From 7 to 12 y, concentrations of Ca, Mg, K, P, Mn and Zn generally declined and became more similar among litters, and total N converged around 20 mg/g. Non-polar and water-soluble extractables and acid solubles continued to decrease slowly and AUR to increase. Solid-state C-13 NMR showed continuing slight declines in O- and di-O-alkyl C and increases in alkyl, methoxyl, aryl and carboxyl C. CIDET and other studies now clearly show that lignin is not selectively preserved, and that AUR is not a measure of foliar lignin as it includes components from condensed tannins and long-chain alkyl C. Interaction with soil minerals strongly enhances soil C stabilization, but what slows decomposition so much in organic horizons? The role of inherent "chemical recalcitrance" or possible formation of new covalent bonds is hotly debated in soil science, but increasingly complex or random molecular structures no doubt present greater challenges to enzymes. A relevant observation from soils and geochemistry is that decomposition results in a decline in individual compounds that can be identified from chemical analysis and a corresponding increase in the "molecularly uncharacterizable component" (MUC). Long-term declines in Ca, Mg, K

  19. WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.

    Science.gov (United States)

    Debnath, Avijit; Bhattacharjee, Nairita

    2018-05-01

Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempted to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The present study was based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months. Data were from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identified unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.

  20. A Dual Decomposition Approach to Partial Crosstalk Cancelation in a Multiuser DMT-xDSL Environment

    Directory of Open Access Journals (Sweden)

    Verlinden Jan

    2007-01-01

In modern DSL systems, far-end crosstalk is a major source of performance degradation. Crosstalk cancelation schemes have been proposed to mitigate the effect of crosstalk. However, the complexity of crosstalk cancelation grows with the square of the number of lines in the binder. Fortunately, most of the crosstalk originates from a limited number of lines and, for DMT-based xDSL systems, on a limited number of tones. As a result, a fraction of the complexity of full crosstalk cancelation suffices to cancel most of the crosstalk. The challenge is then to determine which crosstalk to cancel on which tones, given a complexity constraint. This paper presents an algorithm based on a dual decomposition to optimally solve this problem. The proposed algorithm naturally incorporates rate constraints and the complexity of the algorithm compares favorably to a known resource allocation algorithm, where a multiuser extension is made to incorporate the rate constraints.
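The core dual-decomposition idea, pricing the shared complexity budget with a Lagrange multiplier so the selection problem decouples per tone, can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the gain matrix, unit tap costs, and bisection on the multiplier are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, J = 64, 8                       # tones and interfering lines (made-up sizes)
g = rng.exponential(1.0, (K, J))   # hypothetical rate gain of canceling line j on tone k
budget = 100                       # total number of canceler taps allowed

def selection(lam):
    """Per-tone subproblem: cancel crosstalker j on tone k iff its gain beats the price."""
    return g > lam

# Bisection on the dual variable lam (the "price" of one unit of complexity).
lo, hi = 0.0, float(g.max())
for _ in range(60):
    lam = 0.5 * (lo + hi)
    if selection(lam).sum() > budget:
        lo = lam                   # too many taps selected: raise the price
    else:
        hi = lam                   # feasible: try a lower price
sel = selection(hi)
print(int(sel.sum()), "taps selected; total rate gain", round(float(g[sel].sum()), 2))
```

Because each (tone, crosstalker) pair is kept or dropped independently once the price is fixed, the coupled budget problem reduces to K trivial per-tone decisions plus a one-dimensional search over the multiplier.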

  1. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    Science.gov (United States)

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.

  2. Healthcare Expenditures Associated with Depression Among Individuals with Osteoarthritis: Post-Regression Linear Decomposition Approach.

    Science.gov (United States)

    Agarwal, Parul; Sambamoorthi, Usha

    2015-12-01

Depression is common among individuals with osteoarthritis and leads to an increased healthcare burden. The objective of this study was to examine excess total healthcare expenditures associated with depression among individuals with osteoarthritis in the US. Adults with self-reported osteoarthritis (n = 1881) were identified using data from the 2010 Medical Expenditure Panel Survey (MEPS). Among those with osteoarthritis, chi-square tests and ordinary least squares (OLS) regressions were used to examine differences in healthcare expenditures between those with and without depression. A post-regression linear decomposition technique was used to estimate the relative contribution of the different constructs of Andersen's behavioral model, i.e., predisposing, enabling, need, personal healthcare practices, and external environment factors, to the excess expenditures associated with depression among individuals with osteoarthritis. All analyses accounted for the complex survey design of MEPS. Depression coexisted among 20.6 % of adults with osteoarthritis. The average total healthcare expenditures were $13,684 among adults with depression compared to $9284 among those without depression. Multivariable OLS regression revealed that adults with depression had 38.8 % higher healthcare expenditures than those without. Post-regression linear decomposition analysis indicated that 50 % of the difference in expenditures between adults with and without depression can be explained by differences in need factors. Among individuals with coexisting osteoarthritis and depression, excess healthcare expenditures associated with depression were mainly due to comorbid anxiety, chronic conditions and poor health status. These expenditures may potentially be reduced by providing timely intervention for need factors or by providing care under a collaborative care model.
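A post-regression linear decomposition attributes a mean outcome gap to covariates by weighting group differences in covariate means with pooled regression coefficients. A rough sketch on synthetic data (not MEPS; all variable names and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Invented covariates: a depression flag plus two "need" factors correlated with it.
dep = rng.integers(0, 2, n)
anxiety = rng.binomial(1, 0.2 + 0.4 * dep)       # comorbid anxiety indicator
chronic = rng.poisson(1.0 + 1.5 * dep)           # chronic-condition count
spend = 5000 + 3000 * anxiety + 2000 * chronic + rng.normal(0, 500, n)

# Pooled OLS of expenditures on the need factors.
X = np.column_stack([np.ones(n), anxiety, chronic])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)

# Decomposition: each covariate's contribution to the mean expenditure gap
# is its coefficient times the difference in group means of that covariate.
gap = spend[dep == 1].mean() - spend[dep == 0].mean()
parts = beta[1:] * (X[dep == 1, 1:].mean(axis=0) - X[dep == 0, 1:].mean(axis=0))
print(round(gap), np.round(parts), round(parts.sum() / gap, 2))
```

In this toy setup depression raises spending only through the need factors, so the covariate contributions account for essentially the whole gap, mirroring the study's finding that need factors explain much of the excess expenditure.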

  3. Understanding determinants of unequal distribution of stillbirth in Tehran, Iran: a concentration index decomposition approach.

    Science.gov (United States)

    Almasi-Hashiani, Amir; Sepidarkish, Mahdi; Safiri, Saeid; Khedmati Morasae, Esmaeil; Shadi, Yahya; Omani-Samani, Reza

    2017-05-17

The present inquiry set out to determine the economic inequality in history of stillbirth and to understand the determinants of the unequal distribution of stillbirth in Tehran, Iran. A population-based cross-sectional study was conducted on 5170 pregnancies in Tehran, Iran, in 2015. Principal component analysis (PCA) was applied to measure asset-based economic status. The concentration index was used to measure socioeconomic inequality in stillbirth and was then decomposed into its determinants. The concentration index and its 95% CI for stillbirth was -0.121 (-0.235 to -0.002). Decomposition of the concentration index showed that mother's education (50%), mother's occupation (30%), economic status (26%) and father's age (12%) had the highest positive contributions to measured inequality in stillbirth history in Tehran. Mother's age (17%) had the highest negative contribution to inequality. Stillbirth is unequally distributed among Iranian women and is mostly concentrated among people of low economic status. Mother-related factors had the highest positive and negative contributions to inequality, highlighting specific interventions for mothers to redress inequality.
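The concentration index referred to here is conventionally computed as twice the covariance between the health variable and the fractional wealth rank, divided by the mean of the health variable. A small illustrative sketch (toy data, not the Tehran sample):

```python
import numpy as np

def concentration_index(health, wealth):
    """2*cov(h, fractional wealth rank)/mean(h); negative values indicate the
    outcome is concentrated among the poor (Wagstaff-style convention)."""
    order = np.argsort(wealth)                # poorest to richest
    h = np.asarray(health, float)[order]
    n = h.size
    r = (np.arange(1, n + 1) - 0.5) / n       # fractional rank
    return float(2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean())

# Toy data: a binary stillbirth indicator concentrated among low-wealth households.
wealth = [1, 2, 3, 4, 5, 6, 7, 8]
stillbirth = [1, 1, 0, 1, 0, 0, 0, 0]
print(round(concentration_index(stillbirth, wealth), 3))   # prints -0.542
```

The negative sign matches the study's -0.121: the outcome is disproportionately borne by households at the bottom of the wealth ranking.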

  4. A new approach for crude oil price analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shou-Yang; Lai, K.K.

    2008-01-01

The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most of them fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent, interpretable intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or other market activities, the effect of a shock of a significant event, and a long-term trend. Finally, EEMD is shown to be a vital technique for crude oil price analysis.
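The fine-to-coarse reconstruction step can be sketched as follows, assuming the intrinsic mode functions (IMFs) have already been extracted by an EMD/EEMD routine. Here the "IMFs" are synthetic signals and the significance check is a simple one-sample t statistic, both assumptions of this sketch:

```python
import numpy as np

def fine_to_coarse(imfs, residue):
    """Compose IMFs (ordered fine to coarse) into a high-frequency fluctuating
    part, a slowly varying part, and a trend. The split point is the first
    cumulative sum whose mean departs significantly from zero (one-sample t)."""
    n = imfs[0].size
    split = len(imfs)
    csum = np.zeros(n)
    for i, imf in enumerate(imfs):
        csum += imf
        tstat = csum.mean() / (csum.std(ddof=1) / np.sqrt(n))
        if abs(tstat) > 1.96:          # ~5% level for large n
            split = i
            break
    high = np.sum(imfs[:split], axis=0) if split > 0 else np.zeros(n)
    low = np.sum(imfs[split:], axis=0) if split < len(imfs) else np.zeros(n)
    return high, low, residue          # the residue plays the role of the trend

# Synthetic "IMFs": a fast zero-mean oscillation and a slower component with an offset.
t = np.linspace(0, 10, 500)
imfs = [np.sin(8 * np.pi * t), 2 + np.sin(0.4 * np.pi * t)]
residue = 0.5 * t
high, low, trend = fine_to_coarse(imfs, residue)
print(round(high.mean(), 2), round(low.mean(), 2))
```

The zero-mean fast mode lands in the fluctuating component, while the offset mode triggers the significance test and joins the slowly varying part, which is the mechanism behind the three economic components identified in the abstract.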

  5. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  6. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Science.gov (United States)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the usage of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale stochastic crop-water production functions (SCWPF) for different crops are derived which can be used as a basic tool for assessing the impact of climate variability on risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application on a real-world case study for the South Al-Batinah region in the Sultanate of Oman where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  7. Tracking European Union CO2 emissions through LMDI (logarithmic-mean Divisia index) decomposition. The activity revaluation approach

    International Nuclear Information System (INIS)

    Fernández González, P.; Landajo, M.; Presno, M.J.

    2014-01-01

Aggregate CO2 emitted to the atmosphere from a given region can be determined by monitoring several distinctive components. In this paper we propose five decomposition factors: population, production per capita, fuel mix, carbonization and energy intensity. The latter is commonly used as a proxy for energy efficiency. The problem arises when defining this concept, as there is little consensus among authors on how to measure energy intensity (using either physical or monetary activity indicators). In this paper we analyse several measurement possibilities, presenting and developing a number of approaches based on the LMDI (logarithmic-mean Divisia index) methodology, to decompose changes in aggregate CO2 emissions. The resulting methodologies are the so-called MB (monetary based), IR (intensity refactorization) and AR (activity revaluation) approaches. Then, we apply these methodologies to analyse changes in carbon dioxide emissions in the EU (European Union) power sector, both as a whole and at country level. Our findings show the strong impact of changes in the energy mix factor on aggregate CO2 emission levels, although a number of differences among countries are detected which lead to specific environmental recommendations. - Highlights: • New Divisia-based decomposition analysis removing price influence is presented. • We apply refined methodologies to decompose changes in CO2 emissions in the EU (European Union). • Changes in fuel mix appear as the main driving force in CO2 emissions reduction. • GDPpc growth becomes a direct contributor to emissions drop, especially in Western EU. • Innovation and technical change: less helpful tools when eliminating the price effect.
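LMDI decomposition rests on the logarithmic mean, which makes the factor contributions sum exactly to the total emissions change. A minimal sketch with an illustrative four-factor Kaya-style identity and made-up numbers (the paper itself works with five factors and country-level EU data):

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

# Illustrative identity: emissions C = P * (G/P) * (E/G) * (C/E).
factors0 = {"P": 100.0, "G/P": 20.0, "E/G": 0.50, "C/E": 0.30}   # base year
factors1 = {"P": 105.0, "G/P": 24.0, "E/G": 0.42, "C/E": 0.28}   # end year

C0 = np.prod(list(factors0.values()))
C1 = np.prod(list(factors1.values()))
L = logmean(C1, C0)

# LMDI-I: each factor's additive effect is L(C1, C0) * ln(factor ratio).
effects = {k: L * np.log(factors1[k] / factors0[k]) for k in factors0}
print({k: round(v, 1) for k, v in effects.items()})
print(round(sum(effects.values()), 3), "==", round(C1 - C0, 3))  # decomposition is exact
```

The exact additivity (no residual term) is the main reason the LMDI family is preferred for this kind of index decomposition.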

  8. Application of the Gini correlation coefficient to infer regulatory relationships in transcriptome analysis.

    Science.gov (United States)

    Ma, Chuang; Wang, Xiangfeng

    2012-09-01

One of the computational challenges in plant systems biology is to accurately infer transcriptional regulation relationships based on correlation analyses of gene expression patterns. Despite several correlation methods that are applied in biology to analyze microarray data, concerns regarding the compatibility of these methods with the gene expression data profiled by high-throughput RNA transcriptome sequencing (RNA-Seq) technology have been raised. These concerns are mainly due to the fact that the distribution of read counts in RNA-Seq experiments is different from that of fluorescence intensities in microarray experiments. Therefore, a comprehensive evaluation of the existing correlation methods and, if necessary, introduction of novel methods into biology is appropriate. In this study, we compared four existing correlation methods used in microarray analysis and one novel method called the Gini correlation coefficient on previously published microarray-based and sequencing-based gene expression data in Arabidopsis (Arabidopsis thaliana) and maize (Zea mays). The comparisons were performed on more than 11,000 regulatory relationships in Arabidopsis, including 8,929 pairs of transcription factors and target genes. Our analyses pinpointed the strengths and weaknesses of each method and indicated that the Gini correlation can compensate for the shortcomings of the Pearson correlation, the Spearman correlation, the Kendall correlation, and the Tukey's biweight correlation. The Gini correlation method, together with the other four methods evaluated in this study, was implemented as an R package named rsgcc that can be utilized as an alternative option for biologists to perform clustering analyses of gene expression patterns or transcriptional network analyses.
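The Gini correlation mixes rank information from one variable with the raw values of the other: cov(x, rank(y)) / cov(x, rank(x)). A minimal sketch, simplified relative to the rsgcc implementation (ties are ignored here):

```python
import numpy as np

def gini_corr(x, y):
    """Gini correlation of x with y: cov(x, rank(y)) / cov(x, rank(x)).
    Asymmetric in general; ties are ignored in this simplified sketch."""
    x = np.asarray(x, float)
    rx = np.argsort(np.argsort(x))       # ranks 0..n-1
    ry = np.argsort(np.argsort(np.asarray(y, float)))
    return float(np.cov(x, ry)[0, 1] / np.cov(x, rx)[0, 1])

x = np.array([1.0, 2.0, 3.0, 4.0, 50.0])   # heavy-tailed, RNA-Seq-like values
print(gini_corr(x, x ** 2))   # monotone increasing relation -> 1.0
print(gini_corr(x, -x))       # monotone decreasing relation -> -1.0
```

Using ranks on one side damps the influence of extreme counts (a weakness of Pearson on RNA-Seq data) while retaining more magnitude information than a fully rank-based measure like Spearman.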

  10. Distribution of physicians and hospital beds based on Gini coefficient and Lorenz curve: A national survey

    Directory of Open Access Journals (Sweden)

    Satar Rezaei

    2016-06-01

Introduction: Inequality is prevalent in all sectors, particularly in the distribution of and access to resources in the health sector. The aim of the current study was to investigate the distribution of physicians and hospital beds in Iran in 2001, 2006 and 2011. Methods: This retrospective, cross-sectional study evaluated the distribution of physicians and hospital beds in 2001, 2006 and 2011 using the Gini coefficient and Lorenz curve. The required data, including the number of physicians (general practitioners and specialists), the number of hospital beds and the number of hospitalized patients, were obtained from the statistical yearbook of the Iranian Statistical Center (ISC). The data analysis was performed with DASP software. Results: The Gini coefficients for physicians and hospital beds based on population in 2001 were 0.19 and 0.16, and based on hospitalized patients were 0.48 and 0.37, respectively. In 2006, these values were found to be 0.18 and 0.15 based on population, and 0.21 and 0.21 based on hospitalized patients, respectively. In 2011, however, the Gini coefficients were reported to be 0.16 and 0.13 based on population, and 0.47 and 0.37 based on hospitalized patients, respectively. Although distribution status had improved in 2011 compared with 2001 in terms of population and number of hospitalized patients, there was more inequality in distribution based on the number of hospitalized patients than based on population. Conclusion: This study indicated that inequality in the distribution of physicians and hospital beds declined in 2011 compared with 2001. This distribution was based on the population, so it is suggested that, in allocating resources, health policymakers consider such need indices as the pattern of diseases and illness-prone areas, the number of inpatients, and mortality.
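The Lorenz curve underlying these Gini estimates plots cumulative population share against cumulative resource share; the Gini coefficient is twice the area between the curve and the diagonal of equality. A small sketch with invented provincial counts:

```python
import numpy as np

def lorenz(counts):
    """Return (cumulative population share, cumulative resource share)."""
    x = np.sort(np.asarray(counts, float))
    pop = np.arange(1, x.size + 1) / x.size
    share = np.cumsum(x) / x.sum()
    return np.insert(pop, 0, 0.0), np.insert(share, 0, 0.0)

physicians = [5, 10, 15, 20, 50]          # invented counts per province
p, L = lorenz(physicians)
# Gini = 1 - 2 * (area under the Lorenz curve), via the trapezoid rule.
gini = 1.0 - np.sum((p[1:] - p[:-1]) * (L[1:] + L[:-1]))
print(np.round(L, 2), round(float(gini), 2))   # Gini = 0.4
```

Replacing the population shares on the x-axis with hospitalized-patient shares is what produces the study's second, need-based set of Gini coefficients.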

  11. An Interpretation of the Gini Coefficient in a Stiglitz Two-Type Optimal Tax Problem

    DEFF Research Database (Denmark)

    Rasmussen, Bo Sandemann

    2014-01-01

In a two-type Stiglitz (1982) model of optimal non-linear taxation it is shown that when the utility function relating to consumption is logarithmic the shadow price of the incentive constraint relating to the optimal tax problem exactly equals the Gini coefficient of the second-best optimal income distribution of a utilitarian government. In this sense the optimal degree of income redistribution is determined by the severity of the incentive problem facing the policy-maker. Extensions of the benchmark model to allow for more general functional forms of the utility function and for more than two types

  12. Measuring party nationalisation: A new Gini-based indicator that corrects for the number of units

    DEFF Research Database (Denmark)

    Bochsler, Daniel

    2010-01-01

The study of the territorial distribution of votes in elections has become an important field of political party research in recent years. Quantitative studies on the homogeneity of votes and turnout employ different indicators of territorial variance, but despite important progress in measurement, many of them are sensitive to the size and number of political parties or electoral districts. This article proposes a new 'standardised party nationalisation score', which is based on the Gini coefficient of inequalities in distribution. Different from previous indicators, the standardised party

  13. A solution approach to the ROADEF/EURO 2010 challenge based on Benders' Decomposition

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

them satisfy the constraints not part of the mixed integer program. A number of experiments are performed on the available benchmark instances. These experiments show that the approach is competitive on the smaller instances, but not on the larger ones. We believe the exact approach gives insight into the problem and additionally makes it possible to find lower bounds on the problem, which is typically not the case for the competing heuristics.

  14. Gleer: A Novel Gini-Based Energy Balancing Scheme for Mobile Botnet Retopology

    Directory of Open Access Journals (Sweden)

    Yichuan Wang

    2018-01-01

Mobile botnets have recently evolved due to the rapid growth of smartphone technologies. Unlike legacy botnets, mobile devices are characterized by limited power capacity, limited calculation capabilities, and wide communication methods. As such, the logical topology structure and communication mode have to be redesigned for mobile botnets to narrow the energy gap and slow the loss of nodes. In this paper, we design a novel Gini-based energy balancing scheme (Gleer) for the atomic network, which is a fundamental component of the heterogeneous multilayer mobile botnet. Firstly, for each operation cycle, we utilize a dynamic energy threshold to categorize the atomic network into two groups. Then, the Gini coefficient is introduced to estimate the botnet energy gap and to regulate the probability for each node to be picked as a region C&C server. Experimental results indicate that our proposed method can effectively prolong the botnet lifetime and prevent the reduction of network size. Meanwhile, the stealthiness of the botnet with the Gleer scheme is analyzed from the users' perspective, and results show that the proposed scheme works well in reducing users' detection awareness.

  15. Spatial and Temporal Analysis of Rainfall Concentration Using the Gini Index and PCI

    Directory of Open Access Journals (Sweden)

    Claudia Sangüesa

    2018-01-01

This study aims to determine whether there is variation in precipitation concentration in Chile. We analyzed daily and monthly records from 89 pluviometric stations over the period 1970–2016, distributed between 29°12′ S and 39°30′ S. This area was divided into two climatic zones: arid–semiarid and humid–subhumid. For each station, the Gini coefficient or Gini Index (GI), the precipitation concentration index (PCI), and the maximum annual precipitation intensity for a 24-h duration were calculated. These series of annual values were analyzed with the Mann–Kendall test at the 5% error level. Overall, positive trends in the GI are present in both zones, although most were not found to be significant. In the case of the PCI, positive trends are present only in the arid–semiarid zone; in the humid–subhumid zone, negative trends were mostly observed, although none of them were significant. Although no significant changes are evident in all indices, the particular case of the GI in the humid–subhumid zone stands out, where mostly positive trends were found (91.1%, of which 35.6% were significant). This would indicate that precipitation is increasingly likely to be concentrated on a daily scale.
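The PCI is conventionally defined (following Oliver) as 100 times the sum of squared monthly totals divided by the squared annual total, so perfectly uniform rainfall gives 100/12 ≈ 8.3 and concentration in a few months drives the index up. A minimal sketch with invented monthly totals:

```python
def pci(monthly):
    """Precipitation Concentration Index: 100 * sum(p_i^2) / (sum(p_i))^2
    over the 12 monthly totals; uniform rainfall gives 100/12 ~ 8.3."""
    total = sum(monthly)
    return 100.0 * sum(p * p for p in monthly) / (total * total)

uniform = [50.0] * 12
winter_peak = [0, 0, 0, 5, 40, 120, 150, 110, 35, 10, 0, 0]   # invented station
print(round(pci(uniform), 1), round(pci(winter_peak), 1))     # 8.3 23.5
```

The GI used in the same study plays the analogous role at the daily scale, measuring how unevenly the annual total is spread across rain days.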

  16. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    Science.gov (United States)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I

  17. CONFAC Decomposition Approach to Blind Identification of Underdetermined Mixtures Based on Generating Function Derivatives

    NARCIS (Netherlands)

    de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre

    This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources by exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the…

  18. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys, bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  19. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    Science.gov (United States)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income; the most widely known of these are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performances with respect to the generalized beta of the second kind family are compared.
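    As an illustration of the classical (non-generalized) Gini index that the generalized measure above extends, here is a minimal pure-Python sketch computing it from a discrete income sample via the mean absolute pairwise difference; the toy inputs are illustrative, not the US income data of the paper:

```python
def gini(incomes):
    """Classical Gini index of a sample: the mean absolute difference
    between all pairs, normalized by twice the mean."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # O(n) form of the pairwise sum on sorted data:
    # sum_i (2*i - n + 1) * x_i equals sum over ordered pairs of |x_i - x_j| / 2... 
    # more precisely, it equals half of sum_{i,j} |x_i - x_j|.
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * n * mean)

print(gini([1, 1, 1, 1]))   # perfectly equal sample -> 0.0
print(gini([0, 0, 0, 4]))   # one holder of everything -> (n-1)/n = 0.75
```

    The normalization gives 0 for complete equality and approaches 1 as a single unit holds everything, matching the g-value range discussed in these records.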

  20. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For the analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance of this approach and to compare it with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. Its advantages over previous methods include its computational efficiency, its ability to achieve adequate regularization and thereby reproduce less noisy solutions, and the fact that it does not require prior knowledge of the noise condition. The proposed method is applied to actual patient cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.

  1. Network-constrained AC unit commitment under uncertainty: A Benders' decomposition approach

    DEFF Research Database (Denmark)

    Nasri, Amin; Kazempour, Seyyedjalal; Conejo, Antonio J.

    2015-01-01

    The proposed model is formulated as a two-stage stochastic programming problem, whose first stage refers to the day-ahead market and whose second stage represents real-time operation. The proposed Benders' approach allows decomposing the original problem, which is mixed-integer nonlinear and generally intractable, into a mixed-integer linear master problem and a set of nonlinear but continuous subproblems, one per scenario. In addition, to temporally decompose the proposed AC unit commitment problem, a heuristic technique is used to relax the inter-temporal ramping constraints of the generating units…

  2. Tailor-made Design of Chemical Blends using Decomposition-based Computer-aided Approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Manan, Zainuddin Abd.; Gernaey, Krist

    Computer-aided techniques are an efficient approach to solving chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product attributes (properties). In this way, the systematic computer-aided technique first establishes the search space, and then narrows it down in subsequent steps until a small number of feasible and promising candidates remain, after which experimental work may be conducted to verify if any or all of the candidates satisfy… …is decomposed into two stages. The first stage investigates mixture stability, where all unstable mixtures are eliminated and the stable blend candidates are retained for further testing. In the second stage, the blend candidates have to satisfy a set of target properties that are ranked according…

  3. Towards Effective Network Intrusion Detection: A Hybrid Model Integrating Gini Index and GBDT with PSO

    Directory of Open Access Journals (Sweden)

    Longjie Li

    2018-01-01

    In order to protect computing systems from malicious attacks, network intrusion detection systems have become an important part of the security infrastructure. Recently, hybrid models that integrate several machine learning techniques have attracted increasing attention from researchers. In this paper, a novel hybrid model is proposed with the purpose of detecting network intrusions effectively. In the proposed model, the Gini index is used to select the optimal subset of features, the gradient boosted decision tree (GBDT) algorithm is adopted to detect network attacks, and the particle swarm optimization (PSO) algorithm is utilized to optimize the parameters of GBDT. The performance of the proposed model is experimentally evaluated in terms of accuracy, detection rate, precision, F1-score, and false alarm rate using the NSL-KDD dataset. Experimental results show that the proposed model is superior to the compared methods.
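    The Gini index used here for feature selection is the decision-tree impurity measure rather than the income-inequality coefficient. A minimal sketch of scoring a binary split by its impurity decrease, with hypothetical toy data (not the NSL-KDD features):

```python
def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_score(feature, labels, threshold):
    """Impurity decrease when splitting on feature <= threshold;
    a larger decrease marks a more informative feature."""
    left = [y for x, y in zip(feature, labels) if x <= threshold]
    right = [y for x, y in zip(feature, labels) if x > threshold]
    if not left or not right:          # degenerate split: no information
        return 0.0
    n = len(labels)
    child = (len(left) / n) * gini_impurity(left) \
          + (len(right) / n) * gini_impurity(right)
    return gini_impurity(labels) - child

# Toy traffic records: a perfectly separating feature scores highest.
labels = ["normal", "normal", "attack", "attack"]
good = [1, 2, 8, 9]    # separates the classes at threshold 5
bad = [1, 8, 2, 9]     # mixes the classes
print(split_score(good, labels, 5))   # 0.5 (all impurity removed)
print(split_score(bad, labels, 5))    # 0.0
```

    Ranking features by such scores and keeping the top subset is one plausible reading of the Gini-based selection step described above.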

  4. Inequality in societies, academic institutions and science journals: Gini and k-indices

    Science.gov (United States)

    Ghosh, Asim; Chattopadhyay, Nachiketa; Chakrabarti, Bikas K.

    2014-09-01

    Social inequality is traditionally measured by the Gini index (g), which takes values from 0 to 1, where g=0 represents complete equality and g=1 complete inequality. Most estimates of income or wealth data indicate that the g value is widely dispersed across the countries of the world: g values typically range from 0.30 to 0.65 at a particular time (year). We similarly estimated the Gini index for the citations earned by the yearly publications of various academic institutions and science journals. The ISI Web of Science data suggest remarkably strong inequality and universality (g=0.70±0.07) across all the universities and institutions of the world, while for the journals we find g=0.65±0.15 for any typical year. We define a new inequality measure, namely the k-index, such that the cumulative income or citations of the top (1-k) fraction of people or papers exceed those earned by the remaining fraction (k) of the people or publications respectively. We find that, while the k-index value ranges from 0.60 to 0.75 for income distributions across the world, it has a value around 0.75±0.05 for different universities and institutions across the world and around 0.77±0.10 for the science journals. Apart from the above indices, we also analyze the same institution and journal citation data by measuring the Pietra index and the median index.
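    The k-index defined above is the fixed point of the Lorenz curve against its anti-diagonal: the poorest fraction k holds share 1-k of the total, equivalently the top 1-k fraction holds share k. A sketch on a discrete sample (returning the smallest sample fraction i/n at which the crossing condition holds); the toy inputs are illustrative, not the citation data of the paper:

```python
def k_index(values):
    """k-index: smallest fraction k = i/n (sorted poorest first) at which
    the cumulative share held by the poorest k reaches 1 - k, so that the
    top (1 - k) fraction holds at least share k of the total."""
    xs = sorted(values)
    total = sum(xs)
    n = len(xs)
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += x
        if cum / total >= 1 - i / n:
            return i / n
    return 1.0

# Perfect equality crosses at 0.5; concentration pushes k towards 1.
print(k_index([1] * 10))      # 0.5
print(k_index([0, 0, 1, 1]))  # 0.75: the top quarter holds everything
```

    For the Pareto-like "80/20" pattern this gives k near 0.8, consistent with the 0.75-0.77 values reported in the abstract.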

  5. Decomposing the Gini Inequality Index: An Expanded Solution with Survey Data Applied to Analyze Gender Income Inequality

    Science.gov (United States)

    Larraz, Beatriz

    2015-01-01

    The aim of this article is to propose a new breakdown of the Gini inequality ratio into three components: "within-group" inequality, "between-group" inequality, and the intensity of "transvariation" between groups, each contributing to the total inequality index. The between-group inequality concept computes all the differences in salaries…

  6. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Science.gov (United States)

    Ultsch, Alfred; Lötsch, Jörn

    2017-01-01

    The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity owing to its lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of the incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33%, corresponding to the Gini index of the uniform distribution, and secondly, that the Lorenz curves of these distributions are consistent with the Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution, as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.

  7. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    KAUST Repository

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2017-01-01

    …of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC…

  8. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Directory of Open Access Journals (Sweden)

    Alfred Ultsch

    The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity owing to its lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of the incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33%, corresponding to the Gini index of the uniform distribution, and secondly, that the Lorenz curves of these distributions are consistent with the Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution, as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.

  9. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that ozone decomposition follows first-order kinetics. A mechanism for the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface showing the existence of peroxide and superoxide surface intermediates.

  10. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares dissimilarity- and covariance-based unsupervised chemometric classification approaches using the total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance matrix of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that the classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
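    The eigenvalue-eigenvector step itself can be sketched with power iteration, which applies equally to a covariance matrix or a pairwise dissimilarity matrix; below is a minimal pure-Python version on a toy 2×2 symmetric matrix (not the TSFS data), extracting the leading factor:

```python
def power_iteration(A, iters=200):
    """Leading eigenvalue/eigenvector of a symmetric matrix by power
    iteration, the core step of an eigenvalue-eigenvector decomposition."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        # Rayleigh quotient as the eigenvalue estimate.
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v

# Toy symmetric "covariance-like" matrix with known spectrum {3, 1}.
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))   # 3.0
```

    Repeating on the deflated matrix (A minus lam·vvᵀ) yields the remaining factors; in practice a full symmetric eigensolver would be used.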

  11. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares dissimilarity- and covariance-based unsupervised chemometric classification approaches using the total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance matrix of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that the classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.

  12. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods.

  13. Understanding determinants of socioeconomic inequality in mental health in Iran's capital, Tehran: a concentration index decomposition approach.

    Science.gov (United States)

    Morasae, Esmaeil Khedmati; Forouzan, Ameneh Setareh; Majdzadeh, Reza; Asadi-Lari, Mohsen; Noorbala, Ahmad Ali; Hosseinpoor, Ahmad Reza

    2012-03-26

    Mental health is of special importance with regard to socioeconomic inequalities in health. On the one hand, mental health status mediates the relationship between economic inequality and health; on the other hand, mental health as an "end state" is affected by social factors and socioeconomic inequality. In spite of this, in examining socioeconomic inequalities in health, mental health has attracted less attention than physical health. As a first attempt in Iran, the objectives of this paper were to measure socioeconomic inequality in mental health, and then to untangle and quantify the contributions of potential determinants of mental health to the measured socioeconomic inequality. In a cross-sectional observational study, mental health data were taken from an Urban Health Equity Assessment and Response Tool (Urban HEART) survey, conducted on 22,300 Tehran households in 2007 and covering people aged 15 and above. Principal component analysis was used to measure the economic status of households. As a measure of socioeconomic inequality, a concentration index of mental health was applied and decomposed into its determinants. The overall concentration index of mental health in Tehran was -0.0673 (95% CI = -0.070 to -0.057). Decomposition of the concentration index revealed that economic status made the largest contribution (44.7%) to socioeconomic inequality in mental health. Educational status (13.4%), age group (13.1%), district of residence (12.5%) and employment status (6.5%) were further important contributors to the inequality. Socioeconomic inequalities exist in mental health status in Iran's capital, Tehran. Since the root of this avoidable inequality lies in sectors outside the health system, a holistic mental health policy approach which includes social and economic determinants should be adopted to redress the inequitable distribution of mental health.
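    The concentration index decomposed in such studies is conventionally computed as twice the covariance between the health variable and the fractional socioeconomic rank, divided by the mean of the health variable. A sketch on hypothetical toy data (not the Urban HEART survey):

```python
def concentration_index(health, ses):
    """Concentration index via the covariance form: 2*cov(h, r)/mean(h),
    where r is the fractional rank by socioeconomic status (poorest first).
    Negative values indicate worse outcomes concentrated among the poor."""
    n = len(health)
    order = sorted(range(n), key=lambda i: ses[i])   # poorest first
    rank = [0.0] * n
    for pos, i in enumerate(order):
        rank[i] = (pos + 0.5) / n                    # fractional rank
    mu = sum(health) / n
    rbar = sum(rank) / n                             # equals 0.5
    cov = sum((health[i] - mu) * (rank[i] - rbar) for i in range(n)) / n
    return 2 * cov / mu

# Toy data: a health score rising with economic status gives a positive
# index (better health concentrated among the better-off).
print(concentration_index([2, 4, 6, 8, 10], [1, 2, 3, 4, 5]) > 0)   # True
```

    Decomposition into determinants then follows by regressing the health variable on its covariates and splitting the index into covariate-specific contributions, as the abstract describes.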

  14. An Efficient Approach for Pixel Decomposition to Increase the Spatial Resolution of Land Surface Temperature Images from MODIS Thermal Infrared Band Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2014-12-01

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability to the many studies that require high spatial resolution, compared with the MODIS VNIR band data with a pixel scale of 250–500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of the parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. The approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was conducted to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference for comparison with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error…

  15. An efficient approach for pixel decomposition to increase the spatial resolution of land surface temperature images from MODIS thermal infrared band data.

    Science.gov (United States)

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2014-12-25

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability to the many studies that require high spatial resolution, compared with the MODIS VNIR band data with a pixel scale of 250-500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of the parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. The approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was conducted to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference for comparison with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error (RMSE) of 2…
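    The radiance-conserving idea at the core of DSPD can be sketched as follows. For illustration only, radiance is approximated by the Stefan-Boltzmann T⁴ law instead of the band-specific Planck function used in the paper, and the sub-pixel weights (there derived from NDVI/NDBI) are supplied directly:

```python
def decompose_pixel(parent_T, weights):
    """Split one coarse LST pixel into sub-pixels with given relative
    weights while conserving the parent pixel's mean thermal radiance.
    Radiance is modeled as T**4 (Stefan-Boltzmann; the constant sigma
    cancels), a stand-in for the band-specific Planck function."""
    parent_rad = parent_T ** 4
    mean_w = sum(weights) / len(weights)
    # Scale so the mean sub-pixel radiance equals the parent radiance.
    sub_rad = [parent_rad * w / mean_w for w in weights]
    return [r ** 0.25 for r in sub_rad]             # back to temperature

subs = decompose_pixel(300.0, [0.9, 1.0, 1.0, 1.1])
mean_rad = sum(t ** 4 for t in subs) / len(subs)
print(abs(mean_rad / 300.0 ** 4 - 1) < 1e-12)   # radiance conserved -> True
```

    Conserving radiance (rather than temperature) is the physically meaningful constraint, since the coarse sensor measures the aggregate radiance of the parent footprint.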

  16. Synthesis of SiOx@CdS core–shell nanoparticles by simple thermal decomposition approach and studies on their optical properties

    International Nuclear Information System (INIS)

    Kandula, Syam; Jeevanandam, P.

    2014-01-01

    Highlights: • SiOx@CdS nanoparticles have been synthesized by a novel thermal decomposition approach. • The method is easy and there is no need for surface functionalization of the silica core. • SiOx@CdS nanoparticles show different optical properties compared to pure CdS. - Abstract: SiOx@CdS core–shell nanoparticles have been synthesized by a simple thermal decomposition approach. The synthesis involves two steps. In the first step, SiOx spheres were synthesized using Stöber's process. Then, cadmium sulfide nanoparticles were deposited on the SiOx spheres by the thermal decomposition of cadmium acetate and thiourea in ethylene glycol at 180 °C. Electron microscopy results show uniform deposition of cadmium sulfide nanoparticles on the surface of the SiOx spheres. Electron diffraction patterns confirm the crystalline nature of the cadmium sulfide nanoparticles on silica, and high resolution transmission electron microscopy images clearly show the lattice fringes due to cubic cadmium sulfide. Diffuse reflectance spectroscopy results show a blue shift of the band gap absorption of SiOx@CdS core–shell nanoparticles with respect to bulk cadmium sulfide, which is attributed to the quantum size effect. Photoluminescence results show enhancement in the intensity of band edge emission and weaker emission due to surface defects in SiOx@CdS core–shell nanoparticles compared to pure cadmium sulfide nanoparticles.

  17. DRIFT and transmission FT-IR spectroscopy of forest soils: an approach to determine decomposition processes of forest litter

    International Nuclear Information System (INIS)

    Haberhauer, G.; Gerzabek, M.H.

    1999-06-01

    A method is described to characterize organic soil layers using Fourier transform infrared (FT-IR) spectroscopy. The applicability of FT-IR, either diffuse reflectance (DRIFT) or transmission, to investigating decomposition processes of spruce litter in soils originating from three different forest sites in two climatic regions was studied. The spectral information of transmission and diffuse reflectance FT-IR spectra was analyzed and compared. For data evaluation, the Kubelka-Munk (KM) transformation was applied to the DRIFT spectra. Sample preparation for DRIFT is simpler and less time consuming than for transmission FT-IR, which uses KBr pellets. A variety of bands characteristic of molecular structures and functional groups has been identified for these complex samples. Analysis of both transmission FT-IR and DRIFT showed that the intensity of distinct bands is a measure of the decomposition of forest litter. Interferences due to water adsorption spectra were reduced in DRIFT measurements in comparison to transmission FT-IR spectroscopy. Moreover, data analysis revealed that the intensity changes of several DRIFT and transmission FT-IR bands were significantly correlated with soil horizons. The application of regression models enables the identification and differentiation of organic forest soil horizons, and allows the decomposition status of soil organic matter in distinct layers to be determined. On the basis of the data presented in this study, it may be concluded that FT-IR spectroscopy is a powerful tool for the investigation of decomposition dynamics in forest soils.

  18. The Analysis of Regional Disparities in Romania with Gini/Struck Coefficients of Concentration

    Directory of Open Access Journals (Sweden)

    DANIELA ANTONESCU

    2010-12-01

    A key objective of regional development policy is to reduce disparities between regions and to ensure a relatively balanced level of development. To achieve this goal, studies and social and economic analyses based on appropriate evaluation techniques and methods are necessary. In the scientific literature, there are plenty of models that can be applied to assess regional disparities. One of the methods commonly used in practice is the calculation and analysis of the degree of concentration/diversification of activities within a region. An increase or decrease in the degree of concentration of certain activities or areas of activity in a region provides information on: the level of overall economic development; the rate of economic development and growth; and the specific features of the region, its potential, local traditions, etc. Expert analyses indicate that, given a high level of overall development or a sustained economic growth rate, there are favorable conditions for economic activities to locate in any region, so that they are relatively uniformly distributed throughout the country. Knowing the degree of concentration, as well as the factors that influence it, is useful in making decisions and setting regional policy measures. This article proposes a synthetic analysis of the development level of the regions of Romania with the concentration/diversification model (Gini/Struck coefficients), based on the existing key statistical indicators.

  19. The Methane to Carbon Dioxide Ratio Produced during Peatland Decomposition and a Simple Approach for Distinguishing This Ratio

    Science.gov (United States)

    Chanton, J.; Hodgkins, S. B.; Cooper, W. T.; Glaser, P. H.; Corbett, J. E.; Crill, P. M.; Saleska, S. R.; Rich, V. I.; Holmes, B.; Hines, M. E.; Tfaily, M.; Kostka, J. E.

    2014-12-01

    Peatland organic matter is cellulose-like, with an oxidation state of approximately zero. When this material decomposes by fermentation, stoichiometry dictates that CH4 and CO2 should be produced in a ratio approaching one. While this is generally the case in temperate zones, this production ratio is often departed from in boreal peatlands, where the ratio of belowground CH4/CO2 production varies between 0.1 and 1, indicating CO2 production by a mechanism in addition to fermentation. The in situ CO2/CH4 production ratio may be ascertained by analysis of the 13C isotopic composition of these products, because CO2 production unaccompanied by methane production yields CO2 with an isotopic composition similar to the parent organic matter, while methanogenesis produces 13C-depleted methane and 13C-enriched CO2. The 13C enrichment in the subsurface CO2 pool is directly related to the fraction of it formed via methane production and to the isotopic composition of the methane itself. Excess CO2 production is associated with more acidic conditions, Sphagnum vegetation, high and low latitudes, methane production dominated by the hydrogenotrophic pathway, 13C-depleted methane, and, generally, more nutrient-depleted conditions. Three theories have been offered to explain these observations: 1) inhibition of acetate utilization, with acetate build-up, diffusion to the surface and eventual aerobic oxidation; 2) the use of humic acids as electron acceptors; and 3) the utilization of organic oxygen to produce CO2. In support of theory 3, we find that 13C-NMR, Fourier transform infrared (FT-IR) spectroscopy, and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR-MS) clearly show the evolution of polysaccharides and cellulose towards more decomposed, humified alkyl compounds stripped of the organic oxygen utilized to form CO2. Such decomposition results in more negative carbon oxidation states, varying from -1 to -2. Coincident with this reduction in oxidation state is the…
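    The stoichiometric expectation invoked above follows from an electron balance: carbon at mean oxidation state s disproportionates between CH4 (carbon at -4) and CO2 (carbon at +4), giving a CH4 fraction of (4-s)/8 and hence a CH4/CO2 ratio of (4-s)/(4+s), which equals one at s=0. A sketch of this closed form (an illustrative derivation, not a formula quoted from the paper):

```python
def ch4_co2_ratio(ox_state):
    """Stoichiometric CH4/CO2 ratio for fermentation of organic carbon
    with mean oxidation state s. Electron balance between CH4 (-4) and
    CO2 (+4): x*(-4) + (1-x)*4 = s gives CH4 fraction x = (4 - s)/8,
    hence the ratio x/(1-x) = (4 - s)/(4 + s)."""
    return (4.0 - ox_state) / (4.0 + ox_state)

print(ch4_co2_ratio(0))    # 1.0 -> cellulose-like matter, ratio of one
print(ch4_co2_ratio(-1))   # a more reduced substrate pushes the ratio above 1
```

    Observed boreal ratios well below one therefore point to CO2 produced outside this disproportionation, which is the puzzle the abstract's three theories address.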

  20. Fault Severity Evaluation and Improvement Design for Mechanical Systems Using the Fault Injection Technique and Gini Concordance Measure

    Directory of Open Access Journals (Sweden)

    Jianing Wu

    2014-01-01

    Full Text Available A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems with respect to their dynamic properties. Fault tree analysis (FTA) is employed to roughly identify the faults that need to be considered. According to the constitution of the mechanical system, the dynamic properties can be obtained by solving equations that include many types of faults, which are injected using the fault injection technique. Then, the Gini concordance is used to measure the correspondence between the performance with faults and under normal operation, thereby providing useful hints for severity ranking among subsystems in reliability design. One numerical example and a series of experiments are provided to illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and recover the correct information about fault severity. Some strategies are also proposed for reliability improvement of the spacecraft solar array.

  1. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    Science.gov (United States)

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived, Empirical Mode Decomposition (EMD)-based dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained by decomposing the training signals with the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that the coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data, as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability
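
The projection step described above is simple to sketch: with dictionary atoms stored as columns of a matrix, the feature vector of a test signal is its least-squares coefficient vector against those atoms. The sketch below is illustrative only; two synthetic sinusoidal atoms stand in for the learned IMF-based atoms of the paper.

```python
import numpy as np

# Two synthetic atoms stand in for learned IMF-based atoms
t = np.linspace(0, 1, 256)
D = np.column_stack([np.sin(2 * np.pi * 3 * t),    # slow, background-like
                     np.sin(2 * np.pi * 20 * t)])  # fast, seizure-like

def features(signal, dictionary):
    """Least-squares projection coefficients of a test signal
    against the dictionary atoms; these serve as classifier inputs."""
    coef, *_ = np.linalg.lstsq(dictionary, signal, rcond=None)
    return coef

test_signal = 2.0 * np.sin(2 * np.pi * 20 * t)  # resembles the fast atom
coef = features(test_signal, D)
print(int(np.argmax(np.abs(coef))))  # index of the dominant atom
```

In the paper these coefficients feed an SVM; here the dominant coefficient alone already identifies which atom the test signal resembles.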

  2. A solution of nonlinear equation for the gravity wave spectra from Adomian decomposition method: a first approach

    Directory of Open Access Journals (Sweden)

    Antonio Gledson Goulart

    2013-12-01

    Full Text Available In this paper, the equation for the gravity wave spectra in the mean atmosphere is solved analytically, without linearization, by the Adomian decomposition method. As a consequence, the nonlinear nature of the problem is preserved and the errors in the results are due only to the parameterization. The results, with the parameterization applied in the simulations, indicate that the linear solution of the equation is a good approximation only for heights below ten kilometers, because linearizing the equation leads to a solution that does not correctly describe the kinetic energy spectra.
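
As a minimal illustration of the Adomian decomposition method (on a toy Riccati equation y' = y², not the gravity-wave spectral equation treated in the paper), the series components can be generated recursively from the Adomian polynomials of the nonlinearity:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def adomian_riccati(y0, terms=6):
    """Adomian decomposition for the toy problem y' = y^2, y(0) = y0
    (exact solution y0 / (1 - y0*t)).  Each series component y_n is a
    polynomial in t stored as ascending coefficients; the Adomian
    polynomials of N(y) = y^2 are A_n = sum_{i+j=n} y_i * y_j."""
    comps = [np.array([float(y0)])]                 # y_0
    for n in range(terms - 1):
        A = np.zeros(1)
        for i in range(n + 1):
            A = P.polyadd(A, P.polymul(comps[i], comps[n - i]))
        comps.append(P.polyint(A))                  # y_{n+1} = integral of A_n
    total = np.zeros(1)
    for c in comps:
        total = P.polyadd(total, c)
    return total

series = adomian_riccati(1.0, terms=6)
print(np.allclose(series, np.ones(6)))  # 1 + t + t^2 + ... = 1/(1 - t)
```

For y(0) = 1 the partial sums reproduce the geometric series of the exact solution, which is the hallmark of the method: the nonlinearity is never linearized, only expanded.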

  3. Revisiting the Granger Causality Relationship between Energy Consumption and Economic Growth in China: A Multi-Timescale Decomposition Approach

    Directory of Open Access Journals (Sweden)

    Lei Jiang

    2017-12-01

    Full Text Available The past four decades have witnessed rapid growth in the rate of energy consumption in China. This has led to two major issues: energy shortages, and environmental pollution caused by fossil fuel combustion. Since energy saving plays a substantial role in addressing both issues, it is of vital importance to study the intrinsic characteristics of energy consumption and its relationship with economic growth. The nexus between energy consumption and economic growth has been hotly debated for years, yet conflicting conclusions have been drawn. In this paper, we provide novel insight into the characteristics of the growth rate of energy consumption in China from a multi-timescale perspective by means of adaptive time-frequency data analysis, namely the ensemble empirical mode decomposition (EEMD) method, which is suited to the analysis of non-linear time series. Decomposition yielded four intrinsic mode function (IMF) components and a trend component with different periods. We then repeated the same procedure for the growth rate of China's GDP and obtained four similar IMF components and a trend component. In the second stage, we performed the Granger causality test. The results demonstrated that, in the short run, there is a bidirectional causality relationship between economic growth and energy consumption, and in the long run a unidirectional relationship running from economic growth to energy consumption.
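
The second-stage test can be sketched with a plain OLS implementation of the bivariate Granger F-test. This is a minimal stand-in; the paper applies the test to matched IMF components rather than raw series, and the data below are synthetic.

```python
import numpy as np

def granger_f(x, y, lags=2):
    """F-statistic for 'x Granger-causes y': compare an AR model of
    y on its own lags (restricted) with one that adds lagged x
    (unrestricted), both fit by ordinary least squares."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[lags:]
    ylags = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((len(Y), 1))
    Xr = np.hstack([ones, ylags])             # restricted model
    Xu = np.hstack([ones, ylags, xlags])      # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = len(Y) - Xu.shape[1]
    return ((rss(Xr) - rss(Xu)) / lags) / (rss(Xu) / df)

# Synthetic series where x drives y with a one-step delay
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(x, y) > granger_f(y, x))  # causality runs from x to y
```

A large F for one direction and a small F for the other is exactly the asymmetry the unidirectional long-run finding above describes.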

  4. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Several methods have recently been developed for the correction of such baseline signals; however, most of them cannot estimate the baseline in complex, overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo ¹H-MRS signals of the human brain. Results demonstrate the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.

  5. Socioeconomic inequality in abdominal obesity among older people in Purworejo District, Central Java, Indonesia - a decomposition analysis approach.

    Science.gov (United States)

    Pujilestari, Cahya Utamie; Nyström, Lennarth; Norberg, Margareta; Weinehall, Lars; Hakimi, Mohammad; Ng, Nawi

    2017-12-12

    Obesity has become a global health challenge as its prevalence has increased globally in recent decades. Studies in high-income countries have shown that obesity is more prevalent among the poor. In contrast, obesity is more prevalent among the rich in low- and middle-income countries, hence requiring different focal points when designing public health policies in the latter contexts. We examined socioeconomic inequalities in abdominal obesity in Purworejo District, Central Java, Indonesia and identified factors contributing to the inequalities. We utilised data from the WHO-INDEPTH Study on global AGEing and adult health (WHO-INDEPTH SAGE) conducted in the Purworejo Health and Demographic Surveillance System (HDSS) in Purworejo District, Indonesia in 2010. The study included 14,235 individuals aged 50 years and older. Inequalities in abdominal obesity across wealth groups were assessed separately for men and women using concentration indexes. Decomposition analysis was conducted to assess the determinants of socioeconomic inequalities in abdominal obesity. Abdominal obesity was five-fold more prevalent among women than among men (30% vs. 6.1%; p < 0.001). The concentration index (CI) analysis showed that socioeconomic inequalities in abdominal obesity were less prominent among women (CI = 0.26, SE = 0.02, p < 0.001) than among men (CI = 0.49, SE = 0.04, p < 0.001). Decomposition analysis showed that physical labour was the major determinant of socioeconomic inequalities in abdominal obesity among men, explaining 47% of the inequalities, followed by poor socioeconomic status (31%), ≤ 6 years of education (15%) and current smoking (11%). The three major determinants of socioeconomic inequalities in abdominal obesity among women were poor socio-economic status (48%), physical labour (17%) and no formal education (16%). Abdominal obesity was more prevalent among older women in a rural Indonesian setting. Socioeconomic inequality in
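
The concentration index reported here can be computed with the standard covariance formula, CI = 2·cov(h, r)/μ_h, where r is the fractional rank of each individual in the wealth distribution. A minimal sketch on synthetic data (not the WHO-INDEPTH SAGE sample):

```python
import numpy as np

def concentration_index(health, wealth):
    """Concentration index via the covariance formula
    CI = 2 * cov(h, r) / mean(h), where r is the fractional rank of
    each individual in the wealth distribution (poorest first)."""
    h = np.asarray(health, dtype=float)[np.argsort(wealth)]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

# Synthetic sample where obesity risk rises with wealth -> CI > 0
rng = np.random.default_rng(0)
wealth = rng.uniform(size=1000)
obese = (rng.uniform(size=1000) < 0.1 + 0.4 * wealth).astype(float)
print(concentration_index(obese, wealth) > 0)  # pro-rich concentration
```

A positive CI means the outcome is concentrated among the wealthy, matching the positive indexes for both sexes reported above; a CI of zero indicates no wealth gradient.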

  6. A novel thermal decomposition approach to synthesize hydroxyapatite-silver nanocomposites and their antibacterial action against GFP-expressing antibiotic resistant E. coli.

    Science.gov (United States)

    Sahni, Geetika; Gopinath, P; Jeevanandam, P

    2013-03-01

    A novel thermal decomposition approach to synthesize hydroxyapatite-silver (Hap-Ag) nanocomposites has been reported. The nanocomposites were characterized by X-ray diffraction, field emission scanning electron microscopy coupled with energy dispersive X-ray analysis, transmission electron microscopy and diffuse reflectance spectroscopy techniques. Antibacterial activity studies for the nanocomposites were explored using a new rapid access method employing recombinant green fluorescent protein (GFP) expressing antibiotic resistant Escherichia coli (E. coli). The antibacterial activity was studied by visual turbidity analysis, optical density analysis, fluorescence spectroscopy and microscopy. The mechanism of bactericidal action of the nanocomposites on E. coli was investigated using atomic force microscopy, and TEM analysis. Excellent bactericidal activity at low concentration of the nanocomposites was observed which may allow their use in the production of microbial contamination free prosthetics. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Losses of soil organic carbon by converting tropical forest to plantations: Assessment of erosion and decomposition by new δ13C approach

    Science.gov (United States)

    Guillaume, Thomas; Muhammad, Damris; Kuzyakov, Yakov

    2015-04-01

    Indonesia lost more tropical forest than all of Brazil in 2012, mainly driven by the rubber, oil palm and timber industries. Nonetheless, the effects of converting forest to oil palm and rubber plantations on soil organic carbon (SOC) stocks remain unclear. We analyzed SOC losses after lowland rainforest conversion to oil palm, intensive rubber and extensive rubber plantations in Jambi province on Sumatra Island. We developed and applied a new δ13C based approach to assess and separate two processes: 1) erosion and 2) decomposition. Carbon contents in the Ah horizon under oil palm and rubber plantations were strongly reduced: up to 70% and 62%, respectively. The decrease was lower under extensive rubber plantations (41%). The C content in the subsoil was similar in the forest and the plantations. We therefore assumed that a shift to higher δ13C values in the subsoil of the plantations corresponds to the losses of the upper soil layer by erosion. Erosion was estimated by comparing the δ13C profiles in the undisturbed soils under forest with the disturbed soils under plantations. The estimated erosion was the strongest in oil palm (35±8 cm) and rubber (33±10 cm) plantations. The 13C enrichment of SOC used as a proxy of its turnover indicates a decrease of SOC decomposition rate in the Ah horizon under oil palm plantations after forest conversion. SOC availability, measured by microbial respiration rate and Fourier Transformed Infrared Spectroscopy, was lower under oil palm plantations. Despite similar trends in C losses and erosion in intensive plantations, our results indicate that microorganisms in oil palm plantations mineralized mainly the old C stabilized prior to conversion, whereas microorganisms under rubber plantations mineralized the fresh C from the litter, leaving the old C pool mainly untouched. Based on the lack of C input from litter, we expect further losses of SOC under oil palm plantations, which therefore are a less sustainable land

  8. Decomposing the causes of socioeconomic-related health inequality among urban and rural populations in China: a new decomposition approach.

    Science.gov (United States)

    Cai, Jiaoli; Coyte, Peter C; Zhao, Hongzhong

    2017-07-18

    In recent decades, China has experienced tremendous economic growth and has also witnessed growing socioeconomic-related health inequality. This study explores the potential causes of socioeconomic-related health inequality in urban and rural areas of China over the past two decades. It used six waves of the China Health and Nutrition Survey (CHNS) from 1991 to 2006. The recentered influence function (RIF) regression decomposition method was employed to decompose socioeconomic-related health inequality in China. Health status was derived from self-rated health (SRH) scores. The analyses were conducted on urban and rural samples separately. We found that the average level of health status declined from 1989 to 2006 for both urban and rural populations, and that average health scores were greater for the rural population than for the urban population. We also found pro-rich health inequality in China. While income and secondary education were the main factors reducing health inequality, older age, unhealthy lifestyles and a poor home environment increased it. Health insurance had opposite effects on health inequality for urban and rural populations, lowering inequality for urban populations and raising it for their rural counterparts. These findings suggest that an effective way to reduce socioeconomic-related health inequality is not only to increase income and improve access to health care services, but also to focus on improvements in lifestyles and the home environment. Specifically, for rural populations it is particularly important to improve the design of health insurance and implement a more comprehensive insurance package that can effectively target the rural poor. Moreover, it is necessary to promote flush toilets and tap water in rural areas. For urban populations, in addition to promoting universal secondary education, healthy lifestyles should be promoted

  9. Progressivity of personal income tax in Croatia: decomposition of tax base and rate effects

    Directory of Open Access Journals (Sweden)

    Ivica Urban

    2006-09-01

    Full Text Available This paper presents progressivity breakdowns for the Croatian personal income tax (henceforth PIT) in 1997 and 2004. The decompositions reveal how the elements of the system – tax schedule, allowances, deductions and credits – contribute to the achievement of progressivity over the quantiles of the pre-tax income distribution. Through the use of ‘single parameter’ Gini indices, the social decision maker’s (henceforth SDM) relatively more or less favorable inclination toward taxpayers in the lower tails of the pre-tax income distribution is accounted for. Simulations are undertaken to show how the introduction of a flat-rate system would affect progressivity.
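
The ‘single parameter’ Gini indices referred to here generalize the ordinary Gini with an inequality-aversion parameter ν. A minimal discrete implementation of the Donaldson–Weymark S-Gini (a sketch, not the paper's actual decomposition; ν = 2 recovers the standard Gini, larger ν weights the poor more heavily):

```python
import numpy as np

def s_gini(income, nu=2.0):
    """Single-parameter (Donaldson-Weymark) Gini index.  Incomes are
    sorted ascending; rank weights ((n-i+1)^nu - (n-i)^nu) / n^nu
    define a welfare index W, and G(nu) = 1 - W / mean."""
    x = np.sort(np.asarray(income, dtype=float))   # ascending
    n = len(x)
    ranks = np.arange(1, n + 1)
    w = ((n - ranks + 1) ** nu - (n - ranks) ** nu) / n ** nu
    welfare = np.sum(w * x)
    return 1.0 - welfare / x.mean()

incomes = [10, 20, 30, 40, 100]
# Higher nu expresses stronger aversion to inequality among the poor
print(round(s_gini(incomes, 2.0), 3), round(s_gini(incomes, 4.0), 3))
```

For this toy distribution ν = 2 yields the familiar Gini of 0.4, and raising ν increases the measured inequality because the low incomes receive ever larger rank weights.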

  10. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives, and of the elementary steps and overall processes of their decomposition, are analyzed. Investigations on the modelling of combustion and detonation that take the decomposition of explosives into account are also considered. It is shown that solving problems related to the decomposition kinetics of explosives requires a complex strategy based on the methods and concepts of chemical physics, solid-state physics and theoretical chemistry, rather than an empirical approach.

  11. Uma Abordagem para a Decomposição de Processos de Negócio para Execução em Nuvens Computacionais (in Portuguese; An approach to business process decomposition for cloud deployment)

    NARCIS (Netherlands)

    Povoa, Lucas Venezian; Lopes de Souza, Wanderley; Ferreira Pires, Luis; Duipmans, Evert F.; do Prado, Antonio Francisco

    Due to safety requirements, certain data or activities of a business process should be kept within the user premises, while others can be allocated to a cloud environment. This paper presents a generic approach to business processes decomposition taking into account the allocation of activities and

  12. Chinese Gini Coefficient from 2005 to 2012, Based on 20 Grouped Income Data Sets of Urban and Rural Residents

    Directory of Open Access Journals (Sweden)

    Jiandong Chen

    2015-01-01

    Full Text Available Data insufficiency has become the primary factor affecting research on income disparity in China. To resolve this issue, this paper explores Chinese income distribution and income inequality using distribution functions. First, it examines 20 sets of grouped data on family income between 2005 and 2012 from the China Yearbook of Household Surveys, 2013, and compares the fitting performance of eight distribution functions. The results show that the generalized beta distribution of the second kind fits the income distribution of urban and rural residents in China well. Next, these results are used to calculate the Chinese Gini ratio, which is then compared with the findings of relevant studies. Finally, this paper discusses the influence of urbanization on income inequality in China and suggests that accelerating urbanization can play an important role in narrowing the income gap of Chinese residents.
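
Given grouped data like these, a lower-bound Gini estimate can be read directly off the empirical Lorenz curve with the trapezoid rule (the paper instead fits parametric distributions to recover the within-group shape). A sketch with hypothetical quintile shares:

```python
import numpy as np

def gini_from_groups(pop_shares, income_shares):
    """Lower-bound Gini from grouped data: build the Lorenz curve from
    cumulative population/income shares (groups ordered poorest to
    richest) and integrate it with the trapezoid rule."""
    p = np.concatenate(([0.0], np.cumsum(pop_shares)))
    L = np.concatenate(([0.0], np.cumsum(income_shares)))
    area = np.sum((p[1:] - p[:-1]) * (L[1:] + L[:-1]) / 2.0)
    return 1.0 - 2.0 * area  # Gini = 1 - 2 * area under Lorenz curve

# Hypothetical quintile income shares, poorest to richest
print(round(gini_from_groups([0.2] * 5, [0.05, 0.10, 0.15, 0.25, 0.45]), 3))
```

Because the trapezoid rule assumes perfect equality within each group, this estimate is a lower bound; fitting a distribution such as the GB2 within groups, as the paper does, corrects for that.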

  13. A Fusion Approach to Feature Extraction by Wavelet Decomposition and Principal Component Analysis in Transient Signal Processing of SAW Odor Sensor Array

    Directory of Open Access Journals (Sweden)

    Prashant SINGH

    2011-03-01

    Full Text Available This paper presents a theoretical analysis of a new approach to the development of a surface acoustic wave (SAW) sensor array based odor recognition system. The sensor array employs a single polymer interface for selective sorption of odorant chemicals in the vapor phase, but the individual sensors are coated with different thicknesses. The idea behind varying the coating thickness is to terminate the solvation and diffusion kinetics of vapors into the polymer at different stages of equilibration on different sensors. This is expected to generate diversity in the information content of the sensor transients. The analysis is based on wavelet decomposition of the transient signals. Single-sensor transients have been used earlier for generating odor identity signatures based on wavelet approximation coefficients. In the present work, however, we exploit the variability in diffusion kinetics due to polymer thickness for making odor signatures. This is done by fusing the wavelet coefficients from the different sensors in the array and then applying principal component analysis. We find that the present approach substantially enhances vapor class separability in feature space. Validation is done by generating synthetic sensor array data based on well-established SAW sensor theory.
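
The fusion idea can be sketched as follows, with a plain Haar approximation standing in for the paper's wavelet decomposition and synthetic first-order sorption transients standing in for SAW sensor responses (all names and parameters below are illustrative assumptions):

```python
import numpy as np

def haar_approx(x, levels=3):
    """Haar approximation coefficients after `levels` decompositions
    (pairwise sums scaled by 1/sqrt(2) at each level)."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

def fused_features(samples, levels=3, n_components=2):
    """Concatenate the wavelet coefficients of every sensor in each
    array sample, then project onto the leading principal components."""
    F = np.array([np.concatenate([haar_approx(s, levels) for s in sample])
                  for sample in samples])
    F = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(F, full_matrices=False)
    return F @ Vt[:n_components].T

# Synthetic transients: 3 sensors with different diffusion time
# constants; two 'odors' differ in sorption amplitude
t = np.linspace(0.0, 1.0, 64)
def array_response(amplitude, rng):
    return [amplitude * (1.0 - np.exp(-t / tau)) + 0.01 * rng.normal(size=64)
            for tau in (0.1, 0.3, 0.9)]

rng = np.random.default_rng(0)
samples = [array_response(1.0, rng) for _ in range(10)] \
        + [array_response(1.8, rng) for _ in range(10)]
Z = fused_features(samples)
print(Z[:10, 0].mean() * Z[10:, 0].mean() < 0)  # classes separate on PC1
```

The point of the fusion is visible even in this toy version: concatenating coefficients from sensors with different equilibration stages gives PCA more discriminative directions than any single transient alone.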

  14. A novel approach for fault detection and classification of the thermocouple sensor in Nuclear Power Plant using Singular Value Decomposition and Symbolic Dynamic Filter

    International Nuclear Information System (INIS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-01-01

    Highlights: • A novel approach to classify fault patterns using data-driven methods. • Application of a robust reconstruction method (SVD) to identify the faulty sensor. • Analysis of fault patterns for many sensors using SDF with low time complexity. • An efficient data-driven model designed to balance false and missed alarms. - Abstract: A mathematical model with two layers is developed using data-driven methods for thermocouple sensor fault detection and classification in Nuclear Power Plants (NPP). At the first layer, a Singular Value Decomposition (SVD) based method is applied to detect the faulty sensor from a data set of all sensors. In the second layer, the Symbolic Dynamic Filter (SDF) is employed to classify the fault pattern. If SVD detects any false fault, it is re-evaluated by the SDF, i.e., the model has two layers of checking to balance the false alarms. The proposed fault detection and classification method is compared with Principal Component Analysis. Two case studies are taken from the Fast Breeder Test Reactor (FBTR) to prove the efficiency of the proposed method.
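
A minimal sketch of the first, SVD-based layer (synthetic data; the actual model and FBTR data are more involved): learn a principal subspace from fault-free sensor data, then flag the sensor with the largest residual energy outside that subspace.

```python
import numpy as np

def svd_subspace(healthy, rank=1):
    """Principal sensor-space directions learned from fault-free
    data via SVD of the centered data matrix (samples x sensors)."""
    X = healthy - healthy.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:rank].T

def faulty_sensor(data, V):
    """Rank sensors by residual energy outside the healthy subspace
    and return the index of the most suspect one."""
    X = data - data.mean(axis=0)
    residual = X - X @ V @ V.T
    return int(np.argmax((residual ** 2).sum(axis=0)))

# Six correlated thermocouples driven by one common process
rng = np.random.default_rng(0)
healthy = rng.normal(size=(500, 1)) @ np.ones((1, 6)) \
          + 0.05 * rng.normal(size=(500, 6))
V = svd_subspace(healthy, rank=1)

# New data with a slow drift fault injected into sensor 4
data = rng.normal(size=(500, 1)) @ np.ones((1, 6)) \
       + 0.05 * rng.normal(size=(500, 6))
data[:, 4] += np.linspace(0.0, 3.0, 500)
print(faulty_sensor(data, V))  # flags the drifting sensor
```

A drift that violates the inter-sensor correlation structure produces residual energy concentrated on the offending channel, which is the cue the first layer exploits before the SDF layer classifies the fault pattern.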

  15. Light-quarkonium spectra and orbital-angular-momentum decomposition in a Bethe-Salpeter-equation approach

    Energy Technology Data Exchange (ETDEWEB)

    Hilger, T.; Krassnigg, A. [University of Graz, NAWI Graz, Institute of Physics, Graz (Austria); Gomez-Rocha, M. [ECT*, Villazzano, Trento (Italy)

    2017-09-15

    We investigate the light-quarkonium spectrum using a covariant Dyson-Schwinger-Bethe-Salpeter-equation approach to QCD. We discuss splittings among, as well as orbital-angular-momentum properties of, various states in detail and analyze common features of mass splittings with regard to properties of the effective interaction. In particular, we predict the mass of s̄s exotic 1^(-+) states, and identify the orbital-angular-momentum content in the excitations of the ρ meson. Comparing our covariant model results, in which the ρ and its second excitation are predominantly S-wave and the first excitation predominantly D-wave, to corresponding conflicting lattice-QCD studies, we investigate the pion-mass dependence of the orbital-angular-momentum assignment and find a crossing at a scale of m_π ≈ 1.4 GeV. If this crossing turns out to be a feature of the spectrum generated by lattice-QCD studies as well, it may reconcile the different results, since they have been obtained at different values of m_π. (orig.)

  16. Multi-domain/multi-method numerical approach for neutron transport equation; Couplage de methodes et decomposition de domaine pour la resolution de l'equation du transport des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E

    2004-12-15

    A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together in a whole-core calculation: a variational nodal method, a discrete-ordinate nodal method and a method of characteristics. These new developments allow the use of independent spatial and angular expansions and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a flexibility of modeling not available in today's codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method brings more accuracy along with an important reduction of computing time.

  17. An inspection on the Gini coefficient of the budget educational public expenditure per student for China's basic education

    Institute of Scientific and Technical Information of China (English)

    Yang Yingxiu

    2006-01-01

    Using statistical data on the implementation of China's educational expenditure published by the state, this paper studies the Gini coefficient of the budgeted public educational expenditure per student in order to examine the concentration of educational expenditure in China's basic education and to analyze the balance of its development. The research shows that China's basic education is developing unevenly due to diverse factors, mainly reflected as follows: firstly, the budgeted public educational expenditure presents a four-tiered pattern of the strong, the less strong, the less weak and the weak, with a great discrepancy between the two extremes; secondly, compulsory education in rural areas is still confronted with great difficulties; thirdly, general senior secondary education faces a crisis of imbalance. It is therefore necessary to construct a policy framework for the balanced development of basic education and to pay close attention to the benefit and effectiveness of educational input. In addition, it is also important to clearly stipulate the criterion for the government's educational allocation and to support disadvantaged areas in order to promote the balanced development of basic education.

  18. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    KAUST Repository

    Tamamitsu, Miu

    2017-08-27

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the complex refocused hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metric used in SoG, specifically the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that the Tamura coefficient of the gradient (ToG) and the Gini index of the gradient (GoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
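
The two sparsity metrics compared here are easy to state: the (Hurley-Rickard style) Gini index of the sorted magnitudes, and the Tamura coefficient √(σ/μ). A sketch applied to plain vectors (in SoG they are applied to the gradient modulus of the refocused hologram):

```python
import numpy as np

def gini_index(v):
    """Sparsity Gini of a vector: sort the absolute values and weight
    them by rank; 0 for a uniform vector, near 1 for a sparse one."""
    a = np.sort(np.abs(np.asarray(v, dtype=float).ravel()))
    n = a.size
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum(a / a.sum() * (n - k + 0.5) / n)

def tamura(v):
    """Tamura coefficient: sqrt(std / mean) of a non-negative signal."""
    a = np.abs(np.asarray(v, dtype=float).ravel())
    return np.sqrt(a.std() / a.mean())

# A sparse vector scores higher than a dense one on both metrics
sparse = np.zeros(100); sparse[:3] = 1.0
dense = np.ones(100)
print(gini_index(sparse) > gini_index(dense),
      tamura(sparse) > tamura(dense))
```

Both metrics peak at the in-focus distance when evaluated over a refocusing sweep; the paper's point is how differently they respond once many low-valued background pixels enter the ROI.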

  19. Erbium hydride decomposition kinetics.

    Energy Technology Data Exchange (ETDEWEB)

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
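
Redhead's peak-maximum method referenced here estimates the first-order desorption activation energy from the peak temperature T_p and the heating rate β as E_A = R·T_p·(ln(ν·T_p/β) − 3.64). A sketch with hypothetical numbers (not the report's data; ν = 10¹³ s⁻¹ is the commonly assumed pre-exponential):

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def redhead_ea(tp_kelvin, beta_kelvin_per_s, nu_per_s=1e13):
    """Redhead peak-maximum estimate of the first-order desorption
    activation energy: E_A = R * Tp * (ln(nu * Tp / beta) - 3.64)."""
    return R_KCAL * tp_kelvin * (
        math.log(nu_per_s * tp_kelvin / beta_kelvin_per_s) - 3.64)

# Hypothetical TDS peak at 950 K with a 2 K/s heating ramp
print(round(redhead_ea(950.0, 2.0), 1), "kcal/mol")
```

Because E_A scales almost linearly with T_p, a modest shift of the peak between nominally similar samples already translates into several kcal/mol, consistent with the variability noted above.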

  20. Prokaryotic regulatory systems biology: Common principles governing the functional architectures of Bacillus subtilis and Escherichia coli unveiled by the natural decomposition approach.

    Science.gov (United States)

    Freyre-González, Julio A; Treviño-Quintanilla, Luis G; Valtierra-Gutiérrez, Ilse A; Gutiérrez-Ríos, Rosa María; Alonso-Pavón, José A

    2012-10-31

    Escherichia coli and Bacillus subtilis are two of the best-studied prokaryotic model organisms. Previous analyses of their transcriptional regulatory networks have shown that they exhibit high plasticity during evolution and suggested that both converge to scale-free-like structures. Nevertheless, beyond this suggestion, no analyses have been carried out to identify the common systems-level components and principles governing these organisms. Here we show that these two phylogenetically distant organisms follow a set of common novel biologically consistent systems principles revealed by the mathematically and biologically founded natural decomposition approach. The discovered common functional architecture is a diamond-shaped, matryoshka-like, three-layer (coordination, processing, and integration) hierarchy exhibiting feedback, which is shaped by four systems-level components: global transcription factors (global TFs), locally autonomous modules, basal machinery and intermodular genes. The first mathematical criterion to identify global TFs, the κ-value, was reassessed on B. subtilis and confirmed its high predictive power by identifying all the previously reported, plus three potential, master regulators and eight sigma factors. The functionally conserved cores of modules, basal cell machinery, and a set of non-orthologous common physiological global responses were identified via both orthologous genes and non-orthologous conserved functions. This study reveals novel common systems principles maintained between two phylogenetically distant organisms and provides a comparison of their lifestyle adaptations. Our results shed new light on the systems-level principles and the fundamental functions required by bacteria to sustain life. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...

  2. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  3. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic FeS₂) has been investigated using X-ray diffraction and ⁵⁷Fe Mössbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the initial size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  4. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
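
    The pick-freeze estimator behind such Sobol-type variance decompositions can be sketched in a few lines of Python. The toy linear model, sample size and seed below are illustrative assumptions, not taken from the paper; for f(x1, x2) = x1 + 2*x2 with independent unit-variance inputs, the first-order indices are analytically 0.2 and 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def f(x1, x2):
    # toy "simulator" with independent unit-variance inputs; analytically
    # Var = 1 + 4 = 5, so the first-order Sobol indices are 0.2 and 0.8
    return x1 + 2.0 * x2

a = rng.standard_normal((n, 2))
b = rng.standard_normal((n, 2))
ya = f(a[:, 0], a[:, 1])

s = []
for i in range(2):
    c = b.copy()
    c[:, i] = a[:, i]                  # "freeze" input i across both samples
    yc = f(c[:, 0], c[:, 1])
    s.append(np.mean(ya * yc) - ya.mean() * yc.mean())
s = np.array(s) / ya.var()             # first-order indices, ≈ [0.2, 0.8]
```

    The same machinery extends to higher-order (interaction) terms of the Sobol-Hoeffding decomposition by freezing subsets of inputs rather than single coordinates.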

  5. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  7. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maî tre, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis...
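
    A minimal one-dimensional instance of a pyramid decomposition scheme can be sketched as follows; the 2-tap averaging filter, nearest-neighbour upsampling and test signal are illustrative assumptions rather than the report's axiomatic framework:

```python
import numpy as np

def analyze(signal, levels):
    """Pyramid analysis: at each level store the detail lost by coarsening.

    Signal length must be divisible by 2**levels.
    """
    details, cur = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        coarse = cur.reshape(-1, 2).mean(axis=1)    # downsample: 2-tap average
        details.append(cur - np.repeat(coarse, 2))  # detail at this scale
        cur = coarse
    return details, cur                             # details + coarse residual

def synthesize(details, coarse):
    """Pyramid synthesis: upsample and add details back, coarse to fine."""
    cur = coarse
    for d in reversed(details):
        cur = np.repeat(cur, 2) + d
    return cur

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
details, coarse = analyze(x, 3)
x_rec = synthesize(details, coarse)                 # reconstructs x exactly
```

    Storing the per-level detail alongside the coarse residual is what makes the scheme perfectly invertible, which is the property the axiomatic framework generalises.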

  9. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing better understanding into the relationship of copper (II), solution temperature, and solution pH to NaTPB stability

  10. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    Full Text Available The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD-like) decomposition of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues for structured matrices such as the Hamiltonian matrix JAA^T.

  11. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  12. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to everyone...

  13. A framework for bootstrapping morphological decomposition

    CSIR Research Space (South Africa)

    Joubert, LJ

    2004-11-01

    Full Text Available The need for a bootstrapping approach to the morphological decomposition of words in agglutinative languages such as isiZulu is motivated, and the complexities of such an approach are described. The authors then introduce a generic framework which...

  14. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (λ = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes to the extent of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in such dilute solutions that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  15. A green approach towards adoption of chemical reaction model on 2,5-dimethyl-2,5-di-(tert-butylperoxy)hexane decomposition by differential isoconversional kinetic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Das, Mitali; Shu, Chi-Min, E-mail: shucm@yuntech.edu.tw

    2016-01-15

    Highlights: • Thermally degraded DBPH products are identified. • An appropriate mathematical model was selected for the decomposition study. • Differential isoconversional analysis was performed to obtain kinetic parameters. • Simulation on a thermal analysis model was conducted for the best storage conditions. - Abstract: This study investigated the thermal degradation products of 2,5-dimethyl-2,5-di-(tert-butylperoxy)hexane (DBPH) by TG/GC/MS to identify runaway reaction and thermal safety parameters. It also included the determination of the time to maximum rate under adiabatic conditions (TMR_ad) and the self-accelerating decomposition temperature, obtained through Advanced Kinetics and Technology Solutions. The apparent activation energy (E_a) was calculated by the differential isoconversional kinetic analysis method using differential scanning calorimetry experiments. The E_a value obtained by Friedman analysis is in the range of 118.0–149.0 kJ mol⁻¹. The TMR_ad was 24.0 h with an apparent onset temperature of 82.4 °C. This study has also established an efficient benchmark for a thermal hazard assessment of DBPH that can be applied to assure safer storage conditions.
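
    The Friedman method referred to above fits ln(dα/dt) against 1/T at fixed conversion, the slope giving -E_a/R. A sketch with synthetic rate data; the activation energy, pre-exponential factor, conversion level and temperatures are assumed for illustration, with E_a chosen inside the 118-149 kJ mol⁻¹ range reported:

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1, gas constant
Ea_true = 130e3    # J mol^-1, hypothetical (within the reported range)
A = 1e12           # s^-1, hypothetical pre-exponential factor
alpha = 0.3        # fixed conversion for the isoconversional cut

# synthetic rates at the temperatures where three heating rates reach alpha
T = np.array([420.0, 430.0, 440.0])                   # K, assumed
rate = A * (1 - alpha) * np.exp(-Ea_true / (R * T))   # first-order model

# Friedman: ln(dα/dt) = ln[A f(α)] − Ea/(R T) → slope of ln(rate) vs 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea_est = -slope * R     # recovers Ea_true for this exact synthetic data
```

    Repeating the fit at several conversion levels yields E_a as a function of α, which is how the quoted 118.0-149.0 kJ mol⁻¹ range arises.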

  16. Spectral Tensor-Train Decomposition

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.

    2016-01-01

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT...... adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online (http://pypi.python.org/pypi/TensorToolbox/)....
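
    The underlying tensor-train format can be illustrated with a generic numpy TT-SVD sketch; this is not the authors' spectral extension or their TensorToolbox code, and the test tensor is an assumed low-rank example:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores via repeated SVD."""
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))   # drop negligible modes
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        r = rank
        M = (s[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the dense tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out[0, ..., 0]

rng = np.random.default_rng(0)
a, b, c, d, e, f = (rng.standard_normal(m) for m in (4, 5, 6, 4, 5, 6))
# sum of two rank-1 tensors, so every TT rank is 2
T = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', d, e, f)
cores = tt_svd(T)
```

    The spectral extension in the paper replaces these discrete cores with functional (polynomial) cores, but the rank structure exploited is the same.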

  17. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  18. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  19. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
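
    For the special case of order d = 2, symmetric tensor decomposition reduces to the spectral theorem: a symmetric matrix is a sum of rank-1 symmetric terms. The toy matrix below is an assumption for illustration; the paper's Hankel-based algorithm addresses general order d and sub-generic ranks, which this sketch does not:

```python
import numpy as np

# Order d = 2 case: a symmetric matrix decomposes as sum_i lambda_i v_i v_i^T
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eigh(A)                        # eigh: symmetric input
terms = [lam * np.outer(v, v) for lam, v in zip(vals, vecs.T)]
A_rec = sum(terms)                                    # rank-1 terms re-summed
```
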

  20. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of an analysis of decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%

  1. Generalized decompositions of dynamic systems and vector Lyapunov functions

    Science.gov (United States)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  2. Income Inequality Decomposition, Russia 1992-2002: Method and Application

    Directory of Open Access Journals (Sweden)

    Wim Jansen

    2013-11-01

    Full Text Available Decomposition methods for income inequality measures, such as the Gini index and the members of the Generalised Entropy family, are widely applied. Most methods decompose income inequality into a between (explained) and a within (unexplained) part, according to two or more population subgroups or income sources. In this article, we use a regression analysis for a lognormal distribution of personal income, modelling both the mean and the variance, decomposing the variance as a measure of income inequality, and apply the method to survey data from Russia spanning the first decade of market transition (1992-2002). For the first years of the transition, only a small part of the income inequality could be explained. Thereafter, between 1996 and 1999, a larger part (up to 40%) could be explained, and 'winner' and 'loser' categories of the transition could be spotted. Moving to the upper end of the income distribution, the self-employed won from the transition. The unemployed were among the losers.
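
    The between/within split discussed above can be sketched directly in Python; the incomes and subgroup labels are invented for illustration, and a simple Gini coefficient is included alongside the variance decomposition since the article discusses both. This is not the article's lognormal regression model:

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted cumulative-share formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    shares = np.cumsum(x) / x.sum()
    return (n + 1 - 2 * shares.sum()) / n

# hypothetical incomes for two population subgroups (illustrative numbers)
incomes = np.array([10.0, 12.0, 15.0, 30.0, 35.0, 50.0])
groups = np.array([0, 0, 0, 1, 1, 1])

g = gini(incomes)                                  # overall inequality

# between (explained) / within (unexplained) decomposition of the variance
grand_mean = incomes.mean()
labels = np.unique(groups)
w = np.array([(groups == k).mean() for k in labels])       # subgroup shares
mu = np.array([incomes[groups == k].mean() for k in labels])
var_k = np.array([incomes[groups == k].var() for k in labels])
between = (w * (mu - grand_mean) ** 2).sum()       # explained by group means
within = (w * var_k).sum()                         # unexplained, inside groups
# between + within equals the overall variance (law of total variance)
```

    Unlike the variance, the Gini index does not split exactly into between and within terms when subgroup income ranges overlap, which is one motivation for the variance-based approach taken in the article.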

  3. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
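
    The partition-of-unity idea on which these schemes rest can be sketched in one spatial dimension; the overlap region, weight functions and test field below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

# Two overlapping subdomains on [0, 1] with weights that blend linearly over
# an assumed overlap region [0.4, 0.6] and sum to one everywhere.
x = np.linspace(0.0, 1.0, 201)
w1 = np.clip((0.6 - x) / 0.2, 0.0, 1.0)   # supported on [0, 0.6]
w2 = 1.0 - w1                             # supported on [0.4, 1]

u = np.sin(np.pi * x)                     # a global field
u1, u2 = w1 * u, w2 * u                   # subdomain contributions
u_rec = u1 + u2                           # recombined without seams
```

    In the vector schemes of the paper, each subdomain carries its own unknown and the weights enter the operator splitting; here they only illustrate that the pieces recombine exactly.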

  4. Priming of soil carbon decomposition in two Inner Mongolia grassland soils following sheep dung addition: A study using ¹³C natural abundance approach

    DEFF Research Database (Denmark)

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152 days incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C3 plant with δ13C = -26.8‰; dung δ13C = -26.2‰) or Cleistogenes squarrosa (C4 plant with δ13C = -14.6‰; dung δ13C = -15.7‰). Fresh C3 and C4 sheep dung was mixed......-amended controls. In both grassland soils, ca. 60% of the evolved CO2 originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg-1 dry soil had been emitted as CO2 for the L. chinensis...

  5. Multilevel domain decomposition for electronic structure calculations

    International Nuclear Information System (INIS)

    Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.

    2007-01-01

    We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure

  6. Meddling with middle modalities: a decomposition approach to mental health inequalities between intersectional gender and economic middle groups in northern Sweden

    Directory of Open Access Journals (Sweden)

    Per E. Gustafsson

    2016-11-01

    Full Text Available Background: Intersectionality has received increased interest within population health research in recent years, as a concept and framework to understand entangled dimensions of health inequalities, such as gender and socioeconomic inequalities in health. However, little attention has been paid to the intersectional middle groups, referring to those occupying positions of mixed advantage and disadvantage. Objective: This article aimed to (1) examine mental health inequalities between intersectional groups reflecting structural positions of gender and economic affluence and (2) decompose any observed health inequalities, among middle groups, into contributions from experiences and conditions representing processes of privilege and oppression. Design: Participants (N=25,585) came from the cross-sectional 'Health on Equal Terms' survey covering 16- to 84-year-olds in the four northernmost counties of Sweden. Six intersectional positions were constructed from gender (women vs. men) and tertiles (low vs. medium vs. high) of disposable income. Mental health was measured through the General Health Questionnaire-12. Explanatory variables covered areas of material conditions, job relations, violence, domestic burden, and healthcare contacts. Analysis of variance (Aim 1) and Blinder-Oaxaca decomposition analysis (Aim 2) were used. Results: Significant mental health inequalities were found between dominant (high-income women and middle-income men) and subordinate (middle-income women and low-income men) middle groups. The health inequalities between adjacent middle groups were mostly explained by violence (mid-income women vs. men comparison); material conditions (mid- vs. low-income men comparison); and material needs, job relations, and unmet medical needs (high- vs. mid-income women comparison). Conclusions: The study suggests complex processes whereby dominant middle groups in the intersectional space of economic affluence and gender can leverage strategic...

  7. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...

  8. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  9. Priming of soil carbon decomposition in two Inner Mongolia grassland soils following sheep dung addition: a study using ¹³C natural abundance approach.

    Science.gov (United States)

    Ma, Xiuzhi; Ambus, Per; Wang, Shiping; Wang, Yanfen; Wang, Chengjie

    2013-01-01

    To investigate the effect of sheep dung on soil carbon (C) sequestration, a 152 days incubation experiment was conducted with soils from two different Inner Mongolian grasslands, i.e. a Leymus chinensis dominated grassland representing the climax community (2.1% organic matter content) and a heavily degraded Artemisia frigida dominated community (1.3% organic matter content). Dung was collected from sheep either fed on L. chinensis (C₃ plant with δ¹³C = -26.8‰; dung δ¹³C = -26.2‰) or Cleistogenes squarrosa (C₄ plant with δ¹³C = -14.6‰; dung δ¹³C = -15.7‰). Fresh C₃ and C₄ sheep dung was mixed with the two grassland soils and incubated under controlled conditions for analysis of ¹³C-CO₂ emissions. Soil samples were taken at days 17, 43, 86, 127 and 152 after sheep dung addition to detect the δ¹³C signal in soil and dung components. Analysis revealed that 16.9% and 16.6% of the sheep dung C had decomposed, of which 3.5% and 2.8% was sequestrated in the soils of L. chinensis and A. frigida grasslands, respectively, while the remaining decomposed sheep dung was emitted as CO₂. The cumulative amounts of C respired from dung treated soils during 152 days were 7-8 times higher than in the un-amended controls. In both grassland soils, ca. 60% of the evolved CO₂ originated from the decomposing sheep dung and 40% from the native soil C. Priming effects of soil C decomposition were observed in both soils, i.e. 1.4 g and 1.6 g additional soil C kg⁻¹ dry soil had been emitted as CO₂ for the L. chinensis and A. frigida soils, respectively. Hence, the net C losses from L. chinensis and A. frigida soils were 0.6 g and 0.9 g C kg⁻¹ soil, which was 2.6% and 7.0% of the total C in L. chinensis and A. frigida grasslands soils, respectively. Our results suggest that grazing of degraded Inner Mongolian pastures may cause a net soil C loss due to the positive priming effect, thereby accelerating soil deterioration.
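
    The partitioning of respired CO₂ into dung-derived and soil-derived fractions in such natural-abundance studies rests on a two-pool δ¹³C mixing model. In the sketch below only the dung signature (-15.7‰) comes from the abstract; the soil and CO₂ signatures are hypothetical values chosen for illustration:

```python
# Two-pool delta-13C mixing model used in natural-abundance partitioning:
# f_dung = (delta_CO2 - delta_soil) / (delta_dung - delta_soil)
delta_dung = -15.7   # per mil, C4 sheep dung (value from the abstract)
delta_soil = -22.0   # per mil, hypothetical native C3 soil end-member
delta_co2 = -18.2    # per mil, hypothetical measured respired CO2

f_dung = (delta_co2 - delta_soil) / (delta_dung - delta_soil)
f_soil = 1.0 - f_dung
# with these assumed signatures, f_dung comes out near 0.6, illustrating how
# a "ca. 60% dung-derived" figure is obtained from measured delta values
```
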

  10. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  11. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  12. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
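The pipeline the abstract describes can be sketched in a few lines: an affinity matrix built from a Gaussian (Parzen-style) kernel, factorized by a symmetric form of non-negative matrix factorization so that row-normalized factors act as class-membership posteriors. Kernel width and cluster count are hard-coded here, whereas the paper selects them by cross-validation; the damped multiplicative update is a standard symmetric-NMF rule, not necessarily the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs of five points each
X = np.vstack([rng.normal(0, 0.1, (5, 2)),
               rng.normal(5, 0.1, (5, 2))])

# Affinity from a Gaussian kernel (elements of a non-parametric density estimator)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / (2 * 0.5 ** 2))

# Symmetric NMF  A ~ H H^T  via damped multiplicative updates
K = 2
H = rng.random((len(X), K)) + 0.1
for _ in range(500):
    H *= 0.5 + 0.5 * (A @ H) / (H @ (H.T @ H) + 1e-12)

post = H / H.sum(1, keepdims=True)   # row-normalized membership "posteriors"
labels = post.argmax(1)
```

On this nearly block-diagonal affinity matrix the factors converge to per-block indicators, so the argmax labels recover the two blobs.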

  13. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    Present article is devoted to decomposition of danburite of Ak-Arkhar Deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of decomposition process of boron containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of decomposition of boron and iron oxides on process duration, dosage of H 2 SO 4 , acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.
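The apparent activation energy mentioned in the abstract is conventionally extracted from an Arrhenius plot: rate constants measured at several temperatures, with ln k regressed on 1/T so that the slope gives -Ea/R. A sketch with hypothetical numbers (not the paper's data):

```python
import numpy as np

R = 8.314                                    # gas constant, J/(mol K)
# Hypothetical leaching rate constants at several temperatures
T = np.array([313.0, 333.0, 353.0, 373.0])   # K
Ea_true, A = 40e3, 2.0e4                     # 40 kJ/mol and a pre-exponential factor
k = A * np.exp(-Ea_true / (R * T))

# Arrhenius plot: ln k = ln A - (Ea/R) * (1/T); the slope yields Ea
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R
```

On real kinetic data the fit residuals would also indicate whether a single activation energy (i.e., a single rate-limiting step) describes the whole temperature range.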

  14. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous … °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O …

  15. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  16. Mobility Modelling through Trajectory Decomposition and Prediction

    OpenAIRE

    Faghihi, Farbod

    2017-01-01

The ubiquity of mobile devices with positioning sensors makes it possible to derive a user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors or due to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking a user's movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...

  17. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)₆) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)₆ complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)₆ and W(CO)₆ are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)₆ is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
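The superposition the abstract describes — gas adsorption chromatography plus first-order decomposition — can be sketched as a toy Monte-Carlo simulation: each molecule makes a sequence of adsorption stops with Frenkel-law residence times and may decompose during any stop. All parameter values below are illustrative stand-ins, not the paper's fitted thermodynamic data:

```python
import math
import random

random.seed(1)

def survive_fraction(n_mol, n_stops, T, dH_ads=-75e3, k0=1e14, E_dec=160e3):
    """Fraction of carbonyl-like molecules leaving the column intact.

    Each molecule makes n_stops adsorption stops; the mean residence time per
    stop follows a Frenkel-type law, and during each stop the molecule
    decomposes with first-order probability 1 - exp(-k t). All parameters
    (adsorption enthalpy, Arrhenius prefactor, decomposition barrier) are
    hypothetical illustration values.
    """
    R, nu = 8.314, 1e13                         # gas constant; surface vibration freq.
    tau = math.exp(-dH_ads / (R * T)) / nu      # mean residence time per stop
    k_dec = k0 * math.exp(-E_dec / (R * T))     # first-order decomposition rate
    alive = 0
    for _ in range(n_mol):
        for _ in range(n_stops):
            t = random.expovariate(1.0 / tau)   # stochastic residence time
            if random.random() < 1.0 - math.exp(-k_dec * t):
                break                           # decomposed on the surface
        else:
            alive += 1
    return alive / n_mol
```

Sweeping the temperature then yields the survival curve whose shape is what such an experiment benchmarks against theory.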

  18. Fast modal decomposition for optical fibers using digital holography.

    Science.gov (United States)

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
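The modal-coefficient step the abstract relies on is an orthonormal projection: given the (holographically measured) complex field and a set of orthonormal basis modes, each coefficient is the inner product of the field with that mode. A sketch with synthetic 1-D "modes" standing in for the fiber's guided modes (the profiles and coefficients below are arbitrary test values):

```python
import numpy as np

# Build a few orthonormal mode profiles on a 1-D grid by QR-orthonormalizing
# smooth Gaussian-weighted polynomials (stand-ins for LP fiber modes)
x = np.linspace(-1, 1, 200)
raw = np.stack([np.exp(-x**2),
                x * np.exp(-x**2),
                (2 * x**2 - 1) * np.exp(-x**2)])
modes, _ = np.linalg.qr(raw.T)       # columns are orthonormal
modes = modes.T                      # rows are orthonormal modes

# Synthesize an "output field" as a known complex modal superposition
c_true = np.array([0.8 + 0.1j, 0.5 - 0.2j, 0.3 + 0.3j])
field = (c_true[:, None] * modes).sum(0)

# Recover modal coefficients by projection onto the (conjugated) basis modes
c_est = modes.conj() @ field
```

Because the modes are orthonormal on the grid, the projection recovers the coefficients exactly; with measured data the residual after reconstruction quantifies how complete the chosen mode set is.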

  19. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    Science.gov (United States)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
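The key construction can be sketched directly: with the mean conformation tensor factored as C̄ = L Lᵀ (Cholesky), the fluctuation G = L⁻¹ C L⁻ᵀ is again positive-definite and reduces to the identity when the instantaneous tensor equals the mean, which is what lets it be read as a deformation relative to the mean configuration. A minimal numerical sketch (random SPD tensors in place of DNS data):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """A generic symmetric positive-definite matrix for illustration."""
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

Cbar = random_spd(3)     # mean conformation tensor (SPD)
C = random_spd(3)        # instantaneous conformation tensor (SPD)

# Geometric fluctuation relative to the mean: G = L^{-1} C L^{-T}, Cbar = L L^T.
# Unlike the Reynolds fluctuation C - Cbar, G stays positive-definite.
L = np.linalg.cholesky(Cbar)
Linv = np.linalg.inv(L)
G = Linv @ C @ Linv.T
```

Scalar measures of the fluctuation (e.g., functions of the eigenvalues of G, such as log-determinant distances) then live on the non-Euclidean geometry of SPD tensors the paper develops.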

  20. Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients

    Directory of Open Access Journals (Sweden)

    Vlasta Bari

    2014-09-01

Full Text Available Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
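The complexity index used in the study, sample entropy, can be sketched compactly; the EMD step itself (extracting the fastest intrinsic mode function and subtracting it from the series) would be delegated to an EMD implementation and is not reproduced here. This is a simplified textbook SampEn, not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that template
    vectors matching for m points (within tolerance r, Chebyshev distance)
    also match for m + 1 points. Self-matches are excluded."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()                       # common tolerance choice

    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += int((d <= r).sum())
        return c

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)
</n```

A regular oscillation, whose m-point matches almost always extend to m + 1 points, scores much lower than white noise, which is the contrast the EMD pre-filtering is designed to sharpen.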

  1. Proton mass decomposition

    Science.gov (United States)

    Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei

    2018-03-01

    We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With 1-loop pertur-bative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)% respectively in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.

  2. Art of spin decomposition

    International Nuclear Information System (INIS)

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-01-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  3. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions

    Science.gov (United States)

    Wang, Yanxue; Yang, Lin; Xiang, Jiawei; Yang, Jianwei; He, Shuilong

    2017-12-01

Rolling element bearings are one of the main elements in rotating machines, whose failure may lead to a fatal breakdown and significant economic losses. Conventional vibration-based diagnostic methods are based on the stationarity assumption, and thus are not applicable to the diagnosis of bearings working under varying speeds. This constraint significantly limits the industrial application of bearing diagnosis. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions is proposed in this work, based on computed order tracking (COT) and variational mode decomposition (VMD)-based time-frequency representation (VTFR). COT is utilized to resample the non-stationary vibration signal in the angular domain, while VMD is used to decompose the resampled signal into a number of band-limited intrinsic mode functions (BLIMFs). A VTFR is then constructed based on the estimated instantaneous frequency and instantaneous amplitude of each BLIMF. Moreover, the Gini index and time-frequency kurtosis are proposed to quantitatively measure the sparsity and the concentration of the time-frequency representation, respectively. The effectiveness of the VTFR for extracting nonlinear components has been verified on a bat signal. Results of this numerical simulation also show that the sparsity and concentration of the VTFR are better than those of short-time Fourier transform, continuous wavelet transform, Hilbert-Huang transform and Wigner-Ville distribution techniques. Several experimental results have further demonstrated that the proposed method can well detect bearing faults under variable speed conditions.
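The Gini index used here as a sparsity measure (in the sense popularized by Hurley and Rickard) has a closed form on the sorted magnitudes of the coefficients: it is 0 for a perfectly flat vector and approaches 1 for a maximally sparse one. A small sketch:

```python
import numpy as np

def gini_index(x):
    """Gini sparsity index of a coefficient vector: 0 for a flat vector,
    approaching 1 for a one-hot (maximally sparse) vector."""
    c = np.sort(np.abs(np.asarray(x, float)))   # ascending sorted magnitudes
    N = len(c)
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum(c / c.sum() * (N - k + 0.5) / N)
```

Applied column-wise (or globally) to a time-frequency representation, a higher Gini index indicates that the energy is concentrated in few coefficients, which is how the paper ranks VTFR against STFT, CWT and the other transforms.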

  4. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be treated separately. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  5. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

It is a challenging task to learn a large Bayesian network from a small data set. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for the structure construction without having to learn the complete network. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.

  6. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding … methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography …

  7. Material elemental decomposition in dual and multi-energy CT via a sparsity-dictionary approach for proton stopping power ratio calculation.

    Science.gov (United States)

    Shen, Chenyang; Li, Bin; Chen, Liyuan; Yang, Ming; Lou, Yifei; Jia, Xun

    2018-04-01

Accurate calculation of proton stopping power ratio (SPR) relative to water is crucial to proton therapy treatment planning, since SPR affects prediction of beam range. Current standard practice derives SPR using a single CT scan. Recent studies showed that dual-energy CT (DECT) offers advantages to accurately determine SPR. One method to further improve accuracy is to incorporate prior knowledge on human tissue composition through a dictionary approach. In addition, it is also suggested that using CT images with multiple (more than two) energy channels, i.e., multi-energy CT (MECT), can further improve accuracy. In this paper, we proposed a sparse dictionary-based method to convert CT numbers of DECT or MECT to elemental composition (EC) and relative electron density (rED) for SPR computation. A dictionary was constructed to include materials generated based on human tissues of known compositions. For a voxel with CT numbers of different energy channels, its EC and rED are determined subject to a constraint that the resulting EC is a linear non-negative combination of only a few tissues in the dictionary. We formulated this as a non-convex optimization problem. A novel algorithm was designed to solve the problem. The proposed method has a unified structure to handle both DECT and MECT with different numbers of channels. We tested our method in both simulation and experimental studies. Average errors of SPR in experimental studies were 0.70% in DECT, 0.53% in MECT with three energy channels, and 0.45% in MECT with four channels. We also studied the impact of parameter values and established appropriate parameter values for our method. The proposed method can accurately calculate SPR using DECT and MECT. The results suggest that using more energy channels may improve the SPR estimation accuracy. © 2018 American Association of Physicists in Medicine.
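The sparsity constraint — a voxel's multi-channel CT numbers explained by a non-negative combination of only a few dictionary tissues — can be illustrated with a brute-force stand-in for the paper's non-convex solver: enumerate small tissue subsets and keep the best non-negative least-squares fit. The dictionary values below are purely illustrative, not calibrated CT numbers:

```python
import numpy as np
from itertools import combinations

# Hypothetical dictionary: CT numbers of reference tissues in 3 energy
# channels (rows = tissues); all values are illustrative only.
D = np.array([
    [  40.0,   45.0,   48.0],   # muscle-like
    [ -80.0,  -95.0, -100.0],   # adipose-like
    [ 700.0,  500.0,  400.0],   # cortical-bone-like
    [   0.0,    0.0,    0.0],   # water
])

def sparse_fit(ct, D, k=2):
    """Best non-negative combination of at most k dictionary tissues that
    explains the measured CT-number vector (brute force over subsets; a
    stand-in for the sparsity-constrained optimization in the paper)."""
    best = (np.inf, None, None)
    for idx in combinations(range(len(D)), k):
        A = D[list(idx)].T                        # channels x k design matrix
        w, *_ = np.linalg.lstsq(A, ct, rcond=None)
        if (w >= -1e-9).all():                    # keep only non-negative mixes
            resid = np.linalg.norm(A @ w - ct)
            if resid < best[0]:
                best = (resid, idx, np.clip(w, 0.0, None))
    return best
```

The recovered tissue weights would then be mapped through the dictionary's known elemental compositions and electron densities to get EC and rED for the SPR formula.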

  8. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.

  9. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    Science.gov (United States)

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  10. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
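The Shapley value underlying the generalized Fisher formulae has a compact generic statement: the contribution of factor i to the change in V = x₁·x₂·…·x_r is the average, over all r! orders of switching factors from base to comparison values, of the change V causes when factor i is switched. For r = 2 this reproduces the familiar "ideal"-index split ΔV = Δx·(y₀+y₁)/2 + Δy·(x₀+x₁)/2. A small sketch (generic Shapley decomposition, not de Boer's tables):

```python
from itertools import permutations
from math import factorial

def shapley_decomposition(x0, x1):
    """Additively decompose V1 - V0, with V the product of r factors, into
    one contribution per factor: the Shapley value over all r! switching
    orders. Equivalent to the generalized Fisher / Siegel-Shapley formulae."""
    r = len(x0)

    def V(state):                       # state[i]: factor i at comparison value?
        out = 1.0
        for i, s in enumerate(state):
            out *= x1[i] if s else x0[i]
        return out

    contrib = [0.0] * r
    for order in permutations(range(r)):
        state = [False] * r
        for i in order:                 # switch factors on, one at a time
            before = V(state)
            state[i] = True
            contrib[i] += V(state) - before
    return [c / factorial(r) for c in contrib]
```

By construction the factor contributions sum exactly to the aggregate change, which is the "perfect decomposition" property stressed in this literature.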

  11. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    Present article is devoted to decomposition of danburite of Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron containing ores of Ak-Arkhar Deposit of Tajikistan with mineral acids, including hydrochloric acid was studied. The optimal conditions of extraction of valuable components from danburite composition were determined. The chemical composition of danburite of Ak-Arkhar Deposit was determined as well. The kinetics of decomposition of calcined danburite by hydrochloric acid was studied. The apparent activation energy of the process of danburite decomposition by hydrochloric acid was calculated.

  12. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
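The idea of derivative spectroscopy supplying initial guesses for a Gaussian fit can be sketched simply: locate candidate components as descending zero crossings of the smoothed first derivative, then solve for amplitudes linearly. This toy version uses a fixed guessed width and a linear least-squares amplitude step, whereas AGD itself uses machine-learning-tuned derivative thresholds and a full nonlinear fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 1000)
truth = [(1.0, 30.0, 4.0), (0.6, 62.0, 6.0)]          # (amplitude, center, width)
y = sum(a * np.exp(-0.5 * ((x - m) / s) ** 2) for a, m, s in truth)
y = y + rng.normal(0, 0.005, x.size)                  # mild observational noise

# Derivative-based guesses: local maxima are descending zero crossings of the
# smoothed first derivative, kept only where the signal exceeds a noise floor
ker = np.exp(-0.5 * np.linspace(-3, 3, 31) ** 2)
ys = np.convolve(y, ker / ker.sum(), mode="same")
dy = np.gradient(ys, x)
cand = x[1:][(dy[:-1] > 0) & (dy[1:] <= 0)]
cand = cand[ys[np.searchsorted(x, cand)] > 0.1]
centers = []
for c in cand:                                        # merge crossings from noise
    if not centers or c - centers[-1] > 5.0:
        centers.append(c)
centers = np.array(centers)

# With centers (and a guessed common width) fixed, amplitudes follow from a
# linear least-squares fit against the Gaussian design matrix
w_guess = 5.0
G = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / w_guess) ** 2)
amps, *_ = np.linalg.lstsq(G, y, rcond=None)
```

These guesses would then seed a nonlinear refinement of amplitudes, centers, and widths together, which is the step that makes the final fit comparable to human-derived solutions.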

  13. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes

  14. Primary decomposition of zero-dimensional ideals over finite fields

    Science.gov (United States)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing or any generic projection; instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
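The univariate prototype the abstract invokes — Berlekamp's use of the Frobenius invariant subspace — is small enough to sketch fully: for a monic squarefree f over GF(p), build the matrix Q of the map g → g^p on GF(p)[x]/(f); the nullity of Q − I equals the number of distinct irreducible factors. A self-contained sketch (the multivariate generalization in the paper replaces f by a zero-dimensional ideal):

```python
def poly_mulmod(a, b, f, p):
    """Product a*b reduced modulo monic f, coefficients mod p.
    Polynomials are coefficient lists, lowest degree first; deg f = n >= 2."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                res[i + j] = (res[i + j] + ai * bj) % p
    n = len(f) - 1
    for d in range(len(res) - 1, n - 1, -1):     # divide out multiples of f
        c = res[d]
        if c:
            for k in range(n + 1):
                res[d - n + k] = (res[d - n + k] - c * f[k]) % p
    return (res + [0] * n)[:n]

def rank_mod_p(M, p):
    """Rank of a matrix over GF(p) by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)        # inverse modulo prime p
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                c = M[r][col]
                M[r] = [(v - c * w) % p for v, w in zip(M[r], M[rank])]
        rank += 1
    return rank

def count_irreducible_factors(f, p):
    """Number of distinct irreducible factors of a monic squarefree f over
    GF(p): the nullity of Q - I, with Q the matrix of the Frobenius map
    g -> g^p acting on GF(p)[x]/(f) (Berlekamp). Requires deg f >= 2."""
    n = len(f) - 1
    xp, base, e = [1] + [0] * (n - 1), [0, 1] + [0] * (n - 2), p
    while e:                                     # x^p mod f by square-and-multiply
        if e & 1:
            xp = poly_mulmod(xp, base, f, p)
        base = poly_mulmod(base, base, f, p)
        e >>= 1
    rows, cur = [], [1] + [0] * (n - 1)
    for _ in range(n):                           # row i holds x^(p*i) mod f
        rows.append(cur)
        cur = poly_mulmod(cur, xp, f, p)
    QmI = [[(rows[i][j] - (i == j)) % p for j in range(n)] for i in range(n)]
    return n - rank_mod_p(QmI, p)
```

A basis of the invariant subspace (the actual null vectors, not just their count) is what yields the splitting polynomials and hence the complete factorization.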

  15. A Decomposition Approach for Shipboard Manpower Scheduling

    Science.gov (United States)

    2009-01-01

generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower…to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates

  16. Bayesian approach to magnetotelluric tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Červ, Václav; Pek, Josef; Menvielle, M.

    2010-01-01

    Roč. 53, č. 2 (2010), s. 21-32 ISSN 1593-5213 R&D Projects: GA AV ČR IAA200120701; GA ČR GA205/04/0746; GA ČR GA205/07/0292 Institutional research plan: CEZ:AV0Z30120515 Keywords : galvanic distortion * telluric distortion * impedance tensor * basic procedure * inversion * noise Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.336, year: 2010

  17. NRSA enzyme decomposition model data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...

  18. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
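The multiplicative Schwarz method the abstract refers to can be sketched on the simplest model problem: −u″ = 1 on (0, 1) with homogeneous Dirichlet data, two overlapping subdomains, and a sweep that solves each local problem against the current global residual. Problem sizes and overlap below are illustrative choices:

```python
import numpy as np

# -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact solution u(x) = x(1 - x)/2
n = 49                                   # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0 / h**2))
     - np.diag(np.full(n - 1, 1.0 / h**2), 1)
     - np.diag(np.full(n - 1, 1.0 / h**2), -1))
f = np.ones(n)
u_direct = np.linalg.solve(A, f)         # reference solution

# Two overlapping subdomains; multiplicative (alternating) Schwarz sweeps
s1, s2 = slice(0, 30), slice(20, n)      # ~10-point overlap
u = np.zeros(n)
for _ in range(200):
    for s in (s1, s2):
        r = f - A @ u                    # global residual with current data
        u[s] += np.linalg.solve(A[s, s], r[s])   # local correction
err = np.abs(u - u_direct).max()
```

Solving the subdomains simultaneously (from the same residual) instead of sequentially gives the additive variant: better parallelism, slower convergence, exactly the trade-off the hybrid algorithms in the paper target.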

  19. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
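The matrix building block the paper reduces to — an interpolative decomposition, A ≈ A[:, cols] · P with a small set of "skeleton" columns — can be sketched with a greedy pivoted Gram-Schmidt column selection (a simple stand-in for the pivoted-QR or randomized-projection machinery used at scale):

```python
import numpy as np

def interpolative_decomposition(A, k):
    """Rank-k ID: choose k skeleton columns of A by greedy pivoted
    Gram-Schmidt, then express every column of A in terms of them,
    so that A ~ A[:, cols] @ P."""
    R = A.astype(float).copy()
    cols = []
    for _ in range(k):
        j = int(np.argmax((R * R).sum(0)))   # column with largest residual norm
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)              # deflate the chosen direction
        cols.append(j)
    P, *_ = np.linalg.lstsq(A[:, cols], A, rcond=None)
    return cols, P
```

For tensor ID, the columns being selected are (projections of) the terms of the canonical tensor decomposition, so the selected subset of terms represents the rest, reducing the separation rank.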

  20. Tensor gauge condition and tensor field decomposition

    Science.gov (United States)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.

  1. Real interest parity decomposition

    Directory of Open Access Journals (Sweden)

    Alex Luiz Ferreira

    2009-09-01

    Full Text Available The aim of this paper is to investigate the general causes of real interest rate differentials (rids) for a sample of emerging markets for the period of January 1996 to August 2007. To this end, two methods are applied. The first consists of breaking the variance of rids down into relative purchasing power parity and uncovered interest rate parity, and shows that inflation differentials are the main source of rids variation; the second method breaks down the rids and nominal interest rate differentials (nids) into nominal and real shocks. Bivariate autoregressive models are estimated under particular identification conditions, having been adequately treated for the identified structural breaks. Impulse response functions and error variance decomposition point to real shocks as being the likely cause of rids.

  2. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with a nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition, and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)

  3. MADCam: The multispectral active decomposition camera

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille

    2001-01-01

    A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform...... that utilised information drawn from the temporal dimension instead of the traditional spatial approach. Using the CIF format (352x288) frame rates up to 30 Hz are obtained and in VGA mode (640x480) up to 15 Hz....

  4. Comparing structural decomposition analysis and index

    International Nuclear Information System (INIS)

    Hoekstra, Rutger; Van den Bergh, Jeroen C.J.M.

    2003-01-01

    To analyze and understand historical changes in economic, environmental, employment or other socio-economic indicators, it is useful to assess the driving forces or determinants that underlie these changes. Two techniques for decomposing indicator changes at the sector level are structural decomposition analysis (SDA) and index decomposition analysis (IDA). For example, SDA and IDA have been used to analyze changes in indicators such as energy use, CO2 emissions, labor demand and value added. The changes in these variables are decomposed into determinants such as technological, demand, and structural effects. SDA uses information from input-output tables while IDA uses aggregate data at the sector level. The two methods have developed quite independently, which has resulted in each method being characterized by specific, unique techniques and approaches. This paper has three aims. First, the similarities and differences between the two approaches are summarized. Second, the possibility of transferring specific techniques and indices is explored. Finally, a numerical example is used to illustrate differences between the two approaches.
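
    As a minimal illustration of the index-decomposition side, the following sketch applies a two-factor LMDI-I decomposition (a common IDA variant, not necessarily the specific index compared in the paper) to hypothetical sector data, splitting an energy-use change into activity and intensity effects.

```python
import math

def logmean(a, b):
    """Logarithmic mean, the LMDI weighting function."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Hypothetical sector data: energy E = activity A x intensity I.
A0, I0 = 100.0, 0.50   # base year
A1, I1 = 130.0, 0.42   # comparison year
E0, E1 = A0 * I0, A1 * I1

L = logmean(E1, E0)
activity_effect  = L * math.log(A1 / A0)   # demand/structural growth
intensity_effect = L * math.log(I1 / I0)   # technological change

# LMDI-I is exact: the effects sum to the observed change E1 - E0.
print(activity_effect, intensity_effect, E1 - E0)
```

    The additivity (no residual term) is the main reason LMDI-type indices are popular in IDA practice.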

  5. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two......In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems...... of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  6. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the attractive gluon force.

  7. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the attractive gluon force. (orig.)

  8. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and the existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  9. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyls) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen 73 ± 2 kcal/mole; benzene 76 ± 2 kcal/mole; meta-triphenyl 53 ± 2 kcal/mole; biphenyl decomposition 64 ± 2 kcal/mole. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the assay of the aromatic C-H bonds by infrared spectrophotometry. As a result, the existence in tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)

  10. On the correspondence between data revision and trend-cycle decomposition

    NARCIS (Netherlands)

    Dungey, M.; Jacobs, J. P. A. M.; Tian, J.; van Norden, S.

    2013-01-01

    This article places the data revision model of Jacobs and van Norden (2011) within a class of trend-cycle decompositions relating directly to the Beveridge-Nelson decomposition. In both these approaches, identifying restrictions on the covariance matrix under simple and realistic conditions may

  11. Theoretical and experimental study: the size dependence of decomposition thermodynamics of nanomaterials

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Li, Wenjiao; Xue, Yongqiang

    2015-01-01

    In the processes of preparation and application of nanomaterials, the decomposition reactions of nanomaterials are often involved. However, there is a dramatic difference in decomposition thermodynamics between nanomaterials and their bulk counterparts, and the difference depends on the size of the particles that compose the nanomaterials. In this paper, the decomposition model of a nanoparticle was built, the theory of decomposition thermodynamics of nanomaterials was proposed, and the relations for the size dependence of thermodynamic quantities of the decomposition reactions were deduced. In experiment, taking the thermal decomposition of nano-Cu2(OH)2CO3 with different particle sizes (radius range 8.95–27.4 nm) as a system, the reaction thermodynamic quantities were determined, and the regularities of size dependence of the quantities were summarized. These experimental regularities are consistent with the above thermodynamic relations. The results show that there is a significant effect of the size of the particles composing a nanomaterial on the decomposition thermodynamics. When all the decomposition products are gases, the differences in thermodynamic quantities of reaction between the nanomaterials and the bulk counterparts depend on the particle size; while when one of the decomposition products is a solid, the differences depend on both the initial particle size of the nanoparticle and the decomposition ratio. When the decomposition ratio is very small, these differences are only related to the initial particle size; and when the radius of the nanoparticles approaches or exceeds 10 nm, the reaction thermodynamic functions and the logarithm of the equilibrium constant are linearly associated with the reciprocal of the radius. The thermodynamic theory can quantitatively describe the regularities of the size dependence of thermodynamic quantities for decomposition reactions of nanomaterials, and contribute to the researches and the

  12. Lie bialgebras with triangular decomposition

    International Nuclear Information System (INIS)

    Andruskiewitsch, N.; Levstein, F.

    1992-06-01

    Lie bialgebras originated in a triangular decomposition of the underlying Lie algebra are discussed. The explicit formulas for the quantization of the Heisenberg Lie algebra and some motion Lie algebras are given, as well as the algebra of rational functions on the quantum Heisenberg group and the formula for the universal R-matrix. (author). 17 refs

  13. Decomposition of metal nitrate solutions

    International Nuclear Information System (INIS)

    Haas, P.A.; Stines, W.B.

    1982-01-01

    Oxides in powder form are obtained from aqueous solutions of one or more heavy metal nitrates (e.g. U, Pu, Th, Ce) by thermal decomposition at 300 to 800 deg C in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal. (author)

  14. Probability inequalities for decomposition integrals

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2017-01-01

    Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords : Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

  15. Thermal decomposition of ammonium hexachloroosmate

    DEFF Research Database (Denmark)

    Asanova, T I; Kantor, Innokenty; Asanov, I. P.

    2016-01-01

    Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reduction atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to meta...

  16. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  17. Efficient morse decompositions of vector fields.

    Science.gov (United States)

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for the applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.

  18. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  19. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    Full Text Available In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received quite an interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to additive cost functions, the wavelet packet decomposition of the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to EC methods.
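
    For contrast with the near-best-basis and EC approaches studied here, the classical additive-cost best-basis search over a Haar wavelet packet tree can be sketched as follows (pure NumPy; the l1 cost and all names are illustrative assumptions, not the paper's setup).

```python
import numpy as np

def haar_split(x):
    """One orthonormal Haar analysis step: approximation and detail."""
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def cost(c):
    # Additive l1 cost: small when energy is concentrated in few coefficients.
    return np.sum(np.abs(c))

def best_basis(x, max_level=4):
    """Return (cost, leaf coefficient arrays) of the cheapest wavelet
    packet basis under an additive cost, by bottom-up tree pruning."""
    if max_level == 0 or len(x) < 2:
        return cost(x), [x]
    a, d = haar_split(x)
    ca, la = best_basis(a, max_level - 1)
    cd, ld = best_basis(d, max_level - 1)
    if ca + cd < cost(x):          # children cheaper: keep the split
        return ca + cd, la + ld
    return cost(x), [x]            # parent cheaper: prune the subtree

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.arange(64) / 64) + 0.1 * rng.standard_normal(64)
c, leaves = best_basis(x)
# The basis is orthonormal, so the coefficient count is preserved.
print(c, sum(len(l) for l in leaves))
```

    With an additive cost this dynamic-programming search is exactly optimal, which is precisely what fails for the nonadditive costs the near-best-basis algorithm targets.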

  20. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    Science.gov (United States)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.

  1. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products

  2. Universality of Schmidt decomposition and particle identity

    Science.gov (United States)

    Sciara, Stefania; Lo Franco, Rosario; Compagno, Giuseppe

    2017-03-01

    Schmidt decomposition is a widely employed tool of quantum theory which plays a key role for distinguishable particles in scenarios such as entanglement characterization, theory of measurement and state purification. Yet, its formulation for identical particles remains controversial, jeopardizing its application to analyze general many-body quantum systems. Here we prove, using a newly developed approach, a universal Schmidt decomposition which allows faithful quantification of the physical entanglement due to the identity of particles. We find that it is affected by single-particle measurement localization and state overlap. We study paradigmatic two-particle systems where identical qubits and qutrits are located in the same place or in separated places. For the case of two qutrits in the same place, we show that their entanglement behavior, whose physical interpretation is given, differs from that obtained before by different methods. Our results are generalizable to multiparticle systems and open the way for further developments in quantum information processing exploiting particle identity as a resource.
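
    For distinguishable particles, the standard Schmidt decomposition that this work generalizes amounts to an SVD of the reshaped state vector; a minimal sketch (not the authors' identical-particle construction):

```python
import numpy as np

def schmidt(psi, dA, dB):
    """Schmidt decomposition of a bipartite pure state in C^dA (x) C^dB.
    Returns the Schmidt coefficients and the two orthonormal local bases."""
    M = psi.reshape(dA, dB)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return s, U, Vh

# Bell state (|00> + |11>)/sqrt(2): two equal Schmidt coefficients.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
s, U, Vh = schmidt(psi, 2, 2)
print(s)

# Entanglement entropy from the squared Schmidt coefficients.
p = s**2
entropy = -(p * np.log2(p)).sum()
print(entropy)
```

    The squared Schmidt coefficients give the reduced-state spectrum, so the Bell state yields exactly one bit of entanglement entropy.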

  3. Spectral decomposition of nonlinear systems with memory

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
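
    A minimal sketch of the Mittag-Leffler relaxation mentioned above, evaluated via its defining power series (adequate for moderate arguments; dedicated algorithms are needed for large ones):

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))

# alpha = 1 recovers ordinary exponential relaxation: E_1(z) = exp(z).
print(mittag_leffler(1.0, -2.0), math.exp(-2.0))

# alpha = 1/2 decays more slowly than the exponential at the same argument,
# the heavy-tailed, scale-free behavior characteristic of memory effects.
print(mittag_leffler(0.5, -2.0), math.exp(-2.0))
```

    The slower-than-exponential tail for alpha < 1 is what distinguishes modes with long-term memory from memoryless exponential modes.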

  4. Speckle imaging using the principal value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an 'optimal' filtering approach. This method is based on a nonlinear integral equation which is solved by principal value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures

  5. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. Performances of the proposed algorithms are evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  6. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of /sup 14/C-n-Hexadecane labelled diesel oil by special microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil an increase of the decomposition rate was found only in the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  7. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm2, the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition results in two parts: one which is organic and volatile, and another which is inorganic and remains, forming an ever-thickening screen to light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect.

  8. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react

  9. IN SITU INFRARED STUDY OF CATALYTIC DECOMPOSITION OF NITRIC OXIDE (NO); FINAL

    International Nuclear Information System (INIS)

    Unknown

    1999-01-01

    The growing concerns for the environment and increasingly stringent standards for NO emission have presented a major challenge to control NO emissions from electric utility plants and automobiles. Catalytic decomposition of NO is the most attractive approach for the control of NO emission because of its simplicity. Successful development of an effective catalyst for NO decomposition will greatly decrease the equipment and operation cost of NO control. Due to lack of understanding of the mechanism of NO decomposition, efforts in the search for an effective catalyst have been unsuccessful. Scientific development of an effective catalyst requires fundamental understanding of the nature of the active site, the rate-limiting step, and an approach to prolong the life of the catalyst. The authors have investigated the feasibility of two novel approaches for improving catalyst activity and resistance to sintering. The first approach is the use of silanation to stabilize metal crystallites and supports for Cu-ZSM-5 and promoted Pt catalysts; the second is the utilization of oxygen spillover and desorption to enhance NO decomposition activity. The silanation approach failed to stabilize Cu-ZSM-5 activity under hydrothermal conditions. Silanation blocked oxygen migration and inhibited oxygen desorption. Oxygen spillover was found to be an effective approach for promoting NO decomposition activity on Pt-based catalysts. A detailed mechanistic study revealed oxygen inhibition in NO decomposition and reduction as the most critical issue in developing an effective catalytic approach for controlling NO emission.

  10. Investigating the role of male advantage and female disadvantage in explaining the discrimination effect of the gender pay gap in the Cameroon labor market. Oaxaca-Ransom decomposition approach

    Directory of Open Access Journals (Sweden)

    Dickson Thomas NDAMSA

    2015-05-01

    Full Text Available The paper assesses the sources of gender-based wage differentials and investigates the relative importance of the endowment effect, female disadvantage and male advantage in explaining gender-based wage differentials in the Cameroon labor market. Use is made of the Ordinary Least Squares technique and the Oaxaca-Ransom decomposition. Oaxaca-Ransom decomposition results show that primary education, secondary education, tertiary education and professional training are sources of the gender pay gap. Our results also underline the importance of working experience, formal sector employment and urban residency in explaining wage differentials between male and female workers in the Cameroon labor market. Our findings reveal that educational human capital explains a greater portion of the endowment effect and contributes little to the discrimination effect. Essentially, we observe that the discrimination effect worsens the gender pay gap, in contrast to the mitigating role of the endowment effect. Again, our results show that a greater part of the discrimination effect of the gender pay gap is attributed to female disadvantage in the Cameroon labor market.
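    The three-fold split used above (endowment effect, male advantage, female disadvantage) can be illustrated on synthetic data, with pooled-sample coefficients as the nondiscriminatory reference in the spirit of Oaxaca-Ransom. The data and the single "schooling" regressor are invented, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

n = 500
Xm = np.column_stack([np.ones(n), rng.normal(12, 2, n)])  # intercept, schooling (men)
Xf = np.column_stack([np.ones(n), rng.normal(11, 2, n)])  # intercept, schooling (women)
ym = Xm @ np.array([1.0, 0.08]) + rng.normal(0, 0.1, n)   # synthetic male log wages
yf = Xf @ np.array([0.9, 0.06]) + rng.normal(0, 0.1, n)   # synthetic female log wages

bm, bf = ols(Xm, ym), ols(Xf, yf)
b_star = ols(np.vstack([Xm, Xf]), np.concatenate([ym, yf]))  # pooled reference

xm_bar, xf_bar = Xm.mean(axis=0), Xf.mean(axis=0)
endowment  = (xm_bar - xf_bar) @ b_star    # explained by characteristics
male_adv   = xm_bar @ (bm - b_star)        # male advantage
female_dis = xf_bar @ (b_star - bf)        # female disadvantage
gap = ym.mean() - yf.mean()                # raw gap; equals the sum of the three terms
```

    Because each group regression includes an intercept, the identity gap = endowment + male advantage + female disadvantage holds exactly, which is what makes the decomposition an accounting of the raw gap rather than an approximation.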

  11. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using analytical approach for a comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
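    The damping-identification step, a linear fit to the log of the decaying modal amplitude, can be illustrated on a synthetic single-mode response. The signal parameters below are invented, and the modal extraction itself (done by VMD in the paper) is assumed to have already happened:

```python
import numpy as np

fs = 1000.0                                  # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
zeta_true, wn = 0.02, 2 * np.pi * 5.0        # 2% damping, 5 Hz natural frequency
wd = wn * np.sqrt(1 - zeta_true**2)          # damped frequency
x = np.exp(-zeta_true * wn * t) * np.cos(wd * t)   # one extracted modal response

# Peaks of the response are the local maxima of the sampled signal.
pk = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

# Linear fit to log(peak amplitude) vs. time: the slope is -zeta * wn.
slope, _ = np.polyfit(t[pk], np.log(x[pk]), 1)
zeta_est = -slope / wn                       # identified damping ratio
```

    The cosine factor takes the same value at every crest, so the log of the peak amplitudes decays linearly with slope -ζωn, and the fit recovers ζ directly.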

  12. Generalized Benders’ Decomposition for topology optimization problems

    DEFF Research Database (Denmark)

    Munoz Queupumil, Eduardo Javier; Stolpe, Mathias

    2011-01-01

    This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load carrying structures. The main objective is to present a Generalized Benders' Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.

  13. Decomposition of childhood malnutrition in Cambodia.

    Science.gov (United States)

    Sunil, Thankam S; Sagna, Marguerite

    2015-10-01

    Childhood malnutrition is a major problem in developing countries; in Cambodia, it is estimated that approximately 42% of children are stunted, which is considered very high. In the present study, we examined the effects of proximate and socio-economic determinants on childhood malnutrition in Cambodia. In addition, we examined the effects of changes in these proximate determinants on childhood malnutrition between 2000 and 2005. Our analytical approach included descriptive, logistic regression and decomposition analyses. Separate analyses were estimated for the 2000 and 2005 surveys. The primary component of the difference in stunting is attributable to the rates component, indicating that the decrease in stunting is due mainly to the decrease in stunting rates between 2000 and 2005. While the majority of the differences in childhood malnutrition between 2000 and 2005 can be attributed to differences in the distribution of malnutrition determinants, differences in their effects also showed some significance. © 2013 John Wiley & Sons Ltd.
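    The rates-versus-composition logic behind such a decomposition can be shown with a two-group, Kitagawa-style example. All numbers below are invented, not the study's data:

```python
import numpy as np

# Change in an overall rate split into a "rates" component (group rates
# changed) and a "composition" component (population shares changed).
w1 = np.array([0.6, 0.4])      # period 1: population shares by group
r1 = np.array([0.50, 0.30])    # period 1: stunting rate in each group
w2 = np.array([0.5, 0.5])      # period 2: population shares
r2 = np.array([0.40, 0.25])    # period 2: stunting rates

overall_change = w2 @ r2 - w1 @ r1
rates_part = ((r2 - r1) * (w1 + w2) / 2).sum()   # due to changes in group rates
comp_part  = ((w2 - w1) * (r1 + r2) / 2).sum()   # due to changes in composition
# rates_part + comp_part reproduces overall_change exactly
```

    Averaging the weights (and rates) across the two periods makes the two components sum exactly to the overall change, with no interaction remainder; in this toy example the rates component dominates, mirroring the study's finding.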

  14. Horizontal decomposition of data table for finding one reduct

    Science.gov (United States)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct, and it needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast for bigger databases.
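    A toy sketch of the horizontal idea (not the paper's algorithm): split the rows of a small decision table, find one reduct per subtable by exhaustive search, and union them into a superreduct. The table and helper names are invented:

```python
from itertools import combinations

def consistent(rows, attrs):
    """True if no two rows agree on attrs but disagree on the decision."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def one_reduct(rows, n_attrs):
    """Smallest attribute subset preserving consistency (exhaustive toy search)."""
    for k in range(1, n_attrs + 1):
        for attrs in combinations(range(n_attrs), k):
            if consistent(rows, attrs):
                return set(attrs)
    return set(range(n_attrs))

# Rows are (a0, a1, a2, decision).
table = [(0, 0, 1, 'y'), (0, 1, 1, 'n'), (1, 0, 0, 'y'), (1, 1, 0, 'y')]
halves = [table[:2], table[2:]]               # horizontal decomposition
superreduct = set().union(*(one_reduct(h, 3) for h in halves))
```

    Here the union {a0, a1} happens to coincide with the exact reduct of the full table, illustrating the abstract's observation that such superreducts usually differ little from an exact reduct; in general a pruning pass over the union is still needed.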

  15. Solving network design problems via decomposition, aggregation and approximation

    CERN Document Server

    Bärmann, Andreas

    2016-01-01

    Andreas Bärmann develops novel approaches for the solution of network design problems as they arise in various contexts of applied optimization. Using the example of an optimal expansion of the German railway network until 2030, the author derives a tailor-made decomposition technique for multi-period network design problems. Next, he develops a general framework for the solution of network design problems via aggregation of the underlying graph structure. This approach is shown to save much computation time as compared to standard techniques. Finally, the author devises a modelling framework for the approximation of the robust counterpart under ellipsoidal uncertainty, an often-studied case in the literature. Each of these three approaches opens up a fascinating branch of research which promises a better theoretical understanding of the problem and an increasing range of solvable application settings at the same time. Contents Decomposition for Multi-Period Network Design Solving Network Design Problems via Ag...

  16. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl, obtained by methods described in the text, was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  17. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time-consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
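    As a heavily simplified sketch of projection-domain basis decomposition: if each energy bin were monoenergetic, the log-count in each bin would be linear in the basis line integrals, so three bins determine three components by a linear solve. The effective attenuation matrix below is invented; real detectors need the full spectral model and ML or empirical calibration, as the abstract notes:

```python
import numpy as np

# Invented effective basis attenuations per energy bin
# (rows: bins; columns: photoelectric, Compton, gadolinium basis).
M = np.array([[0.30, 0.20, 0.50],
              [0.10, 0.18, 0.35],
              [0.04, 0.16, 0.20]])

A_true = np.array([2.0, 5.0, 0.3])    # basis line integrals along one ray
N0 = 1e6                              # unattenuated counts per bin
counts = N0 * np.exp(-M @ A_true)     # ideal, noise-free bin counts

m = -np.log(counts / N0)              # log-measurements, linear in A
A_est = np.linalg.solve(M, m)         # decomposed basis line integrals
```

    With polychromatic bins the relation between log-counts and line integrals becomes nonlinear, which is exactly why the paper parameterizes it empirically from prior spectral knowledge instead of inverting a fixed matrix.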

  18. Dolomite decomposition under CO2

    International Nuclear Information System (INIS)

    Guerfa, F.; Bensouici, F.; Barama, S.E.; Harabi, A.; Achour, S.

    2004-01-01

    Full text: Dolomite (MgCa(CO3)2) is one of the most abundant mineral species on the surface of the planet; it occurs in sedimentary rocks. MgO, CaO and doloma (a phase mixture of MgO and CaO obtained from the mineral dolomite) based materials are attractive steel-making refractories because of their potential cost effectiveness and worldwide abundance; more recently, MgO has also been used as a protective layer in plasma screen manufacture. The crystal structure of dolomite was determined to be that of a rhombohedral carbonate, with alternating layers of Mg2+ and Ca2+ ions. It dissociates, depending on the temperature, according to the following reactions: MgCa(CO3)2 → MgO + CaO + 2CO2 ... MgCa(CO3)2 → MgO + CaCO3 + CO2 ... The latter reaction may be considered a first step for MgO production. Differential thermal analysis (DTA) was used to monitor dolomite decomposition, and X-ray diffraction (XRD) was used to elucidate the thermal decomposition of dolomite according to these reactions; this required that samples be heated to specific temperatures for given holding times. The average particle size of the dolomite powders used was 0.3 mm, and the heating temperature was 700 degrees Celsius, using various holding times (90 and 120 minutes). Under CO2, dolomite decomposed directly to CaCO3 accompanied by the formation of MgO; no evidence was offered for the formation of either CaO or MgCO3. Under air, simultaneous formation of CaCO3 and CaO accompanied dolomite decomposition.

  19. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
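    The subgame-detection idea can be sketched as grouping actions by connected components of a shared-fluent graph: two actions that never touch a common state fluent can be played independently. The mini-game below is invented, and this is not the authors' algorithm:

```python
from collections import defaultdict

# Each action is listed with the state fluents it reads or writes.
actions = {
    "move_left_paddle":  {"left_pos", "left_score"},
    "serve_left":        {"left_pos", "ball_l"},
    "move_right_paddle": {"right_pos"},
    "serve_right":       {"right_pos", "ball_r"},
}

# Union-find over fluents: fluents touched by one action are merged.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for fluents in actions.values():
    fl = list(fluents)
    for f in fl[1:]:
        union(fl[0], f)

# Actions sharing a fluent component form one candidate subgame.
subgames = defaultdict(set)
for name, fluents in actions.items():
    subgames[find(next(iter(fluents)))].add(name)
```

    The left-side and right-side actions end up in two separate components, so a game-tree search could explore each candidate subgame independently.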

  20. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  1. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  2. Decomposition in pelagic marine ecosystems

    International Nuclear Information System (INIS)

    Lucas, M.I.

    1986-01-01

    During the decomposition of plant detritus, complex microbial successions develop which are dominated in the early stages by a number of distinct bacterial morphotypes. The microheterotrophic community rapidly becomes heterogeneous and may include cyanobacteria, fungi, yeasts and bactivorous protozoans. Microheterotrophs in the marine environment may have a biomass comparable to that of all other heterotrophs, and their significance as a resource to higher trophic orders, and in the regeneration of nutrients, particularly nitrogen, that support 'regenerated' primary production, has aroused both attention and controversy. Numerous methods have been employed to measure heterotrophic bacterial production and activity. The most widely used involve estimates of 14C-glucose uptake; the frequency of dividing cells; the incorporation of 3H-thymidine; and exponential population growth in predator-reduced filtrates. Recent attempts to model decomposition processes and C and N fluxes in pelagic marine ecosystems are described. This review examines the most sensitive components and predictions of the models, with particular reference to estimates of bacterial production, net growth yield and predictions of N cycling determined by 15N methodology. Direct estimates of nitrogen (and phosphorus) flux through phytoplanktonic and bacterioplanktonic communities using 15N (and 32P) tracer methods are likely to provide more realistic measures of nitrogen flow through planktonic communities

  3. Infrared multiphoton absorption and decomposition

    International Nuclear Information System (INIS)

    Evans, D.K.; McAlpine, R.D.

    1984-01-01

    The discovery of infrared laser induced multiphoton absorption (IRMPA) and decomposition (IRMPD) by Isenor and Richardson in 1971 generated a great deal of interest in these phenomena. This interest was increased with the discovery by Ambartzumian, Letokhov, Ryabov and Chekalin that isotopically selective IRMPD was possible. One of the first speculations about these phenomena was that it might be possible to excite a particular mode of a molecule with the intense infrared laser beam and cause decomposition or chemical reaction by channels which do not predominate thermally, thus providing new synthetic routes for complex chemicals. The potential applications to isotope separation and novel chemistry stimulated efforts to understand the underlying physics and chemistry of these processes. At ICOMP I in 1977 and at ICOMP II in 1980, several authors reviewed the current understanding of IRMPA and IRMPD as well as the particular aspect of isotope separation. There continues to be a great deal of effort devoted to understanding IRMPA and IRMPD, and we briefly review some aspects of these efforts with particular emphasis on progress since ICOMP II. 31 references

  4. Decomposition of Diethylstilboestrol in Soil

    DEFF Research Database (Denmark)

    Gregers-Hansen, Birte

    1964-01-01

    The rate of decomposition of DES-monoethyl-1-C14 in soil was followed by measurement of C14O2 released. From 1.6 to 16% of the added C14 was recovered as C14O2 during 3 months. After six months as much as 12 to 28 per cent was released as C14O2. Determination of C14 in the soil samples after the e... ... not inhibit the CO2 production from the soil. Experiments with γ-sterilized soil indicated that enzymes present in the soil are able to attack DES.

  5. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double-bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double-bounce, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with the Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from the SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  6. Public Pensions as the Great Equalizer? Decomposition of Old-Age Income Inequality in South Korea, 1998-2010.

    Science.gov (United States)

    Hwang, Sun-Jae

    2016-01-01

    This study examines the redistributive effects of public pensions on old-age income inequality, testing whether public pensions function as the "great equalizer." Unlike the well-known alleviating effect of public pensions on old-age poverty, the effects of public pensions on old-age income inequality more generally have been less examined, particularly outside Western countries. Using repeated cross-sectional data of elderly Koreans between 1998 and 2010, we applied Gini coefficient decomposition to measure the impact of various income sources on old-age inequality, particularly focusing on public pensions. Our findings show that, contrary to expectations, public pension benefits have inequality-intensifying effects on old-age income in Korea, even countervailing the alleviating effects of public assistance. This rather surprising result is due to the specific institutional context of the Korean public pension system and suggests that the "structuring" of welfare policies could be as important as their expansion for the elderly, particularly for developing welfare states.
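    Source-by-source Gini decompositions of the kind applied here are commonly computed with the Lerman-Yitzhaki covariance formula, in which each income source contributes 2·cov(y_k, F(y))/μ and the contributions sum exactly to the overall Gini. The sketch below uses synthetic incomes, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
labor   = rng.gamma(2.0, 500, 1000)    # synthetic labor income
pension = rng.gamma(1.5, 300, 1000)    # synthetic public pension income
total = labor + pension

def cov(a, b):
    """Population covariance."""
    return ((a - a.mean()) * (b - b.mean())).mean()

rank = total.argsort().argsort()
F = (rank + 1) / len(total)            # empirical CDF ranks of total income
gini = 2 * cov(total, F) / total.mean()

contrib = {name: 2 * cov(y, F) / total.mean()
           for name, y in [("labor", labor), ("pension", pension)]}
```

    Because covariance is linear in its first argument, the source contributions add up to the total Gini exactly; a source whose contribution share exceeds its income share (as the study finds for Korean public pensions) is inequality-intensifying.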

  7. Decomposition kinetics of plutonium hydride

    Energy Technology Data Exchange (ETDEWEB)

    Haschke, J.M.; Stakebake, J.L.

    1979-01-01

    Kinetic data for the decomposition of PuH1.95 provide insight into a possible mechanism for the hydriding and dehydriding reactions of plutonium. The fact that the rate of the hydriding reaction, K_H, is proportional to P^1/2 and the rate of the dehydriding process, K_D, is inversely proportional to P^1/2 suggests that the forward and reverse reactions proceed by opposite paths of the same mechanism. The P^1/2 dependence of hydrogen solubility in metals is characteristic of the dissociative absorption of hydrogen; i.e., the reactive species is atomic hydrogen. It is reasonable to assume that the rates of the forward and reverse reactions are controlled by the surface concentration of atomic hydrogen, (H_s); that K_H = c'(H_s); and that K_D = c/(H_s), where c' and c are proportionality constants. For this surface model, the pressure dependence of K_D is related to (H_s) by the reaction (H_s) <=> 1/2 H2(g) and by its equilibrium constant K_e = (H2)^1/2/(H_s). In the pressure range of ideal gas behavior, (H_s) = K_e^-1 (RT)^-1/2 P^1/2, and the decomposition rate is given by K_D = c K_e (RT)^1/2 P^-1/2. For an analogous treatment of the hydriding process with this model, it can readily be shown that K_H = c' K_e^-1 (RT)^-1/2 P^1/2. The inverse pressure dependence and direct temperature dependence of the decomposition rate are correctly predicted by this mechanism, which is most consistent with the observed behavior of the Pu-H system.
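    For readability, the surface-model argument can be restated compactly. This is an editorial reconstruction of the record's extraction-damaged formulas, following the ideal-gas substitution [H2] = P/(RT), not text from the original abstract:

```latex
% Surface model for PuH_{1.95} decomposition (reconstruction)
\begin{align*}
  \mathrm{H_s} \;\rightleftharpoons\; \tfrac{1}{2}\,\mathrm{H_2(g)},
  \qquad K_e = \frac{[\mathrm{H_2}]^{1/2}}{[\mathrm{H_s}]}, \\
  [\mathrm{H_2}] = \frac{P}{RT}
  \;\Longrightarrow\;
  [\mathrm{H_s}] = K_e^{-1}(RT)^{-1/2}P^{1/2}, \\
  K_H = c'[\mathrm{H_s}] = c'K_e^{-1}(RT)^{-1/2}P^{1/2},
  \qquad
  K_D = \frac{c}{[\mathrm{H_s}]} = cK_e(RT)^{1/2}P^{-1/2}.
\end{align*}
```

    The last line makes the claimed proportionalities explicit: K_H grows as P^1/2 while K_D falls as P^-1/2, with both rates controlled by the same surface concentration of atomic hydrogen.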

  8. Fate of mercury in tree litter during decomposition

    Directory of Open Access Journals (Sweden)

    A. K. Pokharel

    2011-09-01

    Full Text Available We performed a controlled laboratory litter incubation study to assess changes in dry mass, carbon (C mass and concentration, mercury (Hg mass and concentration, and stoichiometric relations between elements during decomposition. Twenty-five surface litter samples each, collected from four forest stands, were placed in incubation jars open to the atmosphere, and were harvested sequentially at 0, 3, 6, 12, and 18 months. Using a mass balance approach, we observed significant mass losses of Hg during decomposition (5 to 23 % of initial mass after 18 months, which we attribute to gaseous losses of Hg to the atmosphere through a gas-permeable filter covering incubation jars. Percentage mass losses of Hg generally were less than observed dry mass and C mass losses (48 to 63 % Hg loss per unit dry mass loss, although one litter type showed similar losses. A field control study using the same litter types exposed at the original collection locations for one year showed that field litter samples were enriched in Hg concentrations by 8 to 64 % compared to samples incubated for the same time period in the laboratory, indicating strong additional sorption of Hg in the field likely from atmospheric deposition. Solubility of Hg, assessed by exposure of litter to water upon harvest, was very low (<0.22 ng Hg g−1 dry mass and decreased with increasing stage of decomposition for all litter types. Our results indicate potentially large gaseous emissions, or re-emissions, of Hg originally associated with plant litter upon decomposition. Results also suggest that Hg accumulation in litter and surface layers in the field is driven mainly by additional sorption of Hg, with minor contributions from "internal" accumulation due to preferential loss of C over Hg. Litter types showed highly species-specific differences in Hg levels during decomposition suggesting that emissions, retention, and sorption of Hg are dependent on litter type.

  9. Spinodal decomposition in fluid mixtures

    International Nuclear Information System (INIS)

    Kawasaki, Kyozi; Koga, Tsuyoshi

    1993-01-01

    We study the late stage dynamics of spinodal decomposition in binary fluids by the computer simulation of the time-dependent Ginzburg-Landau equation. We obtain a temporary linear growth law of the characteristic length of domains in the late stage. This growth law has been observed in many real experiments of binary fluids and indicates that the domain growth proceeds by the flow caused by the surface tension of interfaces. We also find that the dynamical scaling law is satisfied in this hydrodynamic domain growth region. By comparing the scaling functions for fluids with that for the case without hydrodynamic effects, we find that the scaling functions for the two systems are different. (author)

  10. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. 
Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; et al.

    2018-01-01

    Through litter decomposition enormous amounts of carbon is emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  11. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  12. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  13. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  14. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that HONO is formed first. The ...

  15. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges both in computing accurate solutions and in estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples.

  16. Decomposition Methods For a Piv Data Analysis with Application to a Boundary Layer Separation Dynamics

    OpenAIRE

    Václav URUBA

    2010-01-01

    Separation of the turbulent boundary layer (BL) on a flat plate under an adverse pressure gradient was studied experimentally using a Time-Resolved PIV technique. The results of a spatio-temporal analysis of the flow field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) technique and its extension, BOD (Bi-Orthogonal Decomposition), are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes...

  17. Detailed RIF decomposition with selection : the gender pay gap in Italy

    OpenAIRE

    Töpfer, Marina

    2017-01-01

    In this paper, we estimate the gender pay gap along the wage distribution using a detailed decomposition approach based on unconditional quantile regressions. Non-randomness of the sample leads to biased and inconsistent estimates of the wage equation as well as of the components of the wage gap. Therefore, the method is extended to account for sample selection problems. The decomposition is conducted by using Italian microdata. Accounting for labor market selection may be particularly rele...
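
    The building block of such a detailed decomposition is the recentered influence function (RIF) of a quantile, RIF(y; q_τ) = q_τ + (τ − 1{y ≤ q_τ})/f_Y(q_τ), as introduced by Firpo, Fortin and Lemieux. Its sample mean recovers the quantile itself, which is what lets a quantile be decomposed like a mean via OLS on the RIF. Below is a hedged NumPy sketch of this building block only (the bandwidth choice, data, and function name are illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def rif_quantile(y, tau):
        """Recentered influence function of the tau-th quantile (Firpo et al. style)."""
        y = np.asarray(y, dtype=float)
        q = np.quantile(y, tau)
        # Gaussian kernel density estimate of f_Y at the quantile (Silverman bandwidth).
        h = 1.06 * y.std() * y.size ** (-0.2)
        f_q = np.mean(np.exp(-0.5 * ((q - y) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
        return q + (tau - (y <= q)) / f_q

    # Illustrative wage-like data: the mean of the RIF approximates the quantile,
    # so quantile gaps can be decomposed with mean-based (Oaxaca-style) regressions.
    rng = np.random.default_rng(42)
    wages = rng.lognormal(mean=2.0, sigma=0.5, size=5000)
    rif = rif_quantile(wages, 0.5)
    ```

    In the full method, the RIF values for each group are regressed on covariates, and the gap at each quantile is split into composition and wage-structure components exactly as in a mean decomposition.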

  18. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

    The aim of this work is to understand the mechanisms involved in the decomposition of glasses by water and the consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass, R7T7. Because the chemical composition of this glass is very complicated, it is difficult to know the influence of the different elements on the decomposition kinetics and on the resulting morphology, since several elements behave in the same way. Glasses with a simplified composition (only 5 elements) were therefore synthesized. The morphological and structural characteristics of these glasses are given. They were then decomposed by water. The leaching curves do not reflect the decomposition kinetics but rather the solubility of the different elements at each moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) solubilization of heavy elements. Two types of decomposition layer have also been revealed, according to the glass's heavy-element content. (O.M.)

  19. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
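
    The two operators can be illustrated concisely in NumPy. This is a sketch under the standard definitions (n-mode products for the Tucker operator, sums of column outer products for the Kruskal operator), not Kolda's notation or code; all function names here are ours:

    ```python
    import numpy as np

    def mode_n_multiply(T, M, n):
        """Multiply tensor T by matrix M along mode n (unfold, multiply, refold)."""
        T = np.moveaxis(T, n, 0)
        shape = T.shape
        out = M @ T.reshape(shape[0], -1)
        return np.moveaxis(out.reshape((M.shape[0],) + shape[1:]), 0, n)

    def tucker_operator(G, matrices):
        """Tucker operator: n-mode product of core G with every factor matrix."""
        T = G
        for n, M in enumerate(matrices):
            T = mode_n_multiply(T, M, n)
        return T

    def kruskal_operator(matrices):
        """Kruskal operator: sum of outer products of the r-th columns of each factor."""
        R = matrices[0].shape[1]
        T = np.zeros(tuple(M.shape[0] for M in matrices))
        for r in range(R):
            outer = matrices[0][:, r]
            for M in matrices[1:]:
                outer = np.multiply.outer(outer, M[:, r])
            T += outer
        return T
    ```

    A useful sanity check is that the Kruskal operator equals the Tucker operator applied to a superdiagonal identity core, which is exactly the relationship between PARAFAC and Tucker.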

  20. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old-field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old-field communities. However, decomposition rates in no-till were not significantly different from those in old-field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future ...

  1. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2017-11-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  3. Nutrient-enhanced decomposition of plant biomass in a freshwater wetland

    Science.gov (United States)

    Bodker, James E.; Turner, Robert Eugene; Tweel, Andrew; Schulz, Christopher; Swarzenski, Christopher M.

    2015-01-01

    We studied soil decomposition in a Panicum hemitomon (Schultes)-dominated freshwater marsh located in southeastern Louisiana that was unambiguously changed by secondarily-treated municipal wastewater effluent. We used four approaches to evaluate how belowground biomass decomposition rates vary under different nutrient regimes in this marsh. The results of laboratory experiments demonstrated how nutrient enrichment enhanced the loss of soil or plant organic matter by 50%, and increased gas production. An experiment demonstrated that nitrogen, not phosphorus, limited decomposition. Cellulose decomposition at the field site was higher in the flowfield of the introduced secondarily treated sewage water, and the quality of the substrate (% N or % P) was directly related to the decomposition rates. We therefore rejected the null hypothesis that nutrient enrichment had no effect on the decomposition rates of these organic soils. In response to nutrient enrichment, plants respond through biomechanical or structural adaptations that alter the labile characteristics of plant tissue. These adaptations eventually change litter type and quality (where the marsh survives) as the % N content of plant tissue rises and is followed by even higher decomposition rates of the litter produced, creating a positive feedback loop. Marsh fragmentation will increase as a result. The assumptions and conditions underlying the use of unconstrained wastewater flow within natural wetlands, rather than controlled treatment within the confines of constructed wetlands, are revealed in the loss of previously sequestered carbon, habitat, public use, and other societal benefits.

  4. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Abdeldjalil Aïssa-El-Bey

    2007-03-01

    This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source using vector clustering). For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.
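
    The parametric analysis step can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: a minimal, noiseless ESPRIT-style estimator for damped complex exponentials, where the function name, window length, and synthetic signal are all chosen here for illustration:

    ```python
    import numpy as np

    def esprit_damped_modes(x, n_modes, window=80):
        """Estimate poles z_k of x[n] = sum_k a_k * z_k**n via shift invariance."""
        rows = len(x) - window + 1
        H = np.array([x[i:i + window] for i in range(rows)]).T  # Hankel-structured data
        U, _, _ = np.linalg.svd(H, full_matrices=False)
        Us = U[:, :n_modes]                                     # signal subspace
        # Shift invariance: Us[1:] ~= Us[:-1] @ Phi; eigenvalues of Phi are the poles.
        Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
        return np.linalg.eigvals(Phi)

    # Two damped complex exponentials ("modal components") to recover.
    n = np.arange(200)
    z_true = [np.exp(-0.01 + 2j * np.pi * 0.10), np.exp(-0.02 + 2j * np.pi * 0.23)]
    x = sum(z ** n for z in z_true)
    z_est = esprit_damped_modes(x, n_modes=2)
    freqs = np.sort(np.angle(z_est) / (2 * np.pi))   # normalized frequencies
    damping = np.sort(-np.log(np.abs(z_est)))        # per-sample damping factors
    ```

    In the full method, the extracted components would then be grouped by source with vector clustering, which is the synthesis step described above.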

  6. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed by an exchange reaction of the perchloric acid appearing in the decomposition with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat-conductivity detector. The exchange reaction yielding CO2 is quantitative; it is not the limiting step and it does not distort the kinetics of the perchlorate decomposition process. The solid decomposition products were studied by infrared and NMR spectroscopy, X-ray analysis, thermography and chemical analysis. The suggested decomposition mechanism involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick, syrup-like compound that crystallizes after long storage.

  7. Thermal decomposition of lanthanide and actinide tetrafluorides

    International Nuclear Information System (INIS)

    Gibson, J.K.; Haire, R.G.

    1988-01-01

    The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides TbF4, CmF4, and AmF4 have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to congruently sublime as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs

  8. Decomposition of lake phytoplankton. 1

    International Nuclear Information System (INIS)

    Hansen, L.; Krog, G.F.; Soendergaard, M.

    1986-01-01

    Short-term (24 h) and long-term (4-6 d) decomposition of phytoplankton cells was investigated under in situ conditions in four Danish lakes. Carbon-14-labelled, dead algae were exposed to sterile or natural lake water, and the dynamics of cell lysis and bacterial utilization of the leached products were followed. The lysis process was dominated by an initial fast water extraction. Within 2 to 4 h, from 4 to 34% of the labelled carbon leached from the algal cells. After 24 h, from 11 to 43% of the initial particulate carbon was found as dissolved carbon in the experiments with sterile lake water; after 4 to 6 d the leaching was from 67 to 78% of the initial 14C. The leached compounds were utilized by bacteria. A comparison of the incubations using sterile and natural water showed that a mean of 71% of the lysis products was metabolized by microorganisms within 24 h. In two experiments the uptake rate equalled the leaching rate. (author)

  9. Decomposition of lake phytoplankton. 2

    International Nuclear Information System (INIS)

    Hansen, L.; Krog, G.F.; Soendergaard, M.

    1986-01-01

    The lysis process of phytoplankton was followed in 24 h incubations in three Danish lakes. By means of gel-chromatography it was shown that the dissolved carbon leaching from different algal groups differed in molecular weight composition. Three distinct molecular weight classes (>10,000; 700 to 10,000 and < 700 Daltons) leached from blue-green algae in almost equal proportion. The lysis products of spring-bloom diatoms included only the two smaller size classes, and the molecules between 700 and 10,000 Daltons dominated. Measurements of cell content during decomposition of the diatoms revealed polysaccharides and low molecular weight compounds to dominate the lysis products. No proteins were leached during the first 24 h after cell death. By incubating the dead algae in natural lake water, it was possible to detect a high bacterial affinity towards molecules between 700 and 10,000 Daltons, although the other size classes were also utilized. Bacterial transformation of small molecules to larger molecules could be demonstrated. (author)

  10. Thermal decomposition of titanium deuteride thin films

    International Nuclear Information System (INIS)

    Malinowski, M.E.

    1983-01-01

    The thermal desorption spectra of deuterium from essentially clean titanium deuteride thin films were measured by ramp heating the films in vacuum; the film thicknesses ranged from 20 to 220 nm and the ramp rates varied from 0.5 to about 3 °C s⁻¹. Each desorption spectrum consisted of a low, nearly constant rate at low temperatures followed by a highly peaked rate at higher temperatures. The cleanliness and thinness of the films permitted a description of desorption rates in terms of a simple phenomenological model based on detailed balancing, in which the low-temperature pressure-composition characteristics of the two-phase (α-(α+β)-β) region of the Ti-D system were used as input data. At temperatures below 340 °C the model predictions were in excellent agreement with the experimentally measured desorption spectra. Interpretations of the spectra in terms of 'decomposition trajectories' are possible using this model, and this approach is also used to explain deviations of the spectra from the model at temperatures of 340 °C and above. (Auth.)

  11. Structure for the decomposition of safeguards responsibilities

    International Nuclear Information System (INIS)

    Dugan, V.L.; Chapman, L.D.

    1977-01-01

    A major mission of safeguards is to protect against the use of nuclear materials by adversaries to harm society. A hierarchical structure of safeguards responsibilities and activities to assist in this mission is defined. The structure begins with the definition of international or multi-national safeguards and continues through domestic, regional, and facility safeguards. Facility safeguards are decomposed into physical protection and material control responsibilities. In addition, in-transit safeguards systems are considered. An approach to the definition of performance measures for a set of Generic Adversary Action Sequence Segments (GAASS) is illustrated. These GAASSs begin outside facility boundaries and terminate at some adversary objective which could lead to eventual safeguards risks and societal harm. Societal harm is primarily the result of an adversary who is successful in the theft of special nuclear material or in the sabotage of vital systems which results in the release of material in situ. Within the facility safeguards system, GAASSs are defined in terms of authorized and unauthorized adversary access to materials and components, acquisition of material, unauthorized removal of material, and the compromise of vital components. Each GAASS defines a set of 'paths' (an ordered set of physical protection components), and each component provides one or more physical protection 'functions' (detection, assessment, communication, delay, neutralization). Functional performance is then developed based upon component design features, the environmental factors, and the adversary attributes. An example of this decomposition is presented.

  13. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    Science.gov (United States)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
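
    The decomposition idea can be sketched numerically: a multi-relaxation spectrum is modeled as a sum of single-relaxation (Debye-type) terms, and once candidate relaxation times are fixed, the relaxation strengths follow linearly from measurements at 2N frequencies. A hedged toy sketch follows (the functional form and all numbers are illustrative, not the paper's gas data):

    ```python
    import numpy as np

    def single_relaxation(f, tau):
        # Normalized Debye-type dispersion term, peaking where 2*pi*f*tau = 1.
        w = 2 * np.pi * f * tau
        return w / (1 + w**2)

    def multi_relaxation(f, taus, strengths):
        # A multi-relaxation spectrum as the sum of interior single processes.
        return sum(s * single_relaxation(f, t) for t, s in zip(taus, strengths))

    # Synthetic two-process "gas": recover the strengths from 2N = 4 frequency samples.
    taus_true = np.array([1e-6, 5e-5])
    strengths_true = np.array([0.3, 0.7])
    f_meas = np.array([2e3, 2e4, 2e5, 2e6])
    alpha_meas = multi_relaxation(f_meas, taus_true, strengths_true)
    # Linear in the strengths once the relaxation times are fixed:
    basis = np.column_stack([single_relaxation(f_meas, t) for t in taus_true])
    strengths_fit, *_ = np.linalg.lstsq(basis, alpha_meas, rcond=None)
    ```

    The paper's method additionally recovers the relaxation times themselves from the measurements; this sketch shows only the linear-superposition structure that the decomposition rests on.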

  14. 3D quantitative analysis of early decomposition changes of the human face.

    Science.gov (United States)

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2018-03-01

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach to tracking the early decomposition changes of a single cadaver in controlled environmental conditions, summarizing the changes with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and could potentially allow direct comparisons of antemortem and postmortem 3D scans.
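
    As a hedged illustration of an RMS-based surface comparison (the paper's exact registration and meshing pipeline is not described here), an RMS deviation between two point clouds can be computed from nearest-neighbour distances:

    ```python
    import numpy as np

    def rms_deviation(scan_a, scan_b):
        """RMS of nearest-neighbour distances from each point of scan_a to scan_b."""
        # Brute-force pairwise distances; fine for small clouds (use a KD-tree at scale).
        d = np.linalg.norm(scan_a[:, None, :] - scan_b[None, :, :], axis=2)
        return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

    # Toy "scans": a flat grid of points, and the same grid lifted by 0.1 units,
    # standing in for two facial surface scans taken a week apart.
    grid = np.array([[x, y, 0.0] for x in range(4) for y in range(4)], dtype=float)
    lifted = grid + np.array([0.0, 0.0, 0.1])
    ```

    With registered scans, a growing RMS over successive weeks is the quantity the report correlates with time since death.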

  15. Model-free method for isothermal and non-isothermal decomposition kinetics analysis of PET sample

    International Nuclear Information System (INIS)

    Saha, B.; Maiti, A.K.; Ghoshal, A.K.

    2006-01-01

    Pyrolysis, one possible alternative to recover valuable products from waste plastics, has recently been the subject of renewed interest. In the present study, an isoconversional method, the Vyazovkin model-free approach, is applied to study the non-isothermal decomposition kinetics of waste PET samples, using various temperature-integral approximations, namely the Coats-Redfern, Gorbachev, and Agrawal-Sivasubramanian approximations, as well as direct integration (recursive adaptive Simpson quadrature scheme), to analyze the decomposition kinetics. The results show that the activation energy (Eα) is a weak but increasing function of conversion (α) for non-isothermal decomposition and a strong, decreasing function of conversion for isothermal decomposition. This indicates the possible existence of nucleation, nuclei-growth and gas-diffusion mechanisms during non-isothermal pyrolysis, and of nucleation and gas-diffusion mechanisms during isothermal pyrolysis. The optimum Eα dependencies on α obtained for the non-isothermal data showed a similar nature for all types of temperature-integral approximations.
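
    A minimal sketch of the isoconversional idea, using the differential (Friedman) variant rather than the paper's Vyazovkin integral method: at a fixed conversion α, the slope of ln(dα/dt) against 1/T across several runs gives -Eα/R. All numbers below are synthetic:

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def friedman_activation_energy(temps_K, rates):
        """Slope of ln(dα/dt) vs 1/T at fixed conversion gives -Eα/R (Friedman method)."""
        slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(rates), 1)
        return -slope * R

    # Synthetic check: rates generated from a known Eα = 150 kJ/mol.
    E_true = 150e3
    A_f_alpha = 1e12  # lumped pre-exponential times f(α) at this conversion (arbitrary)
    T = np.array([600.0, 620.0, 640.0, 660.0])
    rate = A_f_alpha * np.exp(-E_true / (R * T))
    E_est = friedman_activation_energy(T, rate)
    ```

    Repeating this fit at many conversion levels yields the Eα(α) dependence whose shape the study uses to infer the reaction mechanism.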

  16. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    Science.gov (United States)

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metrics used in SoG, specifically the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that SoG with TC (ToG) and SoG with GI (GoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
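
    The two sparsity metrics are easy to state: the Gini index of the sorted magnitudes (the same Gini measure used elsewhere for inequality, applied here as a sparsity score) and the Tamura coefficient, the square root of the ratio of standard deviation to mean. A hedged NumPy sketch of both, using the commonly stated formulas rather than the authors' code:

    ```python
    import numpy as np

    def gini_index(x):
        """Gini index as a sparsity measure: ~0 for uniform data, ->1 for very sparse."""
        a = np.sort(np.abs(np.ravel(x)))
        n = a.size
        k = np.arange(1, n + 1)
        return 1.0 - 2.0 * np.sum((a / a.sum()) * (n - k + 0.5) / n)

    def tamura_coefficient(x):
        """TC = sqrt(std/mean) of non-negative data, e.g. a gradient modulus image."""
        a = np.abs(np.ravel(x))
        return float(np.sqrt(a.std() / a.mean()))

    # A sparse signal scores much higher than a uniform one on the Gini index.
    uniform = np.ones(100)
    sparse = np.zeros(100)
    sparse[3] = 1.0
    ```

    In the SoG criterion, either metric is evaluated on the gradient modulus of the refocused hologram at each candidate depth, and the depth maximizing the score is taken as the focus.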

  17. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    International Nuclear Information System (INIS)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T.

    2014-01-01

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N2. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire the spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N2, NH3, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics to confirm the identity of the species observed in experiments and to identify the elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N2.

  18. A Decomposition Theorem for Finite Automata.

    Science.gov (United States)

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  19. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)
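
    The flavor of the method, subdividing the spatial domain and solving independently on each subdomain with current boundary values, can be sketched on a simpler stand-in problem. This is a hedged illustration using a 1D diffusion equation and an overlapping alternating Schwarz iteration, not the SI/DSA transport sweeps of the paper:

    ```python
    import numpy as np

    # Model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, on a uniform grid,
    # solved by alternating solves on two overlapping subdomains.
    N = 41
    h = 1.0 / (N - 1)
    u = np.zeros(N)

    def solve_subdomain(u, lo, hi):
        """Dirichlet solve of -u'' = 1 on grid slice [lo, hi], BCs from the iterate."""
        m = hi - lo - 1  # number of interior unknowns in this subdomain
        A = (np.diag(np.full(m, 2.0))
             - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = np.ones(m)
        b[0] += u[lo] / h**2
        b[-1] += u[hi] / h**2
        u[lo + 1:hi] = np.linalg.solve(A, b)

    for _ in range(60):                  # converges because the subdomains overlap
        solve_subdomain(u, 0, 25)        # left subdomain; right BC from the iterate
        solve_subdomain(u, 15, 40)       # right subdomain; left BC from updated values

    x = np.linspace(0.0, 1.0, N)
    exact = 0.5 * x * (1.0 - x)          # analytic solution of -u'' = 1, zero BCs
    ```

    As in the paper's observation, convergence degrades as the shared region shrinks; here, shrinking the overlap slows the Schwarz iteration in the same spirit as optically thin subdomains slow the transport iteration.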

  20. Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils

    Energy Technology Data Exchange (ETDEWEB)

    Kautz, Elizabeth J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jana, Saumyadeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Devaraj, Arun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lavender, Curt A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sweet, Lucas E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Joshi, Vineet V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-07-31

    This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. Quantitative description of the microstructure makes it possible to relate microstructure to processing parameters (time, temperature, deformation).

  1. Joint Matrices Decompositions and Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.

    2014-01-01

    Roč. 31, č. 3 (2014), s. 34-43 ISSN 1053-5888 R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : joint matrices decomposition * tensor decomposition * blind source separation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 5.852, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf

  2. Review on Thermal Decomposition of Ammonium Nitrate

    Science.gov (United States)

    Chaturvedi, Shalini; Dave, Pragnesh N.

    2013-01-01

    In this review, data from the literature on the thermal decomposition of ammonium nitrate (AN) and the effect of additives on its thermal decomposition are summarized. The effect of additives such as oxides, cations, inorganic acids, organic compounds, phase-stabilized CuO, etc., is discussed. The effect of an additive mainly occurs at the exothermic peak of pure AN, in a temperature range of 140°C to 200°C.

  3. Operator Decomposition Framework for Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Khalik, Hany S.; Wang, Congjian; Bang, Young Suk [North Carolina State University, Raleigh (United States)

    2012-05-15

    This summary describes a new framework for perturbation theory intended to improve its performance, in terms of the associated computational cost and the complexity of implementation, for routine reactor calculations in support of design, analysis, and regulation. Since its first introduction in reactor analysis by Wigner, perturbation theory has assumed an aura of sophistication with regard to its implementation and its capabilities. Only a few reactor physicists, typically mathematically proficient, have contributed to its development, with the general body of the nuclear engineering community remaining unaware of its current status, capabilities, and challenges. Given its perceived sophistication and the small body of community users, the application of perturbation theory has been limited to investigatory analyses only. It is safe to say that the nuclear community is split into two groups: a small one which understands the theory, and a much bigger group with the perceived notion that perturbation theory is nothing but a fancy mathematical approach that has very little use in practice. Over the past three years, research has pursued two goals. First, reduce the computational cost of perturbation theory in order to enable its use for routine reactor calculations. Second, expose some of the myths about perturbation theory and present it in a form that is simple and relatable in order to stimulate the interest of nuclear practitioners, especially those who are currently working on the development of next-generation reactor design and analysis tools. The operator decomposition approach has its roots in linear algebra and can be easily understood by code developers, especially those involved in the design of iterative numerical solution strategies.

  4. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces matched with appropriate meshing algorithms, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is a part of the feature based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination to extract decomposition features; (2) Cutting Surfaces Generation to form the ''tailored'' cutting surfaces; (3) Body Decomposition to get the imprinted volumes; and (4) Meshing Algorithm Assignment to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm that is extended to be more general. Results are demonstrated over several parts with complicated topology and geometry.

  5. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach to monitoring tasks, such as applications for the detection of abrupt changes.
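
    A minimal sketch of the sifting idea behind EMD (EEMD additionally averages IMFs over many noise-perturbed copies of the signal, which is omitted here). Linear envelopes stand in for the usual cubic splines, and the tolerance and iteration caps are simplifications; the reconstruction identity signal = ΣIMFs + residue holds by construction:

```python
import numpy as np

def sift(x, t, max_iter=50, tol=1e-6):
    """Extract one IMF: repeatedly subtract the mean of the extrema envelopes."""
    h = x.copy()
    for _ in range(max_iter):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            return None                      # too few extrema: h is the residue/trend
        upper = np.interp(t, t[maxima], h[maxima])   # linear upper envelope
        lower = np.interp(t, t[minima], h[minima])   # linear lower envelope
        mean = 0.5 * (upper + lower)
        if np.sum(mean**2) < tol * np.sum(h**2):
            return h - mean
        h = h - mean
    return h

def emd(x, t, max_imfs=8):
    """Decompose x into intrinsic mode functions plus a residue (trend)."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(residue, t)
        if imf is None:
            break
        imfs.append(imf)
        residue = residue - imf
    return np.array(imfs), residue

# toy SITS-like series: seasonal oscillation plus a linear trend
t = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 12 * t) + 2.0 * t
imfs, trend = emd(signal, t)
```

    Selecting the low-order IMFs recovers the seasonal oscillation while the residue tracks the trend, which is the separation the paper exploits.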

  6. A Structural Model Decomposition Framework for Systems Health Management

    Science.gov (United States)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belarmino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
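
    A toy illustration of the structural idea: equations that share no variables, directly or transitively, can be split into independent submodels. The model and names below are invented for illustration; the paper's framework operates on richer structural information than this connected-components sketch:

```python
# toy structural model: each equation lists the variables it involves
model = {
    "e1": {"x1", "x2"},
    "e2": {"x2", "x3"},
    "e3": {"x4", "x5"},
    "e4": {"x5", "x6"},
}

def decompose(model):
    """Group equations into independent submodels: two equations belong to the
    same submodel if they are linked by shared variables (connected components
    of the equation-variable bipartite graph)."""
    remaining = set(model)
    submodels = []
    while remaining:
        seed = remaining.pop()
        group, variables = {seed}, set(model[seed])
        changed = True
        while changed:
            changed = False
            for eq in list(remaining):
                if model[eq] & variables:   # shares a variable with the group
                    group.add(eq)
                    variables |= model[eq]
                    remaining.remove(eq)
                    changed = True
        submodels.append(sorted(group))
    return sorted(submodels)
```

    Here `decompose` splits the four equations into two submodels, {e1, e2} and {e3, e4}, each of which could then be monitored or estimated independently.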

  8. Proof of the 1-factorization and Hamilton decomposition conjectures

    CERN Document Server

    Csaba, Béla; Lo, Allan; Osthus, Deryk; Treglown, Andrew

    2016-01-01

    In this paper the authors prove the following results (via a unified approach) for all sufficiently large n: (i) [1-factorization conjecture] Suppose that n is even and D ≥ 2⌈n/4⌉ − 1. Then every D-regular graph G on n vertices has a decomposition into perfect matchings. Equivalently, χ′(G) = D. (ii) [Hamilton decomposition conjecture] Suppose that D ≥ ⌊n/2⌋. Then every D-regular graph G on n vertices has a decomposition into Hamilton cycles and at most one perfect matching. (iii) [Optimal packings of Hamilton cycles] Suppose that G is a graph on n vertices with minimum degree δ ≥ n/2. Then G contains at least reg_even(n, δ)/2 ≥ (n − 2)/8 edge-disjoint Hamilton cycles. Here reg_even(n, δ) denotes the degree of the largest even-regular spanning subgraph one can guarantee in a graph on n vertices with minimum degree δ. (i) was first explicitly stated by Chetwynd and Hilton. (ii) and the special case δ = ⌈n/2⌉ of (iii) answe...

  9. Microbiological decomposition of bagasse after radiation pasteurization

    International Nuclear Information System (INIS)

    Ito, Hitoshi; Ishigaki, Isao

    1987-01-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with addition of some amount of inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulosic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi, and T. viride were used for comparison of decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by Pleurotus species or C. cinereus. Other species of mushrooms such as L. edodes had little decomposition ability even after alkali treatment. Radiation treatment at 10 kGy could not enhance the decomposition of bagasse compared with steam treatment, whereas higher doses of radiation slightly enhanced the decomposition of crude fibers by microorganisms. (author)

  10. Decomposition of tetrachloroethylene by ionizing radiation

    International Nuclear Information System (INIS)

    Hakoda, T.; Hirota, K.; Hashimoto, S.

    1998-01-01

    Decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information on the treatment of industrial off-gas. Model gases (air containing chloroethenes) were confined in batch reactors and irradiated with electron beams and gamma rays. The G-values of decomposition decreased in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene in electron beam irradiation, and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene in gamma ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, G-values of decomposition in EB irradiation increased with the number of chlorine atoms in a molecule, while those in gamma ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene in EB irradiation was the largest among all chloroethenes. In order to examine the effect of the initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with electron beams and gamma rays. The G-values of decomposition in both irradiations increased with the initial concentration; those in electron beam irradiation were two times larger than those in gamma ray irradiation.

  11. Microbiological decomposition of bagasse after radiation pasteurization

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Hitoshi; Ishigaki, Isao

    1987-11-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with addition of some amount of inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulosic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi, and T. viride were used for comparison of decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by Pleurotus species or C. cinereus. Other species of mushrooms such as L. edodes had little decomposition ability even after alkali treatment. Radiation treatment at 10 kGy could not enhance the decomposition of bagasse compared with steam treatment, whereas higher doses of radiation slightly enhanced the decomposition of crude fibers by microorganisms.

  12. Fate of mercury in tree litter during decomposition

    Science.gov (United States)

    Pokharel, A. K.; Obrist, D.

    2011-09-01

    We performed a controlled laboratory litter incubation study to assess changes in dry mass, carbon (C) mass and concentration, mercury (Hg) mass and concentration, and stoichiometric relations between elements during decomposition. Twenty-five surface litter samples each, collected from four forest stands, were placed in incubation jars open to the atmosphere, and were harvested sequentially at 0, 3, 6, 12, and 18 months. Using a mass balance approach, we observed significant mass losses of Hg during decomposition (5 to 23 % of initial mass after 18 months), which we attribute to gaseous losses of Hg to the atmosphere through a gas-permeable filter covering the incubation jars. Percentage mass losses of Hg generally were less than observed dry mass and C mass losses (48 to 63 % Hg loss per unit dry mass loss), although one litter type showed similar losses. A field control study using the same litter types exposed at the original collection locations for one year showed that field litter samples were enriched in Hg concentrations by 8 to 64 % compared to samples incubated for the same time period in the laboratory, indicating strong additional sorption of Hg in the field, likely from atmospheric deposition. Solubility of Hg, assessed by exposure of litter to water upon harvest, was very low, suggesting that Hg remains associated with plant litter upon decomposition. Results also suggest that Hg accumulation in litter and surface layers in the field is driven mainly by additional sorption of Hg, with minor contributions from "internal" accumulation due to preferential loss of C over Hg. Litter types showed highly species-specific differences in Hg levels during decomposition, suggesting that emissions, retention, and sorption of Hg are dependent on litter type.
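
    The mass-balance bookkeeping behind statements like "48 to 63 % Hg loss per unit dry mass loss" can be sketched as follows; the concentrations and masses below are invented, not the study's data:

```python
def fraction_lost(conc0, mass0, conc_t, mass_t):
    """Fraction of an element's mass lost between two harvests, computed from
    concentration (e.g. ng/g) and litter dry mass (g) at each time."""
    return 1.0 - (conc_t * mass_t) / (conc0 * mass0)

# hypothetical litter sample: Hg concentration rises while dry mass falls
hg_loss = fraction_lost(conc0=50.0, mass0=10.0, conc_t=60.0, mass_t=7.0)   # 0.16
dry_loss = 1.0 - 7.0 / 10.0                                                # 0.30
hg_loss_per_dry_loss = hg_loss / dry_loss
```

    In this example Hg mass is lost, but more slowly than bulk dry mass (about 0.53 units of Hg loss per unit dry-mass loss), the qualitative pattern the study reports.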

  13. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, so gear fault diagnosis under variable conditions has long been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on Intrinsic time-scale decomposition (ITD)-Singular value decomposition (SVD) and Support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several Proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the Support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods with EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and Back propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify different fault types of gear under variable conditions.
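
    The SVD feature-extraction step can be sketched in a few lines. The multi-scale smoothing cascade below is a crude, hypothetical stand-in for ITD (which extracts proper rotation components by a different construction), and the two test signals are synthetic:

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def svd_feature_vector(signal, n_levels=4, w=8):
    """Singular values of a component matrix as a compact feature vector.
    Components here come from a simple smoothing cascade (stand-in for ITD)."""
    comps, residue = [], signal
    for _ in range(n_levels):
        smooth = moving_average(residue, w)
        comps.append(residue - smooth)    # detail retained at this scale
        residue = smooth
    comps.append(residue)
    return np.linalg.svd(np.array(comps), compute_uv=False)

t = np.linspace(0.0, 1.0, 256, endpoint=False)
healthy = np.sin(2 * np.pi * 5 * t)                                  # shaft tone only
faulty = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)  # added impacts
f_healthy = svd_feature_vector(healthy)
f_faulty = svd_feature_vector(faulty)
```

    The two conditions yield clearly separated singular-value vectors, which could then be fed to an SVM classifier (e.g. scikit-learn's `SVC`), mirroring the ITD-SVD-SVM pipeline of the paper.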

  14. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...

  15. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate if small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated fitting single pool negative exponential models, generally
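
    Fitting the single-pool negative exponential model m(t)/m(0) = exp(-k t) reduces to a log-linear regression on the remaining-mass fractions; a minimal sketch on synthetic litter-bag data (the k value and harvest fractions below are invented):

```python
import numpy as np

def decay_rate(t_months, mass_fraction):
    """Estimate k in m(t)/m(0) = exp(-k t) by linear regression on the
    log-transformed remaining-mass fractions from litter-bag harvests."""
    slope, _intercept = np.polyfit(t_months, np.log(mass_fraction), 1)
    return -slope

# hypothetical harvests at 1, 2, 4, 7 and 12 months, generated with k = 0.1/month
t = np.array([1.0, 2.0, 4.0, 7.0, 12.0])
frac = np.exp(-0.1 * t)
k_hat = decay_rate(t, frac)
```

    With noise-free data the fit recovers k exactly; with real litter-bag data the regression residuals indicate how well a single pool describes the decay.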

  16. Decomposition of forest products buried in landfills

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.

    2013-01-01

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  17. Decomposition of forest products buried in landfills

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)

    2013-11-15

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  18. Young Children's Thinking About Decomposition: Early Modeling Entrees to Complex Ideas in Science

    Science.gov (United States)

    Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona

    2013-10-01

    This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included in-class observations of different types of soil and soil profiling, visits to the school's compost bin, structured observations of decaying organic matter of various kinds, study of organisms that live in the soil, and models of environmental conditions that affect rates of decomposition. Both before and after instruction, students completed a written performance assessment that asked them to reason about the process of decomposition. Additional information was gathered through one-on-one interviews with six focus students who represented variability of performance across the class. During instruction, researchers collected video of classroom activity, student science journal entries, and charts and illustrations produced by the teacher. After instruction, the first-grade students showed a more nuanced understanding of the composition and variability of soils, the role of visible organisms in decomposition, and environmental factors that influence rates of decomposition. Through a variety of representational devices, including drawings, narrative records, and physical models, students came to regard decomposition as a process, rather than simply as an end state that does not require explanation.

  19. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    Science.gov (United States)

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve identification of substances from different spectral information. Decomposition of the mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, substantially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net with a two-layer structure fits the nonlinear transform between energy projection and basis material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases with photon noise. Moreover, the DNN takes only 0.4 s to generate a decomposition for a 360 × 512 projection, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of DNN. Thus, the deep learning paradigm provides a promising approach to solve the nonlinear problem in DECT.
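
    The two-sub-net shape of the architecture can be sketched as a forward pass. Everything here is an assumption for illustration: the layer sizes (64-bin spectrum, 8-dim code, two material thicknesses) are invented, the weights are random rather than trained, and the real SAE and decomposing net are fitted to calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sae_compress(spectrum, W_enc):
    # compressing sub-net: one encoder layer of a (pre-trained) stacked
    # auto-encoder, mapping an energy spectrum to a compact code
    return np.tanh(spectrum @ W_enc)

def decompose(code, W1, b1, W2, b2):
    # decomposing sub-net: a two-layer net fitting the nonlinear map from
    # the compressed energy projection to the two basis-material thicknesses
    hidden = np.tanh(code @ W1 + b1)
    return hidden @ W2 + b2

# hypothetical dimensions: 64-bin spectrum -> 8-dim code -> 2 thicknesses
W_enc = rng.normal(size=(64, 8))
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

spectrum = rng.random(64)
thickness = decompose(sae_compress(spectrum, W_enc), W1, b1, W2, b2)
```

    Training would fit these weights so that `thickness` matches known basis-material thicknesses from calibration phantoms; only the data flow is shown here.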

  20. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods, which are both very effective and straightforward for solving differential equations with local fractional derivatives.
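
    The flavor of Adomian decomposition can be seen on the classical (non-fractional) linear IVP u' = u, u(0) = 1, where the recursion u_0 = 1, u_{n+1}(t) = ∫_0^t u_n(s) ds rebuilds the Taylor series of e^t term by term. This is an illustration only; the paper treats the Laplace equation with local fractional operators:

```python
import math

def adomian_exp(t, n_terms=15):
    """Partial sum of the Adomian series for u' = u, u(0) = 1.
    Each integration step turns t^(n-1)/(n-1)! into t^n/n!."""
    term, total = 1.0, 1.0          # u_0 = 1
    for n in range(1, n_terms):
        term = term * t / n         # u_n(t) = t^n / n!
        total += term
    return total
```

    With 15 terms the partial sum matches e^t at t = 1 to better than 1e-9, showing how the decomposition series converges to the exact solution.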

  1. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    Czech Academy of Sciences Publication Activity Database

    Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.

    2008-01-01

    Roč. 14, č. 11 (2008), s. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords : climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008

  2. Flat norm decomposition of integral currents

    Directory of Open Access Journals (Sweden)

    Sharif Ibrahim

    2016-05-01

    Full Text Available Currents represent generalized surfaces studied in geometric measure theory. They range from relatively tame integral currents, representing oriented compact manifolds with boundary and integer multiplicities, to arbitrary elements of the dual space of differential forms. The flat norm provides a natural distance on the space of currents, and works by decomposing a $d$-dimensional current into $d$-dimensional and (the boundary of) $(d+1)$-dimensional pieces in an optimal way. Given an integral current, can we expect its flat norm decomposition to be integral as well? This is not known in general, except in the case of $d$-currents that are boundaries of $(d+1)$-currents in $\mathbb{R}^{d+1}$ (following results from a corresponding problem on the $L^1$ total variation ($L^1$TV) of functionals). On the other hand, for a discretized flat norm on a finite simplicial complex, the analogous statement holds even when the inputs are not boundaries. This simplicial version relies on the total unimodularity of the boundary matrix of the simplicial complex; a result distinct from the $L^1$TV approach. We develop an analysis framework that extends the result in the simplicial setting to one for $d$-currents in $\mathbb{R}^{d+1}$, provided a suitable triangulation result holds. In $\mathbb{R}^2$, we use a triangulation result of Shewchuk (bounding both the size and location of small angles), and apply the framework to show that the discrete result implies the continuous result for $1$-currents in $\mathbb{R}^2$.

  3. Decomposition of heterogeneous organic matterand its long-term stabilization in soils

    Science.gov (United States)

    Sierra, Carlos A.; Harmon, Mark E.; Perakis, Steven S.

    2011-01-01

    Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two different approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from these two approaches and propose an intermediate approach to study organic matter decomposition based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable depending on the degree of nonlinearity of the model and the simulation time. The two approaches can diverge substantially for predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representation of heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
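
    The contrast between the two model families can be sketched numerically: a discrete two-pool model versus a continuum of decay rates. All parameters below are illustrative, not taken from the paper.

```python
import numpy as np

# Discrete-pool vs continuum representation of decomposing carbon.
dt, steps = 0.01, 5000           # 0.01 yr per step, 50-year simulation
k_pools = np.array([0.5, 0.02])  # fast and slow pool rates (1/yr)
frac = np.array([0.6, 0.4])      # initial allocation to each pool

# Discrete model: two pools, each decaying first-order.
pools = frac.copy()

# Continuum model: 200 "qualities" whose rates follow a gamma distribution.
rng = np.random.default_rng(0)
k_cont = rng.gamma(shape=0.5, scale=0.4, size=200)
cont = np.full(200, 1.0 / 200)

for _ in range(steps):           # forward-Euler integration of dC/dt = -kC
    pools -= dt * k_pools * pools
    cont -= dt * k_cont * cont

print(pools.sum(), cont.sum())   # fraction of carbon remaining after 50 yr
```

Both runs start from the same total stock; the remaining fractions differ because the continuum places mass at very slow rates, illustrating how the choice of representation changes long-term predictions.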

  4. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement of appropriate process models, are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosive growth in the volume of event logs. Therefore, this paper proposes a new process mining technique based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs against process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on the state equation method of PN theory improves the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  5. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity equals that of binary and Lucas, and exceeds that of Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st least significant bit (LSB). When the secret bits are embedded in higher bit-planes, i.e. the 2nd LSB to the 8th most significant bit (MSB), the proposed scheme has more capacity than Natural-number-based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition affects pixel values, and hence stego quality, less than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
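
    As a hedged illustration of non-binary pixel decompositions, the sketch below contrasts ordinary binary bit-planes with a Fibonacci (Zeckendorf-style) decomposition; the paper's own 16-plane representation is not reproduced here.

```python
# Decomposing a pixel intensity over different weight systems. Binary uses
# weights 1,2,4,...,128; the Fibonacci scheme uses 1,2,3,5,8,..., and the
# greedy (Zeckendorf) expansion never sets two consecutive weights.

BIN = [1, 2, 4, 8, 16, 32, 64, 128]
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # weights <= 255

def to_planes(value, weights):
    """Greedy decomposition of value over the given weight sequence."""
    bits = []
    for w in reversed(weights):
        if w <= value:
            bits.append(1)
            value -= w
        else:
            bits.append(0)
    return bits[::-1]  # least-significant weight first

def from_planes(bits, weights):
    return sum(b * w for b, w in zip(bits, weights))

v = 200
fib_bits = to_planes(v, FIB)   # 200 = 144 + 55 + 1
bin_bits = to_planes(v, BIN)   # 200 = 128 + 64 + 8
assert from_planes(fib_bits, FIB) == v == from_planes(bin_bits, BIN)
```

Embedding a message bit in a low Fibonacci plane perturbs the pixel value less than flipping a high binary bit, which is the trade-off between capacity and stego quality the abstract describes.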

  6. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    Science.gov (United States)

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimating time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  7. Thermal decomposition process of silver behenate

    International Nuclear Information System (INIS)

    Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang

    2006-01-01

    The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher-temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, with two distinct thermal transitions. During the first transition at 138 °C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 °C, silver behenate decomposed. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating to 500 °C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gaseous products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small-molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles

  8. Radiolytic decomposition of 4-bromodiphenyl ether

    International Nuclear Information System (INIS)

    Tang Liang; Xu Gang; Wu Wenjing; Shi Wenyan; Liu Ning; Bai Yulei; Wu Minghong

    2010-01-01

    Polybrominated diphenyl ethers (PBDEs), which are widespread in the environment, are mainly removed by photochemical and anaerobic microbial degradation. In this paper, the decomposition of 4-bromodiphenyl ether (BDE-3), a PBDE homologue, is investigated by electron beam irradiation of its ethanol/water solution (reduction system) and acetonitrile/water solution (oxidation system). The radiolytic products were determined by GC coupled with an electron capture detector, and the reaction rate constant of e_sol⁻ in the reduction system was measured as 2.7 × 10¹⁰ L·mol⁻¹·s⁻¹ by pulse radiolysis. The results show that the BDE-3 concentration strongly affects the decomposition ratio in alkaline solution, and that the reduction system has a higher BDE-3 decomposition rate than the oxidation system. This indicates that BDE-3 was reduced by effectively capturing e_sol⁻ during radiolysis. (authors)

  9. Parallel processing for pitch splitting decomposition

    Science.gov (United States)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.

  10. Thermal plasma decomposition of fluorinated greenhouse gases

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Soo Seok; Watanabe, Takayuki [Tokyo Institute of Technology, Yokohama (Japan); Park, Dong Wha [Inha University, Incheon (Korea, Republic of)

    2012-02-15

    Fluorinated compounds mainly used in the semiconductor industry are potent greenhouse gases. Recently, thermal plasma gas scrubbers have been gradually replacing conventional burn-wet type gas scrubbers, which are based on the combustion of fossil fuels, because high conversion efficiency and control of byproduct generation are achievable in chemically reactive, high-temperature thermal plasma. Chemical equilibrium compositions at high temperature and numerical analysis of the complex thermal flow in the thermal plasma decomposition system are used to predict the process of thermal decomposition of fluorinated gases. In order to increase the economic feasibility of the thermal plasma decomposition process, increasing the thermal efficiency of the plasma torch and enhancing gas mixing between the thermal plasma jet and the waste gas are discussed. In addition, novel thermal plasma systems to be applied in thermal plasma gas treatment are introduced in the present paper.

  11. Hydrogen peroxide decomposition kinetics in aquaculture water

    DEFF Research Database (Denmark)

    Arvin, Erik; Pedersen, Lars-Flemming

    2015-01-01

    Hydrogen peroxide (HP) is used in aquaculture systems where preventive or curative water treatments occasionally are required. Use of chemical agents can be challenging in recirculating aquaculture systems (RAS) due to extended water retention times and because the agents must not damage the reared fish or the nitrifying bacteria in the biofilters at the concentrations required to eliminate pathogens. This calls for quantitative insight into the fate of disinfectant residuals during water treatment. This paper presents a kinetic model that describes HP decomposition in aquaculture water. The model assumes that the enzyme decay is controlled by an inactivation stoichiometry related to the HP decomposition. To make the model easily applicable, it is furthermore assumed that the COD is a proxy of the active biomass concentration of the water.
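
    The kinetic structure described, enzymatic HP removal coupled to enzyme inactivation in proportion to the HP decomposed, can be sketched as follows. The rate form and all parameter values are assumptions for illustration, not the paper's fitted equations.

```python
# Hedged sketch: HP is removed by catalase-like enzyme activity E, and E is
# itself inactivated in proportion to the HP turned over (an "inactivation
# stoichiometry" a). Parameter values are illustrative only.
k, a = 0.8, 0.05      # rate constant (L/(mg*h)), inactivation ratio
hp, e = 10.0, 1.0     # initial HP (mg/L) and enzyme activity (arbitrary)
dt = 0.001            # hours per Euler step

for _ in range(int(24 / dt)):          # simulate 24 hours
    r = k * e * hp                     # instantaneous decomposition rate
    hp -= dt * r
    e -= dt * a * r                    # enzyme decays with HP turnover
print(hp, e)  # HP falls; decomposition can stall if enzyme is exhausted first
```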

  12. Separable decompositions of bipartite mixed states

    Science.gov (United States)

    Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of the Bloch vectors are consistent with those of the correlation matrix, and that the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples of separable decompositions of bipartite mixed states are presented for illustration.

  13. Two Notes on Discrimination and Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Helena Skyt

    1998-01-01

    1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculating separate contributions for indicator variables: the contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
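
    A minimal numerical sketch of the standard two-fold Oaxaca-Blinder decomposition (on synthetic data, not the paper's) shows the identity the note extends: the mean gap splits exactly into an endowments part and a coefficients part.

```python
import numpy as np

# Synthetic wage data for two groups A and B: column 0 is an intercept,
# column 1 a continuous regressor ("years of schooling", illustrative).
rng = np.random.default_rng(1)
n = 2000
Xa = np.column_stack([np.ones(n), rng.normal(12, 2, n)])
Xb = np.column_stack([np.ones(n), rng.normal(11, 2, n)])
ya = Xa @ [1.0, 0.10] + rng.normal(0, 0.1, n)
yb = Xb @ [0.8, 0.08] + rng.normal(0, 0.1, n)

# Group-specific OLS fits.
ba, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)

gap = ya.mean() - yb.mean()
endowments = (Xa.mean(0) - Xb.mean(0)) @ ba   # explained by different X
coefficients = Xb.mean(0) @ (ba - bb)         # different returns to X
assert np.isclose(gap, endowments + coefficients)  # exact OLS identity
```

With indicator regressors, the `coefficients` term depends on which category is the omitted reference group, which is precisely the non-robustness the first note addresses.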

  14. Gamma ray induced decomposition of lanthanide nitrates

    International Nuclear Information System (INIS)

    Joshi, N.G.; Garg, A.N.

    1992-01-01

    Gamma ray induced decomposition of the lanthanide nitrates, Ln(NO₃)₃·xH₂O where Ln = La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Tm and Yb, has been studied at different absorbed doses up to 600 kGy. G(NO₂⁻) values depend on the absorbed dose and the nature of the outer cation. It has been observed that those lanthanides which exhibit variable valency (Ce and Eu) show lower G-values. An attempt has been made to correlate the thermal and radiolytic decomposition processes. (author). 20 refs., 3 figs., 1 tab

  15. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, M.J.

    1998-12-07

    The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentration of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single-effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB.

  16. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N; Haddad, Paul R

    2001-01-01

    The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course

  17. Basis of the biological decomposition of xenobiotica

    International Nuclear Information System (INIS)

    Mueller, R. von

    1993-01-01

    The ability of micro-organisms to decompose different molecules and to use them as a source of carbon, nitrogen, sulphur or energy is the basis for all biological processes for cleaning up contaminated soil. Therefore, the knowledge of these decomposition processes is an important precondition for judging which contamination can be treated biologically at all and which materials can be decomposed biologically. The decomposition schemes of the most important harmful material classes (aliphatic, aromatic and chlorinated hydrocarbons) are introduced and the consequences which arise for the practical application in biological cleaning up of contaminated soils are discussed. (orig.) [de

  18. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on convergence rate is presented qualitatively. Finally, a numerical experiment is given for comparing the convergence domains of the modified algorithm and the classical algorithm.
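
    The modification described above can be sketched in a few lines of NumPy (a minimal illustration; the small floor on the eigenvalues is an added assumption to guard against exact zeros):

```python
import numpy as np

def modified_newton_direction(hessian, grad):
    """Eigendecompose H, flip negative eigenvalues, solve for direction."""
    w, v = np.linalg.eigh(hessian)       # H = V diag(w) V^T
    w = np.maximum(np.abs(w), 1e-8)      # replace negatives by |.|, floor at 0
    h_mod = v @ np.diag(w) @ v.T         # reconstructed positive definite H
    return -np.linalg.solve(h_mod, grad)

# Indefinite Hessian: plain Newton would give a non-descent direction here.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)
assert d @ g < 0  # directional derivative is negative: a descent direction
```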

  19. Decomposition mechanisms and non-isothermal kinetics of LiHC_2O_4·H_2O

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    The thermal decomposition process of LiHC2O4·H2O from 30 to 600 °C was investigated by thermogravimetry and differential scanning calorimetry (TG-DSC). The phases formed at different temperatures were characterized by X-ray diffraction (XRD), which indicated decomposition steps at 150, 170, and 420 °C, corresponding to LiHC2O4, Li2C2O4, and Li2CO3, respectively. Reaction mechanisms in the whole sintering process were determined, and model-fitting kinetic approaches were applied to the data for non...

  20. An investigation on thermal decomposition of DNTF-CMDB propellants

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)

    2007-12-15

    The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
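
    The Kissinger method used above rests on the linear relation ln(β/Tp²) = const − Ea/(R·Tp), so a straight-line fit of ln(β/Tp²) against 1/Tp over several heating rates β recovers the activation energy Ea. The sketch below uses synthetic peak temperatures generated from an assumed Ea, not the propellant data:

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol*K)
beta = np.array([5.0, 10.0, 15.0, 20.0])    # heating rates (illustrative)
Ea_true = 150e3                             # assumed activation energy, J/mol

def peak_T(b, Ea, A=1e12):
    """Solve the Kissinger peak condition Ea*b/(R*T^2) = A*exp(-Ea/(R*T))."""
    T = 500.0
    for _ in range(100):                    # contractive fixed-point iteration
        T = Ea / (R * np.log(A * R * T**2 / (Ea * b)))
    return T

# Synthesize consistent DSC peak temperatures, then recover Ea from them.
Tp = np.array([peak_T(b, Ea_true) for b in beta])
slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea_est = -slope * R
print(Ea_est / 1e3)  # close to the assumed 150 kJ/mol
```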

  1. Domain decomposition methods for the neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2010-01-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems, and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, leading to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)
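
    The core idea of the iterative approach, repeatedly solving local problems with interface data taken from the neighbouring sub-domains' previous iterate, can be sketched in one dimension. The sketch below uses a classical overlapping alternating-Schwarz variant with Dirichlet interface data on a model Poisson problem, not the paper's non-overlapping Robin method or the neutron diffusion equations:

```python
import numpy as np

# Overlapping Schwarz iteration for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# split into two overlapping subdomains; each sweep solves a local
# tridiagonal problem using the neighbour's latest values as boundary data.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
left = slice(1, 65)      # interior nodes of subdomain 1
right = slice(40, n - 1) # interior nodes of subdomain 2 (overlap: 40..64)

def solve_local(u, sl):
    m = sl.stop - sl.start
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = np.ones(m)
    b[0] += u[sl.start - 1] / h**2   # interface data from current iterate
    b[-1] += u[sl.stop] / h**2
    u[sl] = np.linalg.solve(A, b)

for _ in range(50):
    solve_local(u, left)
    solve_local(u, right)

exact = 0.5 * x * (1 - x)
print(np.abs(u - exact).max())  # iterates converge to the global solution
```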

  2. Decomposition of jellyfish carrion in situ

    DEFF Research Database (Denmark)

    Chelsky, Ariella; Pitt, Kylie A.; Ferguson, Angus J.P.

    2016-01-01

    Jellyfish often form blooms that persist for weeks to months before they collapse en masse, resulting in the sudden release of large amounts of organic matter to the environment. This study investigated the biogeochemical and ecological effects of the decomposition of jellyfish in a shallow coast...

  3. Compactly supported frames for decomposition spaces

    DEFF Research Database (Denmark)

    Nielsen, Morten; Rasmussen, Kenneth Niemann

    2012-01-01

    In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...

  4. Thermal Decomposition of Aluminium Chloride Hexahydrate

    Czech Academy of Sciences Publication Activity Database

    Hartman, Miloslav; Trnka, Otakar; Šolcová, Olga

    2005-01-01

    Roč. 44, č. 17 (2005), s. 6591-6598 ISSN 0888-5885 R&D Projects: GA ČR(CZ) GA203/02/0002 Institutional research plan: CEZ:AV0Z40720504 Keywords : aluminum chloride hexahydrate * thermal decomposition * reaction kinetics Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 1.504, year: 2005

  5. Preparation, Structure Characterization and Thermal Decomposition ...

    African Journals Online (AJOL)

    NJD

    Decomposition Process of the Dysprosium(III) m-Methylbenzoate 1 ... A dinuclear complex [Dy(m-MBA)3phen]2·H2O was prepared by the reaction of DyCl3·6H2O, m-methylbenzoic acid and .... ing rate of 10 °C min–1 are illustrated in Fig. 4.

  6. A decomposition of pairwise continuity via ideals

    Directory of Open Access Journals (Sweden)

    Mahes Wari

    2016-02-01

    Full Text Available In this paper, we introduce and study the notions of (i,j)-regular-ℐ-closed sets, (i,j)-Aℐ-sets, (i,j)-ℐ-locally closed sets, p-Aℐ-continuous functions and p-ℐ-LC-continuous functions in ideal bitopological spaces and investigate some of their properties. Also, a new decomposition of pairwise continuity is obtained using these sets.

  7. Nested grids ILU-decomposition (NGILU)

    NARCIS (Netherlands)

    Ploeg, A. van der; Botta, E.F.F.; Wubs, F.W.

    1996-01-01

    A preconditioning technique is described which shows, in many cases, grid-independent convergence. This technique only requires an ordering of the unknowns based on the different levels of multigrid, and an incomplete LU-decomposition based on a drop tolerance. The method is demonstrated on a

  8. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...
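
    For a finite-state chain the construction can be illustrated concretely with a generic Doob-type decomposition (an illustrative sketch, not necessarily the paper's closed form): the compensator increment of f(S_t) is (Pf − f)(S_{t−1}), and what remains is a martingale increment with conditional mean zero.

```python
import numpy as np

# Doob-style decomposition of f(S_t) for a 2-state Markov chain with
# transition matrix P. The martingale increment is
#   M_t - M_{t-1} = f(S_t) - (Pf)(S_{t-1}),
# which has conditional mean zero given S_{t-1} by construction.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
f = np.array([0.0, 1.0])   # observe the indicator of state 1
Pf = P @ f                 # one-step conditional expectation of f

for s in range(2):         # verify E[M_t - M_{t-1} | S_{t-1} = s] = 0
    cond_mean = sum(P[s, t] * (f[t] - Pf[s]) for t in range(2))
    assert abs(cond_mean) < 1e-12
```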

  9. Triboluminescence and associated decomposition of solid methanol

    International Nuclear Information System (INIS)

    Trout, G.J.; Moore, D.E.; Hawke, J.G.

    1975-01-01

    The decomposition is initiated by the cooling of solid methanol through the β → α transition at 157.8 K, producing the gases hydrogen, carbon monoxide, and methane. The passage through this lambda transition causes the breakup of large crystals of β-methanol into crystallites of α-methanol and is accompanied by light emission as well as decomposition. This triboluminescence is accompanied by, and apparently produced by, electrical discharges through methanol vapor in the vicinity of the solid. The potential differences needed to produce the electrical breakdown of the methanol vapor apparently arise from the disruption of the long hydrogen-bonded chains of methanol molecules present in crystalline methanol. Charge separation following crystal deformation is a characteristic of substances which exhibit gas-discharge triboluminescence; solid methanol has been found to emit such luminescence when mechanically deformed in the absence of the β → α transition. The decomposition products are not produced directly by the breakup of the solid methanol but from vapor-phase methanol by the electrical discharges. That gas-phase decomposition does occur was confirmed by observing that the vapors of C₂H₅OH, CH₃OD, and CD₃OD decompose on being admitted to a vessel containing methanol undergoing the β → α phase transition. (U.S.)

  10. On Orthogonal Decomposition of a Sobolev Space

    OpenAIRE

    Lakew, Dejenie A.

    2016-01-01

    The theme of this short article is to investigate an orthogonal decomposition of a Sobolev space and look at some properties of the inner product therein and the distance defined from the inner product. We also determine the dimension of the orthogonal difference space and show the expansion of spaces as their regularity increases.

  11. TP89 - SIRZ Decomposition Spectral Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  12. Methodologies in forensic and decomposition microbiology

    Science.gov (United States)

    Culturable microorganisms represent only 0.1-1% of the total microbial diversity of the biosphere. This has severely restricted the ability of scientists to study the microbial biodiversity associated with the decomposition of ephemeral resources in the past. Innovations in technology are bringing...

  13. Organic matter decomposition in simulated aquaculture ponds

    NARCIS (Netherlands)

    Torres Beristain, B.

    2005-01-01

    Different kinds of organic and inorganic compounds (e.g. formulated food, manures, fertilizers) are added to aquaculture ponds to increase fish production. However, a large part of these inputs are not utilized by the fish and are decomposed inside the pond. The microbiological decomposition of the

  14. Wood decomposition as influenced by invertebrates

    Science.gov (United States)

    Michael D. Ulyshen

    2014-01-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial...

  15. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  18. Linear, Constant-rounds Bit-decomposition

    DEFF Research Database (Denmark)

    Reistad, Tord; Toft, Tomas

    2010-01-01

    When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ M...

  19. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Jae-Hyung Yoo; Eung-Ho Kim

    1999-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that it can be dissolved into dilute nitric acid. This work has been studied as a part of partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of actinide and lanthanide groups. In this study, therefore, some experimental work of photochemical decomposition of oxalate was carried out to prove its feasibility as a step of partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed in advance in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, which was chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found as 0.003 mole/hr at the conditions of 0.5 M HNO 3 and room temperature when a mercury lamp was used as a light source. (author)

  20. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    Science.gov (United States)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

    The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996, however, many of the elementary steps included had outdated rate expressions and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  1. Radiolytic decomposition of dioxins in liquid wastes

    International Nuclear Information System (INIS)

    Zhao Changli; Taguchi, M.; Hirota, K.; Takigami, M.; Kojima, T.

    2006-01-01

    The dioxins including polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) are some of the most toxic persistent organic pollutants. These chemicals have widely contaminated the air, water, and soil. They would accumulate in the living body through the food chains, leading to a serious public health hazard. In the present study, radiolytic decomposition of dioxins has been investigated in liquid wastes, including organic waste and waste-water. Dioxin-containing organic wastes are commonly generated in nonane or toluene. However, it was found that high radiation doses are required to completely decompose dioxins in the two solvents. The decomposition was more efficient in ethanol than in nonane or toluene. The addition of ethanol to toluene or nonane could achieve >90% decomposition of dioxins at the dose of 100 kGy. Thus, dioxin-containing organic wastes can be treated as regular organic wastes after addition of ethanol and subsequent γ-ray irradiation. On the other hand, radiolytic decomposition of dioxins easily occurred in pure-water than in waste-water, because the reaction species is largely scavenged by the dominant organic materials in waste-water. Dechlorination was not a major reaction pathway for the radiolysis of dioxin in water. In addition, radiolytic mechanism and dechlorinated pathways in liquid wastes were also discussed. (authors)

  2. Strongly étale difference algebras and Babbitt's decomposition

    OpenAIRE

    Tomašić, Ivan; Wibmer, Michael

    2015-01-01

    We introduce a class of strongly étale difference algebras, whose role in the study of difference equations is analogous to the role of étale algebras in the study of algebraic equations. We deduce an improved version of Babbitt's decomposition theorem and we present applications to difference algebraic groups and the compatibility problem.

  3. Thermal decomposition of barium valerate in argon

    DEFF Research Database (Denmark)

    Torres, P.; Norby, Poul; Grivel, Jean-Claude

    2015-01-01

    The thermal decomposition of barium valerate (Ba(C4H9CO2)2/Ba-pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR-spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 degrees C and 280...

  4. A Systolic Architecture for Singular Value Decomposition,

    Science.gov (United States)

    1983-01-01

    Presented at the 1st International Colloquium on Vector and Parallel Computing in Scientific Applications, Paris. Contract N00014-82-K-0703. [The remainder of this record is garbled extraction; it cites a private communication from Gene Golub, and G. H. Golub and F. T. Luk, "Singular Value Decomposition".]

  5. Direct observation of nanowire growth and decomposition

    DEFF Research Database (Denmark)

    Rackauskas, Simas; Shandakov, Sergey D; Jiang, Hua

    2017-01-01

    knowledge, so far this has been only postulated, but never observed at the atomic level. By means of in situ environmental transmission electron microscopy we monitored and examined the atomic layer transformation at the conditions of the crystal growth and its decomposition using CuO nanowires selected...

  6. Nash-Williams’ cycle-decomposition theorem

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2016-01-01

    We give an elementary proof of the theorem of Nash-Williams that a graph has an edge-decomposition into cycles if and only if it does not contain an odd cut. We also prove that every bridgeless graph has a collection of cycles covering each edge at least once and at most 7 times. The two results...

  7. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
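
The price-based coordination idea described above can be illustrated with a toy resource-allocation problem: two subsystems minimize local quadratic costs subject to a shared coupling constraint, and a central coordinator updates the price by dual subgradient ascent. All costs, step sizes and constraints below are made-up illustrative values, not the chapter's actual MPC formulation.

```python
# Dual decomposition sketch: minimize 0.5*c1*u1**2 + 0.5*c2*u2**2
# subject to the coupling constraint u1 + u2 = b. Each subsystem
# minimizes its local Lagrangian for a given price; the coordinator
# adjusts the price until the coupling constraint is satisfied.

def local_response(c, price):
    # subsystem solves min_u 0.5*c*u**2 + price*u  ->  u = -price/c
    return -price / c

c1, c2, b = 1.0, 2.0, 3.0   # local cost curvatures and shared resource
price, step = 0.0, 0.5
for _ in range(200):
    u1 = local_response(c1, price)
    u2 = local_response(c2, price)
    price += step * (u1 + u2 - b)   # dual (sub)gradient ascent on the price

# at the optimum u1 = 2, u2 = 1 and the coupling constraint holds
```

The subsystems never exchange their cost functions, only the scalar price, which is what makes the scheme attractive for distributed control.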

  8. Reference-tracking feedforward control design for linear dynamical systems through signal decomposition

    NARCIS (Netherlands)

    Kasemsinsup, Y.; Romagnoli, R.; Heertjes, M.F.; Weiland, S.; Butler, H.

    2017-01-01

    In this work, we study a novel approach towards the reference-tracking feedforward control design for linear dynamical systems. By utilizing the superposition property and exploiting signal decomposition together with a quadratic optimization process, we obtain a feedforward design procedure for

  9. Applying Laplace Adomian decomposition method for delay differential equations with boundary value problems

    Science.gov (United States)

    Yousef, Hamood Mohammed; Ismail, Ahmad Izani

    2017-11-01

    In this paper, the Laplace Adomian decomposition method (LADM) was applied to solve delay differential equations with boundary value problems. The solution is in the form of a convergent series which is easy to compute. The approach is tested on two test problems. The findings obtained exhibit the reliability and efficiency of the proposed method.

  10. DFT calculations on N2O decomposition by binuclear Fe complexes in Fe/ZSM-5

    NARCIS (Netherlands)

    Yakovlev, A.L.; Zhidomirov, G.M.; Santen, van R.A.

    2001-01-01

    N2O decomposition catalyzed by oxidized Fe clusters localized in the micropores of Fe/ZSM-5 has been studied using the DFT approach and a binuclear cluster model of the active site. Three different reaction routes were found, depending on temperature and water pressure. The results show that below

  11. Quantitative lung perfusion evaluation using Fourier decomposition perfusion MRI.

    Science.gov (United States)

    Kjørstad, Åsmund; Corteville, Dominique M R; Fischer, Andre; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R

    2014-08-01

    To quantitatively evaluate lung perfusion using Fourier decomposition perfusion MRI. The Fourier decomposition (FD) method is a noninvasive method for assessing ventilation- and perfusion-related information in the lungs, where the perfusion maps in particular have shown promise for clinical use. However, the perfusion maps are nonquantitative and dimensionless, making follow-ups and direct comparisons between patients difficult. We present an approach to obtain physically meaningful and quantifiable perfusion maps using the FD method. The standard FD perfusion images are quantified by comparing the partially blood-filled pixels in the lung parenchyma with the fully blood-filled pixels in the aorta. The percentage of blood in a pixel is then combined with the temporal information, yielding quantitative blood flow values. The values of 10 healthy volunteers are compared with SEEPAGE measurements which have shown high consistency with dynamic contrast enhanced-MRI. All pulmonary blood flow (PBF) values are within the expected range. The two methods are in good agreement (mean difference = 0.2 mL/min/100 mL, mean absolute difference = 11 mL/min/100 mL, mean PBF-FD = 150 mL/min/100 mL, mean PBF-SEEPAGE = 151 mL/min/100 mL). The Bland-Altman plot shows a good spread of values, indicating no systematic bias between the methods. Quantitative lung perfusion can be obtained using the Fourier Decomposition method combined with a small amount of postprocessing. Copyright © 2013 Wiley Periodicals, Inc.
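
The normalization idea in the abstract, expressing each lung pixel's Fourier decomposition amplitude as a fraction of the fully blood-filled aortic signal and scaling by the cardiac frequency, can be sketched as below. The function name, the simple proportional model and all numbers are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def quantify_perfusion(lung_amplitude, aorta_amplitude, heart_rate_bpm):
    """Convert dimensionless FD perfusion amplitudes into pulmonary blood
    flow in mL/min per 100 mL of lung tissue (illustrative model)."""
    blood_fraction = lung_amplitude / aorta_amplitude  # fraction of voxel filled with blood
    return blood_fraction * heart_rate_bpm * 100.0     # blood turnover per minute, per 100 mL

# hypothetical pixel amplitudes relative to an aortic amplitude of 1.0
amps = np.array([0.020, 0.025, 0.030])
pbf = quantify_perfusion(amps, aorta_amplitude=1.0, heart_rate_bpm=60)
# these land in the physiological range the study reports (~150 mL/min/100 mL)
```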

  12. Wood decomposition as influenced by invertebrates.

    Science.gov (United States)

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  13. The Slice Algorithm For Irreducible Decomposition of Monomial Ideals

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2009-01-01

    Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...

  14. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E.

    2016-01-01

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former
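
The SVD route mentioned in the abstract can be sketched in a few lines of NumPy. This is a generic serial illustration of the factorization A = QP (orthogonal times symmetric positive semi-definite), not the distributed-memory implementation the record describes:

```python
import numpy as np

def polar_decomposition(A):
    """Polar decomposition A = Q @ P with Q orthogonal and P symmetric PSD,
    computed directly from the singular value decomposition A = U S V^T."""
    U, s, Vt = np.linalg.svd(A)
    Q = U @ Vt                   # orthogonal polar factor
    P = Vt.T @ np.diag(s) @ Vt   # symmetric positive semi-definite factor
    return Q, P

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q, P = polar_decomposition(A)
assert np.allclose(Q @ P, A)            # factors reproduce A
assert np.allclose(Q.T @ Q, np.eye(4))  # Q is orthogonal
```

The QDWH iteration the abstract also mentions reaches the same Q without forming the full SVD, which is what makes it attractive at scale.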

  15. Decomposition Methods For a Piv Data Analysis with Application to a Boundary Layer Separation Dynamics

    Directory of Open Access Journals (Sweden)

    Václav URUBA

    2010-12-01

    Separation of the turbulent boundary layer (BL) on a flat plate under adverse pressure gradient was studied experimentally using the Time-Resolved PIV technique. The results of spatio-temporal analysis of the flow-field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) and its extension BOD (Bi-Orthogonal Decomposition) techniques are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes to understanding the physical mechanisms of a boundary layer separation process. The acquired information could be used to improve strategies of boundary layer separation control.
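
As a rough illustration of the POD step mentioned above, the standard snapshot-SVD formulation can be sketched as follows. The synthetic "flow field" and all dimensions are made up; this is not the authors' TR-PIV processing chain:

```python
import numpy as np

def pod_modes(snapshots):
    """Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots: (n_points, n_times), one spatial snapshot per column.
    Returns spatial modes, temporal coefficients and the fraction of
    fluctuation energy captured by each mode (most energetic first)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                           # fluctuating part
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    coeffs = np.diag(s) @ Vt                       # temporal coefficients
    return U, coeffs, energy

# synthetic example: two oscillating spatial patterns
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.linspace(0.0, 1.0, 64)
field = (np.outer(np.sin(2 * np.pi * x), np.sin(5 * t))
         + 0.3 * np.outer(np.cos(4 * np.pi * x), np.cos(11 * t)))
modes, coeffs, energy = pod_modes(field)
assert energy[0] >= energy[1]   # modes come ranked by energy content
```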

  16. Thermal decomposition of γ-irradiated lead nitrate

    International Nuclear Information System (INIS)

    Nair, S.M.K.; Kumar, T.S.S.

    1990-01-01

    The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs
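
The Avrami-Erofeev analysis mentioned above can be illustrated with a small fitting sketch: generate a conversion curve alpha(t) = 1 - exp(-(k*t)**n) and recover k and n from the usual double-logarithmic linearization. The rate constant and exponent are made-up values, not data from this study:

```python
import numpy as np

# Avrami-Erofeev kinetic law: alpha(t) = 1 - exp(-(k*t)**n).
# Linearization: ln(-ln(1 - alpha)) = n*ln(k) + n*ln(t).

k_true, n_true = 0.05, 2.0                      # illustrative values
t = np.linspace(1.0, 60.0, 30)                  # time
alpha = 1.0 - np.exp(-(k_true * t) ** n_true)   # fraction decomposed

y = np.log(-np.log(1.0 - alpha))
slope, intercept = np.polyfit(np.log(t), y, 1)  # straight-line fit
n_fit = slope
k_fit = np.exp(intercept / n_fit)

assert abs(n_fit - n_true) < 1e-6
assert abs(k_fit - k_true) < 1e-6
```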

  17. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The applications of the Monte Carlo method in reactor physics analysis are somewhat restricted due to the excessive memory demand in solving large-scale problems. Memory demand in MC simulation is analyzed first; it concerns geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data is dominant in memory cost and should be focused on in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule', which means problems are divided into different sub-domains to be dealt with separately, and some rules are established to make sure the whole results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a space parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced

  18. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
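
The figures of merit quoted above, compression ratio (CR) and percentage root-mean-square difference (PRD), have standard definitions that can be sketched as follows. The test signal is synthetic, not a MIT-BIH recording:

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original representation over the compressed one."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percentage root-mean-square difference between a signal and its
    reconstruction (variant without baseline subtraction)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

# synthetic "ECG" and a slightly perturbed reconstruction
n = np.linspace(0.0, 8.0 * np.pi, 1000)
x = np.sin(n)
x_rec = x + 0.01 * np.cos(n)
assert prd(x, x_rec) < 2.0                      # small reconstruction error
assert compression_ratio(11 * 1000, 310) > 35   # e.g. 11-bit samples -> 310 bits
```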

  19. Applications of tensor (multiway array) factorizations and decompositions in data mining

    DEFF Research Database (Denmark)

    Mørup, Morten

    2011-01-01

    Tensor (multiway array) factorization and decomposition has become an important tool for data mining. Fueled by the computational power of modern computers, researchers can now analyze large-scale tensorial structured data that only a few years ago would have been impossible. Tensor factorizations ... have several advantages over two-way matrix factorizations, including uniqueness of the optimal solution and component identification even when most of the data is missing. Furthermore, multiway decomposition techniques explicitly exploit the multiway structure that is lost when collapsing some ... of the modes of the tensor in order to analyze the data by regular matrix factorization approaches. Multiway decomposition is being applied to new fields every year and there is no doubt that the future will bring many exciting new applications. The aim of this overview is to introduce the basic concepts...
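
As a concrete, minimal instance of the multiway factorizations surveyed above, here is a plain-NumPy sketch of the CANDECOMP/PARAFAC (CP) model fitted by alternating least squares. It is an unregularized textbook version, not an optimized or missing-data-aware implementation:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I x R), (J x R) -> (I*J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=500, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cp_reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# recover an exactly rank-2 synthetic tensor
rng = np.random.default_rng(1)
T = cp_reconstruct(rng.standard_normal((4, 2)),
                   rng.standard_normal((5, 2)),
                   rng.standard_normal((6, 2)))
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(cp_reconstruct(A, B, C) - T) / np.linalg.norm(T)
assert rel_err < 1e-2
```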

  20. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  1. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  2. In situ XAS of the solvothermal decomposition of dithiocarbamate complexes

    NARCIS (Netherlands)

    Islam, H.-U.; Roffey, A.; Hollingsworth, N.; Catlow, R.; Wolthers, M.; de Leeuw, N.H.; Bras, W.; Sankar, G.; Hogarth, G.

    2012-01-01

    An in situ XAS study of the solvothermal decomposition of iron and nickel dithiocarbamate complexes was performed in order to gain understanding of the decomposition mechanisms. This work has given insight into the steps involved in the decomposition, showing variation in reaction pathways between

  3. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing

  4. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration

  5. An Approach to Operational Analysis: Doctrinal Task Decomposition

    Science.gov (United States)

    2016-08-04

    Once the unit is selected, CATS will output all of the doctrinal collective tasks associated with the unit. Currently, CATS outputs this information... Army unit are controlled data items, but for explanation purposes consider this simple example using a restaurant as the unit of interest. Table 1 ... shows an example Task Model for a restaurant using language and format similar to what CATS provides. Only 3 levels are shown in the example, but

  6. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed; Ghanem, Bernard; Wonka, Peter

    2015-01-01

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from a single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model, a combination of a Lambertian model, and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2 and l1-regularizers on the illumination vector field and albedo respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixing shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  7. 3D shape decomposition and comparison for gallbladder modeling

    Science.gov (United States)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  8. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed

    2015-12-07

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from a single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model, a combination of a Lambertian model, and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2 and l1-regularizers on the illumination vector field and albedo respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixing shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  9. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as the nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated, algorithm. Here we investigate an approach which makes use of neural networks appropriately trained on the results of a Monte Carlo system reliability/availability evaluation to quickly provide with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy which sponsored the project
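
The classical variance decomposition underlying such sensitivity analyses can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The linear test model, sample size and uniform inputs are illustrative assumptions, not the system reliability model discussed in the paper:

```python
import numpy as np

def first_order_sobol(model, n_vars, n_samples=100_000, seed=0):
    """Pick-freeze estimate of S_i = Var[E(Y|X_i)] / Var(Y)
    for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    yA = model(A)
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = B.copy()
        ABi[:, i] = A[:, i]            # freeze X_i, resample all other inputs
        S[i] = np.cov(yA, model(ABi))[0, 1] / yA.var()
    return S

# linear model Y = X1 + 2*X2 + 3*X3: exact indices are a_i^2 / sum(a_j^2)
coeffs = np.array([1.0, 2.0, 3.0])
S = first_order_sobol(lambda X: X @ coeffs, n_vars=3)
assert np.allclose(S, coeffs**2 / np.sum(coeffs**2), atol=0.02)
```

In the paper's setting the `model` call would be replaced by the trained neural-network surrogate of the Monte Carlo reliability evaluation, which is exactly where the speed-up comes from.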

  10. Self-decomposition of radiochemicals. Principles, control, observations and effects

    International Nuclear Information System (INIS)

    Evans, E.A.

    1976-01-01

    The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)

  11. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid…

  12. Thermal decomposition of irradiated casein molecules

    International Nuclear Information System (INIS)

    Ali, M.A.; Elsayed, A.A.

    1998-01-01

    Non-isothermal studies were carried out using the derivatograph, where thermogravimetry (TG) and differential thermogravimetry (DTG) measurements were used to obtain the activation energies of the first and second reactions for casein (glyco-phospho-protein) decomposition before and after exposure to 1 Gy γ-rays and up to 40 x 10 4 μGy fast neutrons. A 252 Cf source was used as the source of fast neutrons, associated with γ-rays; a 137 Cs source was used as a pure γ-source. The activation energies for the first and second reactions for casein decomposition were found to be smaller at 400 μGy than at lower and higher fast neutron doses. However, no change in activation energies was observed after γ-irradiation. It is concluded from the present study that destruction of casein molecules by low-level fast neutron doses may lead to changes in the shelf storage period of milk

  13. Investigation into kinetics of decomposition of nitrates

    International Nuclear Information System (INIS)

    Belov, B.A.; Gorozhankin, Eh.V.; Efremov, V.N.; Sal'nikova, N.S.; Suris, A.L.

    1985-01-01

    Using the method of thermogravimetry, the decomposition of nitrates, in particular Cd(NO 3 ) 2 ·4H 2 O, La(NO 3 ) 2 ·6H 2 O, Sr(NO 3 ) 2 , ZrO(NO 3 ) 2 ·2H 2 O and Y(NO 3 ) 3 ·6H 2 O, is studied in the 20-1000 deg C range. It is shown that gaseous pyrolysis products remaining in the material greatly hamper the heat transfer required for the decomposition, which reduces the reaction order. The effective activation energy of the process is in satisfactory agreement with the characteristic temperature of the last endotherm. Kinetic parameters are calculated by the minimization method using a computer

  14. Vertically-oriented graphenes supported Mn3O4 as advanced catalysts in post plasma-catalysis for toluene decomposition

    Science.gov (United States)

    Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa

    2018-04-01

    This work reports the catalytic performance of vertically-oriented graphenes (VGs) supported manganese oxide catalysts toward toluene decomposition in post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration of VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, exhibiting a great promise in PPC system for the effective decomposition of volatile organic compounds.

  15. Thermal decomposition kinetics of ammonium uranyl carbonate

    International Nuclear Information System (INIS)

    Kim, E.H.; Park, J.J.; Park, J.H.; Chang, I.S.; Choi, C.S.; Kim, S.D.

    1994-01-01

    The thermal decomposition kinetics of AUC [ammonium uranyl carbonate; (NH 4 ) 4 UO 2 (CO 3 ) 3 ] in an isothermal thermogravimetric (TG) reactor under N 2 atmosphere has been determined. The kinetic data can be represented by the two-dimensional nucleation and growth model. The reaction rate increases and the activation energy decreases with increasing particle size and precipitation time, an effect that appears for particle sizes larger than 30 μm owing to mechano-chemical phenomena. (orig.)
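The two-dimensional nucleation-and-growth (Avrami-Erofeev) model mentioned above can be fitted from isothermal TG-style conversion data by linearization; the rate constant and time grid below are made-up illustrative values, not the AUC data of the paper:

```python
import numpy as np

# Synthetic isothermal conversion data following the two-dimensional
# Avrami-Erofeev nucleation-and-growth model alpha(t) = 1 - exp(-(k*t)^n)
# with n = 2. The rate constant is an illustrative assumption.
k_true, n_true = 0.05, 2.0          # k in 1/min (assumed)
t = np.linspace(1.0, 60.0, 40)      # minutes
alpha = 1.0 - np.exp(-(k_true * t) ** n_true)

# Keep points away from alpha ~ 0 or 1, where the linearization is ill-conditioned
mask = (alpha > 0.02) & (alpha < 0.98)

# Linearized Avrami plot: ln(-ln(1 - alpha)) = n*ln(t) + n*ln(k)
yy = np.log(-np.log(1.0 - alpha[mask]))
xx = np.log(t[mask])
n_fit, intercept = np.polyfit(xx, yy, 1)
k_fit = np.exp(intercept / n_fit)

print(round(n_fit, 3), round(k_fit, 4))   # → 2.0 0.05
```

On noiseless synthetic data the fit recovers the Avrami exponent n = 2 exactly; with real TG data the same plot is used to read n and k off the slope and intercept.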

  16. Radiation decomposition of technetium-99m radiopharmaceuticals

    International Nuclear Information System (INIS)

    Billinghurst, M.W.; Rempel, S.; Westendorf, B.A.

    1979-01-01

    Technetium-99m radiopharmaceuticals are shown to be subject to autoradiation-induced decomposition, which results in increasing abundance of pertechnetate in the preparation. This autodecomposition is catalyzed by the presence of oxygen, although the removal of oxygen does not prevent its occurrence. The initial appearance of pertechnetate in the radiopharmaceutical is shown to be a function of the amount of radioactivity, the quantity of stannous ion used, and the ratio of 99m Tc to total technetium in the preparation

  17. Information decomposition method to analyze symbolical sequences

    International Nuclear Information System (INIS)

    Korotkov, E.V.; Korotkova, M.A.; Kudryashov, N.A.

    2003-01-01

    The information decomposition (ID) method to analyze symbolical sequences is presented. This method allows us to reveal a latent periodicity of any symbolical sequence. The ID method is shown to have advantages in comparison with application of the Fourier transformation, the wavelet transform and the dynamic programming method to look for latent periodicity. Examples of the latent periods for poetic texts, DNA sequences and amino acids are presented. Possible origin of a latent periodicity for different symbolical sequences is discussed
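One simple way to make the notion of latent periodicity concrete is to score each candidate period p by the mutual information between a symbol and its position modulo p. This statistic is an illustrative stand-in for the paper's ID measure, and the motif, noise level, and sequence length below are assumptions:

```python
import math
import random
from collections import Counter

def mutual_information(seq, p):
    """I(symbol; position mod p) in bits for a symbolic sequence."""
    n = len(seq)
    joint = Counter((s, i % p) for i, s in enumerate(seq))
    sym = Counter(seq)
    pos = Counter(i % p for i in range(n))
    mi = 0.0
    for (s, r), c in joint.items():
        pj = c / n
        mi += pj * math.log2(pj / ((sym[s] / n) * (pos[r] / n)))
    return mi

# Synthetic sequence with a latent period of 7, corrupted by ~30% random noise
random.seed(1)
motif = "ABCDEFG"
seq = [random.choice("ABCDEFG") if random.random() < 0.3 else motif[i % 7]
       for i in range(700)]

scores = {p: mutual_information(seq, p) for p in range(2, 20)}
# Multiples of the true period score equally well (plus estimation bias),
# so report the smallest near-maximal period as the latent period.
best = min(p for p in scores if scores[p] >= 0.9 * max(scores.values()))
print(best)   # → 7
```

The noise keeps the period invisible to the eye, but the mutual-information score still peaks at the true period and its multiples, which is the essence of "latent" periodicity.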

  18. Domain decomposition methods for fluid dynamics

    International Nuclear Information System (INIS)

    Clerc, S.

    1995-01-01

    A domain decomposition method for steady-state, subsonic fluid dynamics calculations, is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given on the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs
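The Schwarz alternating method that this record extends can be shown in its classical elliptic form: two overlapping subdomains of a 1D Poisson problem are solved in turn, each taking its interface boundary value from the other's latest iterate. The grid, overlap, and right-hand side are illustrative choices:

```python
import numpy as np

def solve_poisson(n, a, b, ua, ub, f=1.0):
    """Direct solve of -u'' = f on [a, b] with Dirichlet ends, n interior points."""
    h = (b - a) / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f * np.ones(n)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

# Schwarz alternating method on [0,1] with overlapping subdomains
# [0, 0.6] and [0.4, 1], u(0) = u(1) = 0, f = 1; exact u(x) = x(1-x)/2.
n = 59                       # interior points of the global grid, h = 1/60
x = np.linspace(0.0, 1.0, n + 2)
u = np.zeros(n + 2)          # global iterate, includes boundary nodes
i_l, i_r = 36, 24            # node 36 -> x = 0.6, node 24 -> x = 0.4

for _ in range(30):
    # Left subdomain [0, 0.6]: right boundary value taken from the current iterate
    u[1:i_l] = solve_poisson(i_l - 1, 0.0, 0.6, 0.0, u[i_l])
    # Right subdomain [0.4, 1]: left boundary value taken from the updated iterate
    u[i_r + 1:-1] = solve_poisson(n - i_r, 0.4, 1.0, u[i_r], 0.0)

err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
print(err)
```

With a 0.2 overlap the iteration contracts geometrically, so thirty sweeps drive the error down to the discretization level (here essentially machine precision, since the exact solution is quadratic).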

  19. Domain decomposition multigrid for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Yair

    1997-01-01

    A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.

  20. Decomposition of monolithic web application to microservices

    OpenAIRE

    Zaymus, Mikulas

    2017-01-01

    Solteq Oyj has an internal Wellbeing project for massage reservations. The task of this thesis was to transform the monolithic architecture of this application to microservices. The thesis starts with a detailed comparison between microservices and a monolithic application. It points out the benefits and disadvantages microservice architecture can bring to the project. Next, it describes the theory and possible strategies that can be used in the process of decomposition of an existing monolithic…

  1. Numerical CP Decomposition of Some Difficult Tensors

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Phan, A. H.; Cichocki, A.

    2017-01-01

    Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords: Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf
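Canonical polyadic (CP) decomposition itself can be sketched with plain alternating least squares (ALS) on a small synthetic tensor; this generic ALS is not the Levenberg-Marquardt method of the cited paper, and all sizes and seeds are arbitrary:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Plain ALS for the canonical polyadic decomposition of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Random exact rank-2 tensor; the factors are arbitrary test data.
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (3, 4, 5))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(rel_err)
```

ALS is the baseline the paper's "difficult tensors" defeat: on a generic low-rank tensor like this one it converges easily, but it can stall in so-called swamps on structured problems such as small-matrix-multiplication tensors.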

  2. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance… capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups…

  3. Nonconformity problem in 3D Grid decomposition

    Czech Academy of Sciences Publication Activity Database

    Kolcun, Alexej

    2002-01-01

    Roč. 10, č. 1 (2002), s. 249-253 ISSN 1213-6972. [International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002/10./. Plzeň, 04.02.2002-08.02.2002] R&D Projects: GA ČR GA105/99/1229; GA ČR GA105/01/1242 Institutional research plan: CEZ:AV0Z3086906 Keywords : structured mesh * decomposition * nonconformity Subject RIV: BA - General Mathematics

  4. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  5. Decomposition and reduction of AUC in hydrogen

    International Nuclear Information System (INIS)

    Ge Qingren; Kang Shifang; Zhou Meng

    1987-01-01

    AUC (Ammonium Uranyl Carbonate) conversion processes have been adopted extensively in nuclear fuel cycle. The kinetics investigation of these processes, however, has not yet been reported in detail at the published literatures. In the present work, the decomposition kinetics of AUC in hydrogen has been determined by non-isothermal method. DSC curves are solved with computer by Ge Qingren method. The results show that the kinetics obeys Avrami-Erofeev equation within 90% conversion. The apparent activation energy and preexponent are found to be 113.0 kJ/mol and 7.11 x 10 11 s -1 respectively. The reduction kinetics of AUC decomposition product in hydrogen at the range of 450 - 600 deg C has been determined by isothermal thermogravimetric method. The results show that good linear relationship can be obtained from the plot of conversion vs time, and that the apparent activation energy is found to be 113.9 kJ/mol. The effects of particle size and partial pressure of hydrogen are examined in reduction of AUC decomposition product. The reduction mechanism and the structure of particle are discussed according to the kinetics behaviour and SEM (scanning electron microscope) photograph

  6. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Yoo, J.H.; Kim, E.H.

    1998-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that it can be dissolved into dilute nitric acid. This work has been studied as a part of partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of actinide and lanthanide groups. In this study, therefore, the photochemical decomposition mechanism of oxalates in the presence of nitric acid was elucidated by experimental work. The decomposition of oxalates was proved to be dominated by the reaction with hydroxyl radical generated from the nitric acid, rather than with nitrite ion also formed from nitrate ion. The decomposition rate of neodymium oxalate, which was chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was found to be 0.003 M/hr at the conditions of 0.5 M HNO 3 and room temperature when a mercury lamp was used as a light source. (author)

  7. Thermal decomposition and reaction of confined explosives

    International Nuclear Information System (INIS)

    Catalano, E.; McGuire, R.; Lee, E.; Wrenn, E.; Ornellas, D.; Walton, J.

    1976-01-01

    Some new experiments designed to accurately determine the time interval required to produce a reactive event in confined explosives subjected to temperatures which will cause decomposition are described. Geometry and boundary conditions were both well defined, so that these experiments on the rapid thermal decomposition of HE are amenable to predictive modelling. Experiments have been carried out on TNT, TATB and on two plastic-bonded HMX-based high explosives, LX-04 and LX-10. When the results of these experiments are plotted as the logarithm of the time to explosion versus 1/T (an Arrhenius plot), the curves produced are remarkably linear. This is in contradiction to the results obtained by an iterative solution of the Laplace equation for a system with a first-order rate heat source; such calculations produce plots which display considerable curvature. The experiments have also shown that the time to explosion is strongly influenced by the void volume in the containment vessel. The experimental results are compared with calculations based on the heat flow equations coupled with first-order models of chemical decomposition. The comparisons demonstrate the need for a more realistic reaction model
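The Arrhenius-plot analysis described above (logarithm of time to explosion versus 1/T) reduces to a straight-line fit. The activation energy and pre-exponential below are invented round numbers, not measured values for TNT, TATB, LX-04 or LX-10:

```python
import numpy as np

# Synthetic time-to-explosion data obeying an Arrhenius law
# t_expl = A * exp(Ea / (R*T)); Ea and A are illustrative assumptions.
R = 8.314                     # J/(mol K)
Ea_true = 150e3               # J/mol (assumed)
A_pre = 1e-18                 # s (assumed)
T = np.linspace(450.0, 550.0, 8)           # K
t_expl = A_pre * np.exp(Ea_true / (R * T))

# Arrhenius plot: ln(t) versus 1/T is a straight line with slope Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(t_expl), 1)
Ea_fit = slope * R
print(round(Ea_fit / 1e3, 1))   # → 150.0 (kJ/mol)
```

A strongly curved Arrhenius plot, as produced by the first-order heat-source calculations the record mentions, would show up here as a systematic misfit of the straight line.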

  8. Gas hydrates forming and decomposition conditions analysis

    Directory of Open Access Journals (Sweden)

    А. М. Павленко

    2017-07-01

    Full Text Available The concept of gas hydrates has been defined; their brief description has been given; factors that affect the formation and decomposition of the hydrates have been reported; their distribution, structure and the thermodynamic conditions determining the disposition to gas hydrate formation in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs in the pipeline have been analyzed, and the necessity of their further study has been proved. In addition to the negative impact on the process of gas extraction, the hydrates' properties make it possible to outline the following possible fields of their industrial use: obtaining ultrahigh pressures in confined spaces at the hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate in a given mode; obtaining cold due to heat absorption at the hydrate decomposition; elimination of the open gas fountain by means of hydrate plugs in the bore hole of the gushing gasser; seawater desalination, based on the hydrate ability to only bind water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high temperature fog and clouds by means of hydrates; water-hydrates emulsion injection into the productive strata to raise the oil recovery factor; obtaining cold in the gas processing to cool the gas, etc.

  9. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    Science.gov (United States)

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  10. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    Science.gov (United States)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed the mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations with which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model is more accurate to capture the major features of global three dimensional atmospheric motions, compared to the traditional definitions of Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realized the decomposition of global atmospheric circulation using three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitudes and low latitudes circulations.

  11. Molecular tailoring approach for exploring structures, energetics and ...

    Indian Academy of Sciences (India)

    Administrator

    Keywords. Molecular clusters; linear scaling methods; molecular tailoring approach (MTA); Hartree–Fock. … An energy decomposition analysis was also performed, which clearly … through molecular dynamics simulation furnished by Takeguchi.

  12. Generalized first-order kinetic model for biosolids decomposition and oxidation during hydrothermal treatment.

    Science.gov (United States)

    Shanableh, A

    2005-01-01

    The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k(o)e(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k(o) in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease at which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
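The lumped first-order Arrhenius form used in the model, k = k0·e^(−Ea/RT), can be illustrated directly. The k0 and Ea values below are assumptions for demonstration, not the fitted levels from the 42 experiments:

```python
import math

# Lumped first-order kinetic sketch for hydrothermal treatment:
# a lumped quantity (COD for oxidation, PCOD for decomposition) decays as
# dC/dt = -k(T)*C with k(T) = k0 * exp(-Ea/(R*T)).
R = 8.314              # J/(mol K)
k0 = 1.0e8             # 1/min, fixed pre-exponential (assumed)
Ea = 90e3              # J/mol (assumed)

def k(T_celsius):
    T = T_celsius + 273.15
    return k0 * math.exp(-Ea / (R * T))

def remaining_fraction(T_celsius, t_min):
    """Fraction of the lumped quantity left after t_min minutes at T_celsius."""
    return math.exp(-k(T_celsius) * t_min)

# Raising the temperature from 250 to 300 degC sharply accelerates the reaction:
print(remaining_fraction(250.0, 10.0), remaining_fraction(300.0, 10.0))
```

With a fixed k0, as in the paper's approach, each lump's reactivity is summarized by a single Ea level, and the exponential temperature sensitivity above is what makes those levels meaningful rankings of "ease of reaction".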

  13. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many $\\cal{NP}$-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
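The dynamic programming that tree-decomposition methods parallelize is easiest to see in the treewidth-1 case, i.e. maximum weighted independent set on an ordinary tree; the parallel OpenMP/MPI machinery of the paper is out of scope for this sketch:

```python
# Dynamic programming for maximum weighted independent set on a tree -- the
# treewidth-1 special case of the tree-decomposition DP described above.

def mwis_tree(adj, weight, root=0):
    """adj: adjacency lists of a tree; weight[v]: weight of vertex v."""
    n = len(adj)
    inc = [0] * n    # best weight in subtree of v with v included
    exc = [0] * n    # best weight in subtree of v with v excluded
    parent = [-1] * n
    order = [root]
    for v in order:                       # build a top-down order iteratively
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                order.append(w)
    for v in reversed(order):             # process children before parents
        inc[v] = weight[v]
        for w in adj[v]:
            if w != parent[v]:
                inc[v] += exc[w]          # v included -> children excluded
                exc[v] += max(inc[w], exc[w])
    return max(inc[root], exc[root])

# Path 0-1-2-3-4 with weights 1,9,1,9,1: the optimum picks vertices 1 and 3.
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(mwis_tree(adj, [1, 9, 1, 9, 1]))   # → 18
```

On a general graph of bounded treewidth the same two-table idea generalizes to one table per bag of the tree decomposition, and it is those per-bag tables that the paper's hybrid OpenMP/MPI algorithm fills in parallel.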

  14. Europlexus: a domain decomposition method in explicit dynamics

    International Nuclear Information System (INIS)

    Faucher, V.; Hariddh, Bung; Combescure, A.

    2003-01-01

    Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)

  15. Decomposition of business process models into reusable sub-diagrams

    Directory of Open Access Journals (Sweden)

    Wiśniewski Piotr

    2017-01-01

    Full Text Available In this paper, an approach to automatic decomposition of business process models is proposed. According to our method, an existing BPMN diagram is disassembled into reusable parts containing the desired number of elements. Such elements and structure can work as design patterns and be validated by a user in terms of correctness. In the next step, these component models are categorised considering their parameters such as resources used, as well as input and output data. The classified components may be considered a repository of reusable parts, that can be further applied in the design of new models. The proposed technique may play a significant role in facilitating the business process redesign procedure, which is of a great importance regarding engineering and industrial applications.

  16. The proper generalized decomposition for advanced numerical simulations a primer

    CERN Document Server

    Chinesta, Francisco; Leygue, Adrien

    2014-01-01

    Many problems in scientific computing are intractable with classical numerical techniques. These fail, for example, in the solution of high-dimensional models due to the exponential increase of the number of degrees of freedom. Recently, the authors of this book and their collaborators have developed a novel technique, called Proper Generalized Decomposition (PGD) that has proven to be a significant step forward. The PGD builds by means of a successive enrichment strategy a numerical approximation of the unknown fields in a separated form. Although first introduced and successfully demonstrated in the context of high-dimensional problems, the PGD allows for a completely new approach for addressing more standard problems in science and engineering. Indeed, many challenging problems can be efficiently cast into a multi-dimensional framework, thus opening entirely new solution strategies in the PGD framework. For instance, the material parameters and boundary conditions appearing in a particular mathematical mod...
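The separated-representation idea at the heart of the PGD can be sketched on sampled data: a bivariate field is approximated by a growing sum of products X_k(x)·Y_k(y), each new mode found by an alternating fixed point on the residual. This is a conceptual toy on a sampled function, not the weak-form PDE enrichment of the actual PGD, and the test field is an arbitrary choice:

```python
import numpy as np

# Greedy rank-one enrichment of a separated representation
# u(x, y) ~ sum_k X_k(x) * Y_k(y), PGD-style, on a sampled field.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 60)
F = np.exp(-3.0 * np.abs(x[:, None] - y[None, :]))   # non-separable test field

R = F.copy()
modes = []
for k in range(5):                      # successive enrichment
    X = np.random.default_rng(k).standard_normal(len(x))
    for _ in range(50):                 # alternating fixed-point iterations
        Y = R.T @ X / (X @ X)
        X = R @ Y / (Y @ Y)
    R = R - np.outer(X, Y)              # remove the new mode from the residual
    modes.append((X, Y))

rel_err = np.linalg.norm(R) / np.linalg.norm(F)
print(rel_err)
```

A handful of modes captures a smooth 2D field; the payoff of the PGD is that the same sum-of-products structure keeps the cost linear in the number of dimensions, instead of exponential, for genuinely high-dimensional models.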

  17. Task decomposition for multilimbed robots to work in the reachable-but-unorientable space

    Science.gov (United States)

    Su, Chao; Zheng, Yuan F.

    1990-01-01

    Multilimbed industrial robots that have at least one arm and two or more legs are suggested for enlarging robot workspace in industrial automation. To plan the motion of a multilimbed robot, the arm-leg motion-coordination problem is raised and task decomposition is proposed to solve the problem; that is, a given task described by the destination position and orientation of the end-effector is decomposed into subtasks for arm manipulation and for leg locomotion, respectively. The former is defined as the end-effector position and orientation with respect to the legged main body, and the latter as the main-body position and orientation in the world coordinates. Three approaches are proposed for the task decomposition. The approaches are further evaluated in terms of energy consumption, from which an optimal approach can be selected.
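The task decomposition described above amounts to factoring one homogeneous transform into two: T_world_ee = T_world_platform · T_platform_ee. A minimal sketch, with arbitrary example poses:

```python
import numpy as np

# Split a world-frame end-effector goal into a leg-locomotion subtask (main-body
# pose in world coordinates) and an arm-manipulation subtask (end-effector pose
# relative to the body). The numeric poses are arbitrary illustrative choices.

def pose(theta_z, tx, ty, tz):
    """Homogeneous transform: rotation about z by theta_z, then translation."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    return np.array([[c, -s, 0.0, tx],
                     [s,  c, 0.0, ty],
                     [0.0, 0.0, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])

T_task = pose(1.2, 2.0, 1.0, 0.8)          # desired end-effector pose (world)
T_platform = pose(0.9, 1.5, 0.7, 0.0)      # chosen main-body pose (world)

# Arm subtask: end-effector pose expressed relative to the main body
T_arm = np.linalg.inv(T_platform) @ T_task

# Recomposing the two subtasks must reproduce the original task
recomposed = T_platform @ T_arm
print(np.allclose(recomposed, T_task))     # → True
```

The freedom in choosing T_platform is exactly what the record's three decomposition approaches exploit; each choice yields a different arm subtask, and a criterion such as energy consumption selects among them.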

  18. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    Science.gov (United States)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  19. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
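The cut-set side of the analysis can be illustrated on a toy coherent fault tree: cut sets of an AND gate are unions over the cross product of its inputs' cut sets, non-minimal sets are pruned, and the rare-event sum gives an upper bound on the top-event unavailability. The gate structure and probabilities below are assumptions, far smaller than the trees the decomposition method targets:

```python
import math
from itertools import product

def minimize(cutsets):
    """Keep only the minimal cut sets (none contains another)."""
    sets = {frozenset(c) for c in cutsets}
    return sorted((c for c in sets if not any(o < c for o in sets)),
                  key=lambda c: (len(c), sorted(c)))

# TOP = AND(OR(a, b), OR(a, c)): an OR gate concatenates its inputs' cut sets,
# an AND gate combines them by unions over the cross product.
left = [{'a'}, {'b'}]
right = [{'a'}, {'c'}]
top_mcs = minimize(s | t for s, t in product(left, right))
print([sorted(c) for c in top_mcs])       # → [['a'], ['b', 'c']]

# Rare-event upper bound on the top-event unavailability (probabilities assumed)
q = {'a': 0.01, 'b': 0.02, 'c': 0.05}
upper = sum(math.prod(q[e] for e in cs) for cs in top_mcs)
print(round(upper, 6))                    # → 0.011
```

This brute-force expansion blows up combinatorially on realistic trees, which is precisely why the paper decomposes a large tree into disjoint subtrees with fewer variables before computing MCS and bounds.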

  20. South Africa's electricity consumption: A sectoral decomposition analysis

    International Nuclear Information System (INIS)

    Inglesi-Lotz, Roula; Blignaut, James N.

    2011-01-01

    Highlights: → We conduct a decomposition exercise of the South African electricity consumption. → The increase in electricity consumption was due to output and structural changes. → Slowly rising electricity intensity acted to restrain consumption. → Increases in production contributed to the rising trend in all sectors. → Only 5 sectors' consumption was negatively affected by efficiency improvements. -- Abstract: South Africa's electricity consumption has shown a sharp increase since the early 1990s. Here we conduct a sectoral decomposition analysis of the electricity consumption for the period 1993-2006 to determine the main drivers responsible for this increase. The results show that the increase was mainly due to output or production related factors, with structural changes playing a secondary role. While there is some evidence of efficiency improvements, indicated here as a slowdown in the rate of increase of electricity intensity, it was not nearly sufficient to offset the combined production and structural effects that propelled electricity consumption forward. This general economy-wide statement, however, can be misleading since the results, in essence, are very sector specific and the inter-sectoral differences are substantial. Increases in production were proven to be part of the rising trend for all sectors. However, only five out of fourteen sectors were affected by efficiency improvements, while the structural changes affected the sectors' electricity consumption in different ways. These differences concerning the production, structural and efficiency effects on the sectors indicate the need for a sectoral approach in the energy policy-making of the country rather than a blanket or unilateral economy-wide approach.
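A sectoral decomposition of this kind is commonly computed with the additive LMDI-I formula, which splits the change in consumption exactly into activity (production), structure, and intensity effects via logarithmic-mean weights. The two-sector numbers below are made up for illustration, not South African data, and the record does not state which index method the authors actually used:

```python
import math

def lmdi(Q0, Q1, share0, share1, I0, I1):
    """Additive LMDI-I effects per sector for E = Q * share * intensity."""
    act, struct, inten = [], [], []
    for s0, s1, i0, i1 in zip(share0, share1, I0, I1):
        e0, e1 = Q0 * s0 * i0, Q1 * s1 * i1
        L = (e1 - e0) / math.log(e1 / e0) if e1 != e0 else e0  # logarithmic mean
        act.append(L * math.log(Q1 / Q0))
        struct.append(L * math.log(s1 / s0))
        inten.append(L * math.log(i1 / i0))
    return act, struct, inten

Q0, Q1 = 100.0, 130.0                       # total output (illustrative)
share0, share1 = [0.6, 0.4], [0.5, 0.5]     # sector output shares
I0, I1 = [2.0, 1.0], [1.9, 1.05]            # electricity intensity per unit output

act, struct, inten = lmdi(Q0, Q1, share0, share1, I0, I1)
dE = (Q1 * sum(s * i for s, i in zip(share1, I1))
      - Q0 * sum(s * i for s, i in zip(share0, I0)))
print(round(sum(act) + sum(struct) + sum(inten), 6), round(dE, 6))
```

The defining property of LMDI-I is that the three effects sum to the observed change with no residual, which is what lets the paper attribute the consumption increase cleanly across production, structure, and intensity, sector by sector.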

  1. Health monitoring of pipeline girth weld using empirical mode decomposition

    Science.gov (United States)

    Rezaei, Davood; Taheri, Farid

    2010-05-01

    In the present paper the Hilbert-Huang transform (HHT), as a time-series analysis technique, has been combined with a local diagnostic approach in an effort to identify flaws in pipeline girth welds. This method is based on monitoring the free vibration signals of the pipe in its healthy and flawed states, and processing the signals through the HHT and its associated signal decomposition technique, known as empirical mode decomposition (EMD). The EMD method decomposes the vibration signals into a collection of intrinsic mode functions (IMFs). The deviations in structural integrity, measured from a healthy-state baseline, are subsequently evaluated by two damage-sensitive parameters. The first is a damage index, referred to as the EM-EDI, which is established based on an energy comparison of the first or second IMF of the vibration signals, before and after occurrence of damage. The second parameter is the lag in instantaneous phase, a quantity derived from the HHT. In the developed methodologies, the pipe's free vibration is monitored by piezoceramic sensors and a laser Doppler vibrometer. The effectiveness of the proposed techniques is demonstrated through a set of numerical and experimental studies on a steel pipe with a mid-span girth weld, for both pressurized and nonpressurized conditions. To simulate a crack, a narrow notch is cut on one side of the girth weld. Several damage scenarios, including notches of different depths and at various locations on the pipe, are investigated. Results from both numerical and experimental studies reveal that in all damage cases the sensor located at the notch vicinity could successfully detect the notch and qualitatively predict its severity. The effect of internal pressure on the damage identification method is also monitored. Overall, the results are encouraging and demonstrate the potential of the proposed approaches as inexpensive systems for structural health monitoring purposes.
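The instantaneous-phase indicator can be sketched without the full EMD pipeline: for a mono-component response, a small flaw-induced frequency drop makes the damaged signal accumulate a phase lag relative to the healthy baseline. The signals, the 1% frequency shift, and the FFT-based analytic signal below are illustrative assumptions (the paper extracts IMFs with EMD first; here a single-mode signal stands in for an IMF):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the construction behind the Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

# Synthetic free-decay responses: the "damaged" pipe mode is assumed to be
# 1% lower in frequency than the healthy baseline.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
f0 = 120.0
healthy = np.exp(-2.0 * t) * np.cos(2 * np.pi * f0 * t)
damaged = np.exp(-2.0 * t) * np.cos(2 * np.pi * 0.99 * f0 * t)

phase_h = np.unwrap(np.angle(analytic_signal(healthy)))
phase_d = np.unwrap(np.angle(analytic_signal(damaged)))
lag = phase_h - phase_d                  # grows roughly linearly in time

print(round(lag[1500], 2))               # roughly 2*pi*1.2*0.75 ≈ 5.7 rad
```

The steadily growing phase lag is the damage-sensitive quantity; in the paper's setting the same comparison is applied to the first or second IMF extracted by EMD, alongside the energy-based EM-EDI index.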

  2. Analysis of large fault trees based on functional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)

    2011-03-15

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
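
The cut-set side of such an analysis can be illustrated on a toy coherent fault tree. The sketch below is not the paper's BDD-based decomposition method: it simply expands a small gate dictionary (gate and event names are illustrative) into minimal cut sets and forms the rare-event upper bound on the top-event unavailability.

```python
import math
from itertools import product

# Gates: name -> ("AND"/"OR", [children]); anything not listed is a basic event.
tree = {"TOP": ("OR", ["G1", "E3"]),
        "G1": ("AND", ["E1", "E2"])}
unavail = {"E1": 1e-2, "E2": 2e-2, "E3": 1e-3}

def cut_sets(node):
    """Expand a coherent fault tree into cut sets (frozensets of basic events)."""
    if node not in tree:
        return [frozenset([node])]
    op, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":
        return [cs for sets in child_sets for cs in sets]
    combined = []
    for combo in product(*child_sets):   # AND: union of one pick per child
        combined.append(frozenset().union(*combo))
    return combined

def minimize(sets):
    """Keep only minimal cut sets (discard proper supersets of another set)."""
    return [s for s in sets if not any(o < s for o in sets)]

mcs = minimize(cut_sets("TOP"))
# Rare-event upper bound on top-event unavailability: sum of cut-set probabilities.
upper = sum(math.prod(unavail[e] for e in s) for s in mcs)
```

For this toy tree the minimal cut sets are {E1, E2} and {E3}, giving an upper bound of 1e-2 * 2e-2 + 1e-3 = 1.2e-3.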

  3. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    Science.gov (United States)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
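
The POD half of the methodology reduces, at its core, to a singular value decomposition of a snapshot matrix. A minimal sketch on synthetic rank-two "flow" data, not the paper's aeroelastic solver:

```python
import numpy as np

# Snapshot matrix: each column is the (synthetic) flow state at one time instant.
x = np.linspace(0, 2 * np.pi, 200)
snapshots = np.column_stack(
    [np.sin(x) * np.cos(0.1 * k) + 0.5 * np.sin(2 * x) * np.sin(0.1 * k)
     for k in range(50)]
)

# POD modes are the left singular vectors; singular values rank their energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
r = int(np.searchsorted(np.cumsum(energy), 0.999) + 1)  # modes for 99.9% energy

# Reduced-order coordinates: project the snapshots onto the truncated basis.
basis = U[:, :r]
coords = basis.T @ snapshots
reconstruction = basis @ coords
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

The synthetic data are exactly rank two, so two POD modes capture the field to machine precision; the modal amplitudes in `coords` are the quantities a reduced-order ODE model would evolve.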

  4. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
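
The preconditioning idea can be illustrated with a POD projection ahead of the DMD step. The paper's contribution is an incremental POD; the sketch below uses an ordinary batch SVD on synthetic linear data, so it shows the projected-DMD arithmetic only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic linear dynamics x_{k+1} = A x_k, observed through a random lifting.
A = np.diag([0.9, 0.5])
states = [np.array([1.0, 1.0])]
for _ in range(30):
    states.append(A @ states[-1])
C = rng.standard_normal((100, 2))          # lift the 2-D dynamics into 100-D data
data = C @ np.column_stack(states)

X, Y = data[:, :-1], data[:, 1:]

# POD preconditioning: project onto the r leading left singular vectors of X.
r = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]

# Reduced operator; its eigenvalues are the DMD eigenvalues.
A_tilde = Ur.T @ Y @ Vr.T @ np.diag(1.0 / sr)
eigvals = np.sort(np.linalg.eigvals(A_tilde).real)
```

The projected operator recovers the true dynamic eigenvalues 0.5 and 0.9 from the high-dimensional data while only ever factorizing an r-dimensional matrix.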

  5. Thermodynamic anomaly in magnesium hydroxide decomposition

    International Nuclear Information System (INIS)

    Reis, T.A.

    1983-08-01

The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH)2(s) = MgO(s) + H2O(g) when determined by Knudsen effusion and static manometry at the same temperature was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approximately 10^-4 of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical entropy calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mole. It has been shown here that the experimental results are qualitatively independent of the type of Mg(OH)2 used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH)2 are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs below 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH)2-MgO solid solution during continuous thermal decomposition in Knudsen cells.
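
The quoted 10^-4 pressure ratio and the ~48 kJ/mole metastability are consistent in order of magnitude via dG = -RT ln(p_app/p_eq). A quick check, assuming a representative decomposition temperature of 600 K (the abstract does not state one):

```python
import math

R = 8.314      # J/(mol K)
T = 600.0      # K, assumed representative decomposition temperature
ratio = 1e-4   # apparent / true equilibrium vapor pressure

# Free-energy offset implied by the depressed apparent equilibrium pressure.
dG = -R * T * math.log(ratio)   # J/mol
```

This gives roughly 46 kJ/mol, comparable to the ~48 kJ/mole metastability inferred in the abstract.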

  6. Methyl Iodide Decomposition at BWR Conditions

    International Nuclear Information System (INIS)

    Pop, Mike; Bell, Merl

    2012-09-01

Based on favourable results from short-term testing of methanol addition to an operating BWR plant, AREVA has performed numerous studies in support of necessary Engineering and Plant Safety Evaluations prior to extended injection of methanol. The current paper presents data from a study intended to provide further understanding of the decomposition of methyl iodide as it affects the assessment of methyl iodide formation with the application of methanol at BWR plants. This paper describes the results of the decomposition testing under UV-C light at laboratory conditions and its effect on the subject methyl iodide production evaluation. The study of the formation and decomposition of methyl iodide as affected by methanol addition is one phase of a larger AREVA effort to provide a generic plant Safety Evaluation prior to long-term methanol injection into an operating BWR. Other testing phases have investigated the compatibility of methanol with fuel construction materials, plant structural materials, plant consumable materials (i.e., elastomers and coatings), and ion exchange resins. Methyl iodide is known to be very unstable, and is typically preserved with copper metal or other stabilizing materials when produced and stored. It is even more unstable when exposed to light, heat, radiation, and water. Additionally, it is known that methyl iodide will decompose radiolytically, and that this effect may be simulated using ultraviolet radiation (UV-C) [2]. In the tests described in this paper, the use of a UV-C light source provides activation energy for the formation of methyl iodide. This is similar to the effect expected from Cherenkov radiation present in a reactor core after shutdown.
Based on the testing described in this paper, it is concluded that injection of methanol at concentrations below 2.5 ppm in BWR applications to mitigate IGSCC of internals is inconsequential to the accident conditions postulated in the FSAR as they relate to methyl iodide formation.

  7. Task Decomposition Module For Telerobot Trajectory Generation

    Science.gov (United States)

    Wavering, Albert J.; Lumia, Ron

    1988-10-01

A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.
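
As a concrete example of the a priori planning style discussed above, a cubic time-scaling between two joint positions with zero endpoint velocities is about the simplest published trajectory generator. This is an illustrative sketch, not the FTS module:

```python
def cubic_trajectory(q0, qf, tf):
    """Coefficients of q(t) = a0 + a1*t + a2*t^2 + a3*t^3 joining q0 to qf
    over duration tf with zero velocity at both endpoints."""
    a0, a1 = q0, 0.0
    a2 = 3.0 * (qf - q0) / tf**2
    a3 = -2.0 * (qf - q0) / tf**3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

coeffs = cubic_trajectory(q0=0.0, qf=1.0, tf=2.0)
mid = evaluate(coeffs, 1.0)   # position at the midpoint of the motion
```

The symmetric boundary conditions place the midpoint exactly halfway between the endpoints; a sensory-interactive planner would instead recompute `coeffs` each time the goal `qf` is updated.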

  8. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  9. Decomposition kinetics of aminoborane in aqueous solutions

    International Nuclear Information System (INIS)

    Shvets, I.B.; Erusalimchik, I.G.

    1984-01-01

The kinetics of aminoborane hydrolysis have been studied using the method of galvanostatic polarization curves on a platinum electrode in buffer solutions at pH 3, 5, and 7. The supposition that aminoborane hydrolysis is a first-order reaction in aminoborane is confirmed. The rate constant of aminoborane decomposition is k = 2.5x10^-5 s^-1 at pH 5 and k = 1.12x10^-4 s^-1 at pH 3.
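
For a first-order reaction the half-life follows directly from the rate constant, t_1/2 = ln 2 / k. A quick check using the constants reported above:

```python
import math

k_ph5 = 2.5e-5    # s^-1, reported rate constant at pH 5
k_ph3 = 1.12e-4   # s^-1, reported rate constant at pH 3

def half_life(k):
    """Half-life of a first-order decay with rate constant k."""
    return math.log(2) / k

t5 = half_life(k_ph5)   # roughly 7.7 hours
t3 = half_life(k_ph3)   # roughly 103 minutes
```

The factor-of-4.5 increase in k between pH 5 and pH 3 shortens the half-life by the same factor, consistent with acid-accelerated hydrolysis.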

  10. Thermal decomposition of uranyl sulphate hydrate

    International Nuclear Information System (INIS)

    Sato, T.; Ozawa, F.; Ikoma, S.

    1980-01-01

The thermal decomposition of uranyl sulphate hydrate (UO2SO4.3H2O) has been investigated by thermogravimetry, differential thermal analysis, X-ray diffraction and infrared spectrophotometry. As a result, it is concluded that uranyl sulphate hydrate decomposes thermally as: UO2SO4.3H2O → UO2SO4.xH2O (2.5 ≤ x < 3) → UO2SO4.2H2O → UO2SO4.H2O → UO2SO4 → α-UO2SO4 → β-UO2SO4 → U3O8. (author)
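
The thermogravimetric plateaus implied by this dehydration sequence follow from molar masses. A rough sketch using standard atomic weights, expressing each residue as a fraction of the initial trihydrate mass:

```python
# Approximate molar masses (g/mol), standard atomic weights.
U, S, O, H = 238.03, 32.06, 16.00, 1.008
H2O = 2 * H + O
UO2SO4 = U + 2 * O + S + 4 * O

start = UO2SO4 + 3 * H2O              # UO2SO4.3H2O, the starting material
steps = {
    "UO2SO4.2H2O": UO2SO4 + 2 * H2O,
    "UO2SO4.H2O": UO2SO4 + H2O,
    "UO2SO4": UO2SO4,
    "U3O8 (per U)": (3 * U + 8 * O) / 3,   # final oxide, normalized per uranium
}
residual = {name: m / start for name, m in steps.items()}
```

The anhydrous sulphate plateau sits near 87% of the initial mass, and the final U3O8 residue near 67%, which is the kind of step pattern thermogravimetry resolves.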

  11. Cellulose decomposition in a 50 MVA transformer

    International Nuclear Information System (INIS)

    Piechalak, B.W.

    1992-01-01

    Dissolved gas-in-oil analysis for carbon monoxide and carbon dioxide has been used for years to predict cellulose decomposition in a transformer. However, the levels at which these gases become significant have not been widely agreed upon. This paper evaluates the gas analysis results from the nitrogen blanket and the oil of a 50 MVA unit auxiliary transformer in terms of whether accelerated thermal breakdown or normal aging of the paper is occurring. Furthermore, this paper presents additional data on carbon monoxide and carbon dioxide levels in unit and system auxiliary transformers at generating stations and explains why their levels differ

  12. Observation of spinodal decomposition in nuclei?

    International Nuclear Information System (INIS)

    Guarnera, A.; Colonna, M.; Chomaz, Ph.

    1996-01-01

In the framework of the recently developed stochastic one-body descriptions it has been shown that the occurrence of nuclear multifragmentation by spinodal decomposition is characterized by typical size and time scales; in particular, the formation of nearly equal mass fragments is expected around Z=10. A first preliminary comparison of our predictions with experimental data for Xe + Cu at 45 MeV/A and for Xe + Sn at 50 MeV/A, recently measured by the Indra collaboration, is presented. The agreement of the results with the data seems to plead in favour of the possible occurrence of a first-order phase transition. (K.A.)

  13. Foreword - Acid decomposition of borosilicate ores

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Kurbonov, A.S.; Mamatov, E.D.

    2015-01-01

The elaboration and development of technology for the processing of mineral raw materials plays an important role in the industry of Tajikistan. The results of research by the staff of the Institute of Chemistry and the Nuclear and Radiation Safety Agency of the Republic of Tajikistan are presented in this monograph. The physicochemical and technological aspects of processing the borosilicate raw materials of the Ak-Arkhar Deposit of Tajikistan were considered. The necessary conditions for acid decomposition of the raw materials were defined. Flowsheets for the processing of boron raw materials were proposed.

  14. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N

    1992-01-01

This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course, and researchers needing to consult an authoritative source. It is the first book to give a unified and coherent exposition of multiresolution signal decomposition.

  15. Fringe pattern denoising via image decomposition.

    Science.gov (United States)

    Fu, Shujun; Zhang, Caiming

    2012-02-01

    Filtering off noise from a fringe pattern is one of the key tasks in optical interferometry. In this Letter, using some suitable function spaces to model different components of a fringe pattern, we propose a new fringe pattern denoising method based on image decomposition. In our method, a fringe image is divided into three parts: low-frequency fringe, high-frequency fringe, and noise, which are processed in different spaces. An adaptive threshold in wavelet shrinkage involved in this algorithm improves its denoising performance. Simulation and experimental results show that our algorithm obtains smooth and clean fringes with different frequencies while preserving fringe features effectively.
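
Wavelet shrinkage of the kind mentioned above can be sketched with a single-level Haar transform and the universal threshold. This is a generic 1-D illustration, not the authors' adaptive scheme or their multi-space fringe model:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: approximation and detail halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar(a, d):
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft(x, t):
    """Soft-thresholding (wavelet shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
n = 1024
t_axis = np.linspace(0, 1, n)
clean = np.cos(2 * np.pi * 8 * t_axis)        # smooth low-frequency "fringe"
noisy = clean + 0.3 * rng.standard_normal(n)

a, d = haar_step(noisy)
sigma = np.median(np.abs(d)) / 0.6745         # robust noise-level estimate
thresh = sigma * np.sqrt(2 * np.log(n))       # universal threshold
denoised = inverse_haar(a, soft(d, thresh))

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Shrinking the detail band removes roughly half the noise energy while leaving the smooth fringe essentially untouched; a multi-level transform would also clean the approximation band.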

  16. Memory effect and fast spinodal decomposition

    International Nuclear Information System (INIS)

    Koide, T.; Krein, G.; Ramos, Rudnei O.

    2007-01-01

We consider the modification of the Cahn-Hilliard equation when a time delay process through a memory function is taken into account. We then study the process of spinodal decomposition in fast phase transitions associated with a conserved order parameter. The introduced memory effect plays an important role in obtaining a finite group velocity. We then discuss the constraint the parameters must satisfy for causality to hold. The memory effect is seen to affect the dynamics of the phase transition at short times and to delay, in a significant way, the process of rapid growth of the order parameter that follows a quench into the spinodal region. (author)
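
For reference, the standard (memory-free) Cahn-Hilliard dynamics that the paper modifies can be stepped explicitly in one dimension. The sketch below omits the memory kernel and uses illustrative parameter values only:

```python
import numpy as np

def laplacian(u, dx):
    """Periodic second-difference Laplacian in one dimension."""
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

def cahn_hilliard_step(u, dt, dx, eps=1.0, mob=1.0):
    """One explicit Euler step of du/dt = mob * lap(u^3 - u - eps^2 * lap u)."""
    mu = u**3 - u - eps**2 * laplacian(u, dx)   # chemical potential
    return u + dt * mob * laplacian(mu, dx)

rng = np.random.default_rng(3)
n, dx, dt = 128, 1.0, 0.01
u = 0.01 * rng.standard_normal(n)   # quench: small perturbation of unstable u = 0

for _ in range(5000):               # integrate to t = 50
    u = cahn_hilliard_step(u, dt, dx)
# The conserved order parameter separates into domains near u = +/-1.
```

Because the update adds a discrete Laplacian of the chemical potential, the spatial mean of u is conserved, as required for a conserved order parameter; the memory-function version would replace the instantaneous flux with a convolution over past times.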

  17. Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films

    International Nuclear Information System (INIS)

    Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.

    2013-01-01

We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as powder decomposition; however, yttria and all intermediates except YF3 appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films.

  18. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  19. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  20. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

In this paper, when the azimuth direction of polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that, for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  1. A handbook of decomposition methods in analytical chemistry

    International Nuclear Information System (INIS)

    Bok, R.

    1984-01-01

Decomposition methods for metals, alloys, fluxes, slags, calcines, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials, from the viewpoint of sample preparation for analysis, are described. The methods are systematized according to the decomposition principle: thermal, with the use of electricity or irradiation, and dissolution with and without chemical reactions. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references.

  2. Crop residue decomposition in Minnesota biochar amended plots

    OpenAIRE

    S. L. Weyers; K. A. Spokas

    2014-01-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with ...

  3. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    International Nuclear Information System (INIS)

    Barnes, M.J.; Peterson, R.A.

    1998-04-01

The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. The concentrations of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. In addition, single-effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB. These tests showed the following. The testing demonstrates that the current facility configuration does not provide assured safety of operations relative to the hazards of benzene (in particular, maintaining the tank headspace below 60 percent of the lower flammability limit (LFL) for benzene generation rates greater than 7 mg/(L.h)) from possible accelerated reaction of excess NaTPB. The current maximal operating temperature of 40 degrees C and the lack of protection against palladium entering Tank 48H provide insufficient protection against the onset of the reaction. Similarly, control of the amount of excess NaTPB, purification of the organic, or limiting the benzene content of the slurry (via stirring) and the ionic strength of the waste mixture prove inadequate to assure safe operation.

  4. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    Energy Technology Data Exchange (ETDEWEB)

Barnes, M.J. [Westinghouse Savannah River Company, AIKEN, SC (United States); Peterson, R.A.

    1998-04-01

The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. The concentrations of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. In addition, single-effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB. These tests showed the following. The testing demonstrates that the current facility configuration does not provide assured safety of operations relative to the hazards of benzene (in particular, maintaining the tank headspace below 60 percent of the lower flammability limit (LFL) for benzene generation rates greater than 7 mg/(L.h)) from possible accelerated reaction of excess NaTPB. The current maximal operating temperature of 40 degrees C and the lack of protection against palladium entering Tank 48H provide insufficient protection against the onset of the reaction. Similarly, control of the amount of excess NaTPB, purification of the organic, or limiting the benzene content of the slurry (via stirring) and the ionic strength of the waste mixture prove inadequate to assure safe operation.

  5. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
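
As a plain Monte Carlo reference point for the first-order sensitivity indices that PDD computes analytically, the sketch below applies the Saltelli pick-freeze estimator to the standard Ishigami test function. This is not the PDD method itself, only the quantity it targets:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard global-sensitivity test function with known Sobol' indices."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(4)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.concatenate([fA, fB]).var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # replace only column i ("pick-freeze")
    fABi = ishigami(ABi)
    S.append(np.mean(fB * (fABi - fA)) / var)   # first-order Sobol' estimator
```

The estimates land near the known values S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0; the point of PDD (or polynomial chaos) is to reach this accuracy with far fewer function evaluations.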

  6. Formation and decomposition of ammoniated ammonium ions

    International Nuclear Information System (INIS)

    Ikezoe, Yasumasa; Suzuki, Kazuya; Nakashima, Mikio; Yokoyama, Atsushi; Shiraishi, Hirotsugu; Ohno, Shin-ichi

    1998-09-01

The structures, frequencies, and chemical reactions of ammoniated ammonium ions (NH4+.nNH3) were investigated theoretically by ab initio molecular orbital calculations and experimentally by observing their formation and decomposition in a corona discharge-jet expansion process. The ab initio calculations were carried out using the Gaussian 94 program, which gave optimized structures, binding energies and harmonic vibrational frequencies of NH4+.nNH3. The effects of the discharge current, the reactant gas and the diameter of the gas-expanding pinhole on the size (n) distribution of NH4+.nNH3 were examined. The results indicated that the cluster ions, in the jet expansion process, mostly grew in size by one unit or less under the experimental conditions employed. The effects of discharge current, pinhole diameter, flight time in vacuum and cluster size on the decomposition rate of the cluster ions formed were examined. In our experimental conditions, the internal energies of the cluster ions were mainly determined through the exo- and/or endothermic reactions involved in the cluster formation process. (author)

  7. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.

  8. DECOMPOSITION OF MANUFACTURING PROCESSES: A REVIEW

    Directory of Open Access Journals (Sweden)

    N.M.Z.N. Mohamed

    2012-06-01

    Full Text Available Manufacturing is a global activity that started during the industrial revolution in the late 19th century to cater for the large-scale production of products. Since then, manufacturing has changed tremendously through innovations in technology, processes, materials, communication and transportation. The major challenge facing manufacturing is to produce more products using less material, less energy and less involvement of labour. To face these challenges, manufacturing companies must have a strategy and competitive priority in order for them to compete in a dynamic market. A review of the literature on the decomposition of manufacturing processes outlines three main processes, namely: high volume, medium volume and low volume. The decomposition shows that each sub-process has its own characteristics and depends on the nature of the firm’s business. Two extreme processes are continuous line production (fast extreme) and project shop (slow extreme). Other processes lie between these two extremes of the manufacturing spectrum. Process flow patterns become less complex with cellular, line and continuous flow compared with jobbing and project. The review also indicates that when products are of high variety and low volume, project or functional production is applied.

  9. Finite Range Decomposition of Gaussian Processes

    CERN Document Server

    Brydges, C D; Mitter, P K

    2003-01-01

    Let $D$ be the finite difference Laplacian associated to the lattice $\mathbb{Z}^{d}$. For dimension $d \ge 3$, $a \ge 0$ and $L$ a sufficiently large positive dyadic integer, we prove that the integral kernel of the resolvent $G^{a}:=(a-D)^{-1}$ can be decomposed as an infinite sum of positive semi-definite functions $V_{n}$ of finite range, $V_{n}(x-y) = 0$ for $|x-y| \ge O(L)^{n}$. Equivalently, the Gaussian process on the lattice with covariance $G^{a}$ admits a decomposition into independent Gaussian processes with finite range covariances. For $a=0$, $V_{n}$ has a limiting scaling form $L^{-n(d-2)}\Gamma_{c,\ast}\bigl(\frac{x-y}{L^{n}}\bigr)$ as $n \rightarrow \infty$. As a corollary, such decompositions also exist for fractional powers $(-D)^{-\alpha/2}$, $0

  10. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor product.
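In the linear-algebra special case mentioned above, a k[X]-module is a vector space V together with a linear operator T (X acting as T), and primary decomposition splits V into null spaces of the coprime-power factors of a annihilating polynomial. A numpy sketch with a hand-factored polynomial (the factorization is assumed, not computed):

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis (as columns) of ker A, via SVD."""
    U, s, Vt = np.linalg.svd(A)
    return Vt[np.sum(s > tol):].T

# T acts on R^4; its characteristic polynomial factors as (X^2 + 1)(X - 2)^2
T = np.array([[0., -1., 0., 0.],
              [1.,  0., 0., 0.],
              [0.,  0., 2., 1.],
              [0.,  0., 0., 2.]])
I = np.eye(4)
V1 = nullspace(T @ T + I)                  # primary component for X^2 + 1
V2 = nullspace((T - 2*I) @ (T - 2*I))      # primary component for (X - 2)^2
```

Each component is T-invariant (since p(T) commutes with T), and together they span R^4, mirroring the direct-sum statement in the abstract.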

  11. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    SUMMARY: Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  12. Thermal decomposition of zirconium compounds with some aromatic hydroxycarboxylic acids

    Energy Technology Data Exchange (ETDEWEB)

    Koshel, A V; Malinko, L A; Karlysheva, K F; Sheka, I A; Shchepak, N I [AN Ukrainskoj SSR, Kiev. Inst. Obshchej i Neorganicheskoj Khimii

    1980-02-01

    Processes of thermal decomposition of different zirconium compounds with mandelic, para-bromomandelic, salicylic and sulphosalicylic acids were investigated by thermogravimetry. For identification of the decomposition products, the specimens were held at the temperatures of the thermal effects until constant weight was reached. IR spectra and X-ray diffraction patterns were taken, and elemental analysis of the decomposition products was carried out. It is shown that thermal decomposition of the investigated compounds proceeds in stages; the final product of thermolysis is ZrO 2 . Non-hydrolysed compounds are stable on heating in air up to 200-265 deg. Hydroxy compounds begin to decompose at lower temperatures (80-100 deg).

  13. Decomposition analysis of CO2 emissions from passenger cars: The cases of Greece and Denmark

    International Nuclear Information System (INIS)

    Papagiannaki, Katerina; Diakoulaki, Danae

    2009-01-01

    The paper presents a decomposition analysis of the changes in carbon dioxide (CO 2 ) emissions from passenger cars in Denmark and Greece, for the period 1990-2005. A time series analysis has been applied based on the logarithmic mean Divisia index I (LMDI I) methodology, which belongs to the wider family of index decomposition approaches. What makes road transport deserving of a detailed analysis is its remarkably rapid growth during the last decades, with a corresponding increase in emissions. Denmark and Greece have been selected on account of the contrasting socio-economic characteristics of these two small EU countries, as well as the availability of the detailed data used in the analysis. In both countries, passenger cars are responsible for half of the emissions from road transport as well as for their upward trend, which motivates a decomposition analysis focusing on exactly this segment of road transport. The factors examined in the present decomposition analysis are related to vehicle ownership, fuel mix, annual mileage, engine capacity and technology of cars. The comparison of the results discloses the differences in the transportation profiles of the two countries and reveals how they affect the trend of CO 2 emissions.
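The additive LMDI I formula distributes the total emission change exactly across multiplicative factors C = x_1 x_2 ... x_k, weighting each log-ratio by the logarithmic mean L(a, b) = (a - b)/(ln a - ln b). A minimal single-group sketch (the factor names are illustrative, not the paper's exact set):

```python
import math

def logmean(a, b):
    """Logarithmic mean; equals a when a == b."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(x0, xT):
    """x0, xT: dicts mapping factor name -> value at base/final year.
    Emissions C are the product of the factors. Returns each factor's
    additive contribution; the contributions sum exactly to C_T - C_0."""
    C0 = math.prod(x0.values())
    CT = math.prod(xT.values())
    w = logmean(CT, C0)
    return {k: w * math.log(xT[k] / x0[k]) for k in x0}

# e.g. C = cars * km_per_car * emission_factor (illustrative numbers)
effects = lmdi_additive({'cars': 2.0, 'km': 10.0, 'ef': 0.20},
                        {'cars': 3.0, 'km': 12.0, 'ef': 0.18})
```

The perfect-decomposition property (no residual term) is the main reason LMDI I is preferred over Laspeyres-type indices.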

  14. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    International Nuclear Information System (INIS)

    Kowal, Grzegorz; Lazarian, A.

    2010-01-01

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.

  15. Managing Soil Biota-Mediated Decomposition and Nutrient Mineralization in Sustainable Agroecosystems

    Directory of Open Access Journals (Sweden)

    Joann K. Whalen

    2014-01-01

    Full Text Available Transformation of organic residues into plant-available nutrients occurs through decomposition and mineralization and is mediated by saprophytic microorganisms and fauna. Of particular interest is the recycling of the essential plant elements—N, P, and S—contained in organic residues. If organic residues can supply sufficient nutrients during crop growth, a reduction in fertilizer use is possible. The challenge is synchronizing nutrient release from organic residues with crop nutrient demands throughout the growing season. This paper presents a conceptual model describing the pattern of nutrient release from organic residues in relation to crop nutrient uptake. Next, it explores experimental approaches to measure the physical, chemical, and biological barriers to decomposition and nutrient mineralization. Methods are proposed to determine the rates of decomposition and nutrient release from organic residues. Practically, this information can be used by agricultural producers to determine if plant-available nutrient supply is sufficient to meet crop demands at key growth stages or whether additional fertilizer is needed. Finally, agronomic practices that control the rate of soil biota-mediated decomposition and mineralization, as well as those that facilitate uptake of plant-available nutrients, are identified. Increasing reliance on soil biological activity could benefit crop nutrition and health in sustainable agroecosystems.
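The synchrony idea above is often formalized with a first-order decay model (an illustrative convention, not a method prescribed by the paper): cumulative nutrient release N(t) = N_tot (1 - e^(-kt)), compared against crop demand at key growth stages to decide whether supplementary fertilizer is needed.

```python
import numpy as np

def cumulative_release(n_total, k, t_days):
    """Nutrient released (e.g. kg N/ha) by day t under a first-order
    decomposition rate constant k (1/day)."""
    return n_total * (1.0 - np.exp(-k * np.asarray(t_days, float)))

def supply_deficit(n_total, k, demand, t_days):
    """Positive entries: fertilizer needed to cover crop demand at those stages."""
    return np.maximum(np.asarray(demand, float)
                      - cumulative_release(n_total, k, t_days), 0.0)
```

For example, with 100 kg/ha mineralizable N and k = 0.05/day, release covers an early-stage demand of 10 kg/ha but falls slightly short of an 80 kg/ha demand at day 30.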

  16. Time space domain decomposition methods for reactive transport - Application to CO2 geological storage

    International Nuclear Information System (INIS)

    Haeberlein, F.

    2011-01-01

    Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods, which have proved to be high-performing algorithms in many fields of application, is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two-species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method to the presented multi-species reactive transport system. (author)
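A minimal alternating (multiplicative) Schwarz iteration for the steady problem -u'' = f on [0,1] with two overlapping subdomains conveys the structure that the optimised waveform-relaxation variants above refine. Grid size, overlap and sweep count below are arbitrary illustrative choices:

```python
import numpy as np

def solve_subdomain(u, f, lo, hi, h):
    """Dirichlet solve of -u'' = f on interior nodes lo..hi; boundary data
    is taken from the current global iterate u at lo-1 and hi+1."""
    m = hi - lo + 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[lo:hi + 1].copy()
    b[0] += u[lo - 1] / h**2
    b[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, b)

def schwarz(f_func, n=101, overlap=10, sweeps=50):
    """Two-subdomain alternating Schwarz for -u'' = f, u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = f_func(x)
    u = np.zeros(n)
    mid = n // 2
    for _ in range(sweeps):
        solve_subdomain(u, f, 1, mid + overlap, h)        # left subdomain
        solve_subdomain(u, f, mid - overlap, n - 2, h)    # right subdomain
    return x, u
```

Each sweep exchanges Dirichlet traces across the overlap; optimised Schwarz methods replace these Dirichlet transmission conditions with Robin-type ones to accelerate convergence.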

  17. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Elizondo, Marcelo A.; Samaan, Nader A.

    2017-10-19

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
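The bus-clustering idea can be sketched with classical MDS (eigendecomposition of the double-centered squared-distance matrix) followed by k-means on the embedded coordinates. The distance matrix here is synthetic; in the paper it would be the matrix of electrical distances between buses:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Euclidean coordinates from a symmetric distance matrix (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]        # keep the largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def kmeans(X, k, iters=50, init=None, seed=0):
    """Plain Lloyd iterations on the embedded coordinates."""
    rng = np.random.default_rng(seed)
    C = (np.array(init, dtype=float) if init is not None
         else X[rng.choice(len(X), size=k, replace=False)])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels
```

Buses that are electrically close end up in the same cluster, which is then treated as one voltage-control subsystem.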

  18. Hydrothermal decomposition of actinide(IV) oxalates: a new aqueous route towards reactive actinide oxide nanocrystals

    Directory of Open Access Journals (Sweden)

    Walter Olaf

    2016-01-01

    Full Text Available The hydrothermal decomposition of actinide(IV) oxalates (An = Th, U, Pu) at temperatures between 95 and 250 °C is shown to lead to the production of highly crystalline, reactive actinide oxide nanocrystals (NCs). This aqueous process proved to be quantitative, reproducible and fast (depending on temperature). The NCs obtained were characterised by X-ray diffraction and TEM showing their size to be smaller than 15 nm. Attempts to extend this general approach towards transition metal or lanthanide oxalates failed in the 95–250 °C temperature range. The hydrothermal decomposition of actinide oxalates is therefore a clean, flexible and powerful approach towards NCs of AnO2 with possible scale-up potential.

  19. A low-dimensional tool for predicting force decomposition coefficients for varying inflow conditions

    KAUST Repository

    Ghommem, Mehdi; Akhtar, Imran; Hajj, M. R.

    2013-01-01

    We develop a low-dimensional tool to predict the effects of unsteadiness in the inflow on force coefficients acting on a circular cylinder using proper orthogonal decomposition (POD) modes from steady flow simulations. The approach is based on combining POD and linear stochastic estimator (LSE) techniques. We use POD to derive a reduced-order model (ROM) to reconstruct the velocity field. To overcome the difficulty of developing a ROM using Poisson's equation, we relate the pressure field to the velocity field through a mapping function based on LSE. The use of this approach to derive force decomposition coefficients (FDCs) under unsteady mean flow from basis functions of the steady flow is illustrated. For both steady and unsteady cases, the final outcome is a representation of the lift and drag coefficients in terms of velocity and pressure temporal coefficients. Such a representation could serve as the basis for implementing control strategies or conducting uncertainty quantification. Copyright © 2013 Inderscience Enterprises Ltd.
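The POD step can be sketched as an SVD of the mean-subtracted snapshot matrix; the LSE mapping from velocity to pressure used in the paper is omitted here. Columns of U are the spatial modes and the rows of the coefficient matrix are the temporal coefficients a_k(t) mentioned above:

```python
import numpy as np

def pod(snapshots, r):
    """snapshots: (n_points, n_times). Returns the mean field, the first r
    spatial POD modes, and temporal coefficients a_k(t) so that
    snapshots ~ mean + modes @ coeffs."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, U[:, :r], s[:r, None] * Vt[:r]
```

Because the singular values decay rapidly for coherent flows, a handful of modes typically captures most of the fluctuation energy, which is what makes the reduced-order model low-dimensional.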

  20. Single interval longwave radiation scheme based on the net exchanged rate decomposition with bracketing

    Czech Academy of Sciences Publication Activity Database

    Geleyn, J.- F.; Mašek, Jan; Brožková, Radmila; Kuma, P.; Degrauwe, D.; Hello, G.; Pristov, N.

    2017-01-01

    Roč. 143, č. 704 (2017), s. 1313-1335 ISSN 0035-9009 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:86652079 Keywords : numerical weather prediction * climate models * clouds * parameterization * atmospheres * formulation * absorption * scattering * accurate * database * longwave radiative transfer * broadband approach * idealized optical paths * net exchanged rate decomposition * bracketing * selective intermittency Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.444, year: 2016

  1. High-purity Cu nanocrystal synthesis by a dynamic decomposition method

    OpenAIRE

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-01-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. The process is investigated through a combined experimental and computational approach. The decomposition kinetics is researched via differential sca...

  2. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    Science.gov (United States)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
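One widely used single-term building block (not one of the specific schemes compared in the paper) is the implicit Grünwald-Letnikov discretization of the Caputo derivative, applied here to D^alpha y = lambda y with the initial value handled by differencing y - y0. For alpha = 1 it reduces to backward Euler:

```python
import numpy as np

def gl_solve(alpha, lam, y0, T, N):
    """Implicit Gruenwald-Letnikov scheme for Caputo D^alpha y = lam * y,
    y(0) = y0, on N uniform steps up to time T."""
    h = T / N
    # GL weights w_k = (-1)^k * binom(alpha, k), via the standard recursion
    w = np.empty(N + 1)
    w[0] = 1.0
    for k in range(1, N + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    y = np.empty(N + 1)
    y[0] = y0
    c = h ** (-alpha)
    for n in range(1, N + 1):
        hist = np.dot(w[1:n + 1], y[n - 1::-1] - y0)   # memory term
        y[n] = (c * y0 - c * hist) / (c - lam)
    return y
```

The full-history sum is the "memory" that makes fractional problems expensive, and is one motivation for the system-conversion schemes the abstract compares.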

  3. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in waste water, and the effects of water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, were studied. The results show that anionic surfactants like sodium dodecyl sulfate (SDS) improve the radiation decomposition yield of ortho-chlorophenol, while cationic surfactants like cetyl trimethylammonium chloride (CTAC) improve the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to those studied. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols, whereas the radiation decomposition yield increased when surfactant concentrations were above the CMC. Decomposition was more marked for aromatic alcohols than for linear alcohols. In a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, the hydroxyl radical and other reactive species formed in water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol such as ortho-chlorophenol contained an additional chlorine atom, its decomposition remained almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols.
The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, like in advanced oxidation processes, or in combined treatment such as

  4. One-Channel Surface Electromyography Decomposition for Muscle Force Estimation

    Directory of Open Access Journals (Sweden)

    Wentao Sun

    2018-05-01

    Full Text Available Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.
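The basis-learning step can be approximated with a plain symmetric FastICA, used here as a dependency-free stand-in for the reconstruction ICA of the paper, and demonstrated on a synthetic two-source mixture rather than real sEMG:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity. X: (n_signals, n_samples).
    Returns estimated independent sources (up to scale and permutation)."""
    n, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # whitening via eigendecomposition of the covariance
    w, E = np.linalg.eigh(np.cov(Xc))
    Z = E @ np.diag(1.0 / np.sqrt(w)) @ E.T @ Xc

    def decorrelate(W):
        s, U = np.linalg.eigh(W @ W.T)
        return U @ np.diag(1.0 / np.sqrt(s)) @ U.T @ W

    rng = np.random.default_rng(seed)
    W = decorrelate(rng.normal(size=(n, n)))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W = decorrelate(G @ Z.T / m - np.diag((1.0 - G**2).mean(axis=1)) @ W)
    return W @ Z
```

For one-channel sEMG, the paper applies such basis learning to windows of the single signal; the spike-extraction step that isolates MUAP shapes from the basis vectors is not shown here.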

  5. Spectral decomposition in advection-diffusion analysis by finite element methods

    International Nuclear Information System (INIS)

    Nickell, R.E.; Gartling, D.K.; Strang, G.

    1978-01-01

    In a recent study of the convergence properties of finite element methods in nonlinear fluid mechanics, an indirect approach was taken. A two-dimensional example with a known exact solution was chosen as the vehicle for the study, and various mesh refinements were tested in an attempt to extract information on the effect of the local Reynolds number. However, more direct approaches are usually preferred. In this study one such direct approach is followed, based upon the spectral decomposition of the solution operator. Spectral decomposition is widely employed as a solution technique for linear structural dynamics problems and can be applied readily to linear, transient heat transfer analysis; in this case, the extension to nonlinear problems is of interest. It was shown previously that spectral techniques were applicable to stiff systems of rate equations, while recent studies of geometrically and materially nonlinear structural dynamics have demonstrated the increased information content of the numerical results. The use of spectral decomposition in nonlinear problems of heat and mass transfer would be expected to yield equally increased flow of information to the analyst, and this information could include a quantitative comparison of various solution strategies, meshes, and element hierarchies
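For a linear semi-discrete system u' = -K u (e.g. from a spatial discretization of transient heat conduction), spectral decomposition gives the solution mode by mode in closed form; a sketch for symmetric K (the nonlinear extension discussed above would update this modal basis along the solution):

```python
import numpy as np

def modal_solution(K, u0, t):
    """u(t) = V exp(-Lambda t) V^T u0 for symmetric positive semi-definite K."""
    lam, V = np.linalg.eigh(K)
    return V @ (np.exp(-lam * t) * (V.T @ u0))
```

Each eigenpair contributes an independently decaying mode, which is also what makes the spectrum a diagnostic tool for comparing meshes and element hierarchies.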

  6. On practical challenges of decomposition-based hybrid forecasting algorithms for wind speed and solar irradiation

    International Nuclear Information System (INIS)

    Wang, Yamin; Wu, Lei

    2016-01-01

    This paper presents a comprehensive analysis on practical challenges of empirical mode decomposition (EMD) based algorithms on wind speed and solar irradiation forecasts that have been largely neglected in literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training forecasting models. In turn, forecasting models established by original sub-series may not be suitable for newly decomposed sub-series and have to be trained more frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods could be worse than that of the non-decomposition based forecasting model, and that such methods are not effective in practical cases. Finally, the approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and the non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
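Challenge (1) is a boundary effect: the decomposition near the end of the series changes once new samples arrive. EMD itself needs envelope interpolation, so to keep the sketch dependency-free we illustrate the same sensitivity with a toy two-component (trend + residual) decomposition, which suffers the identical end-of-series problem:

```python
import numpy as np

def decompose(x, win=21):
    """Toy two-component split: centered moving-average 'trend' + residual.
    (A stand-in for EMD that shares its boundary sensitivity.)"""
    trend = np.convolve(x, np.ones(win) / win, mode='same')
    return trend, x - trend

t = np.arange(220)
x = np.sin(0.07 * t) + 0.1 * np.cos(0.9 * t)

trend_old, _ = decompose(x[:200])    # sub-series the forecasting model was trained on
trend_new, _ = decompose(x[:210])    # ten new samples arrive; decompose again

half = 21 // 2
interior_unchanged = np.allclose(trend_old[:200 - half], trend_new[:200 - half])
tail_shift = np.max(np.abs(trend_old[200 - half:] - trend_new[200 - half:200]))
```

The interior of the sub-series is reproduced exactly, but the trailing window shifts, so a model fitted to the old sub-series sees different inputs at exactly the points that matter for the next forecast.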

  7. Photochemical decomposition of Formaldehyde in solution

    International Nuclear Information System (INIS)

    Garrido Z, G.

    1995-01-01

    In this work the effect of ultraviolet radiation produced by a low-pressure mercury lamp on solutions of formaldehyde was studied. These solutions were exposed to ultraviolet rays for different times. To some of these series of solutions a photosensitizer was added in order to obtain a higher photodecomposition of formaldehyde. The techniques used to determine the decomposition products were the following: 1. The Hantzsch and 2,4-dinitrophenylhydrazine methods, to measure the residual formaldehyde and glyoxal. 2. pH measurements of the solutions, before and after exposure. 3. Paper chromatography, to determine the presence of the acids formed. 4. Acid-base titrations, to measure total acidification. We observed that as the time of exposure to UV rays increased, the photodecomposition of formaldehyde increased and, in addition, a greater quantity of other products was formed. Of the reagents used as photosensitizers, the best results were obtained with the ruthenium reagent. (Author)

  8. Introduction - Acid decomposition of borosilicate ores

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Kurbonov, A.S.; Mamatov, E.D.

    2015-01-01

    The complex processing of mineral raw materials is an effective way to extract valuable components. One such raw material is borosilicate ore, from which boric acid, aluminium and iron salts, and building materials can be obtained. In the Institute of Chemistry of the Academy of Sciences of the Republic of Tajikistan, flowsheets for the processing of borosilicate raw materials by acid and chlorine methods were elaborated. The acid methods of decomposition of the borosilicate ores of the Ak-Arkhar Deposit are considered in the present monograph. The research carried out on the physicochemical aspects and technological acid methods made it possible to define the optimal ways of extracting valuable products from the borosilicate raw materials of Tajikistan.

  9. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
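The rank-1 core of the idea can be sketched as alternating least squares with soft-thresholding on each singular vector (penalty levels are illustrative assumptions; the paper's algorithm also adaptively weights the penalties):

```python
import numpy as np

def sparse_rank1(X, lam_u=0.1, lam_v=0.1, iters=100):
    """Rank-1 sparse SVD: alternately regress, soft-threshold, and renormalize
    the left and right singular vectors so that zeros appear in both."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v = vt[0]                                   # warm start from the plain SVD
    u = np.zeros(X.shape[0])
    soft = lambda a, l: np.sign(a) * np.maximum(np.abs(a) - l, 0.0)
    for _ in range(iters):
        u = soft(X @ v, lam_u)                  # sparse left vector
        nu = np.linalg.norm(u)
        if nu == 0.0:
            break
        u /= nu
        v = soft(X.T @ u, lam_v)                # sparse right vector
        nv = np.linalg.norm(v)
        if nv == 0.0:
            break
        v /= nv
    d = u @ X @ v
    return d, u, v
```

The nonzero patterns of u and v jointly select a submatrix, i.e. a bicluster; further layers are obtained by deflating X by d u v^T and repeating.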

  10. Covariant Conformal Decomposition of Einstein Equations

    Science.gov (United States)

    Gourgoulhon, E.; Novak, J.

    It has been shown1,2 that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations3,4. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-``metric'' (scaled by the determinant of the usual 3-metric) which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this ``metric'', of the conformal extrinsic curvature and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York5) is used in this formalism and some examples of formulation of Einstein's equations are shown.
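Writing $\gamma := \det(\gamma_{ij})$ for the determinant of the usual 3-metric, the conformal 3-"metric" described above can be made explicit (a standard convention consistent with the description; shown here for concreteness):

```latex
\tilde{\gamma}_{ij} := \gamma^{-1/3}\,\gamma_{ij},
\qquad
\det\bigl(\tilde{\gamma}_{ij}\bigr) = 1 .
```

Since $\gamma$ is a scalar density of weight $+2$, the factor $\gamma^{-1/3}$ carries weight $-2/3$, which is why $\tilde{\gamma}_{ij}$ is a tensor density of weight $-2/3$ as stated.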

  11. Faddeev wave function decomposition using bipolar harmonics

    International Nuclear Information System (INIS)

    Friar, J.L.; Tomusiak, E.L.; Gibson, B.F.; Payne, G.L.

    1981-01-01

    The standard partial wave (channel) representation for the Faddeev solution to the Schroedinger equation for the ground state of 3 nucleons is written in terms of functions which couple the interacting pair and spectator angular momenta to give S, P, and D waves. For each such coupling there are three terms, one for each of the three cyclic permutations of the nucleon coordinates. A series of spherical harmonic identities is developed which allows writing the Faddeev solution in terms of a basis set of 5 bipolar harmonics: 1 for S waves; 1 for P waves; and 3 for D waves. The choice of a D-wave basis is largely arbitrary, and specific choices correspond to the decomposition schemes of Derrick and Blatt, Sachs, Gibson and Schiff, and Bolsterli and Jezak. The bipolar harmonic form greatly simplifies applications which utilize the wave function, and we specifically discuss the isoscalar charge (or mass) density and the 3 He Coulomb energy

  12. Nanoscale decomposition of Nb-Ru-O

    Science.gov (United States)

    Music, Denis; Geyer, Richard W.; Chen, Yen-Ting

    2016-11-01

    A correlative theoretical and experimental methodology has been employed to explore the decomposition of amorphous Nb-Ru-O at elevated temperatures. Density functional theory based molecular dynamics simulations reveal that amorphous Nb-Ru-O is structurally modified within 10 ps at 800 K, giving rise to an increase in the planar metal-oxygen and metal-metal population and hence the formation of large clusters, which signifies atomic segregation. The driving force for this atomic segregation process is 0.5 eV/atom. This is validated by diffraction experiments and transmission electron microscopy of sputter-synthesized Nb-Ru-O thin films. Room temperature samples are amorphous, while at 800 K nanoscale rutile RuO2 grains, self-organized in an amorphous Nb-O matrix, are observed, which is consistent with our theoretical predictions. This amorphous/crystalline interplay may be of importance for the next generation of thermoelectric devices.

  13. Domain decomposition and multilevel integration for fermions

    International Nuclear Information System (INIS)

    Ce, Marco; Giusti, Leonardo; Schaefer, Stefan

    2016-01-01

    The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximated domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.

  14. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems encountered in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating-point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
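
As a concrete illustration of the conjugate gradient iteration described in the abstract above, here is a minimal serial sketch (not the paper's parallel, domain-decomposed implementation); the 1D Poisson matrix is an assumed example of a PDE-derived system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual
    p = r.copy()                         # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson finite-difference matrix, a typical PDE-derived test system
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```

In the paper's setting, domain decomposition would partition the unknowns among processors and precondition this iteration with local subdomain solves; only the basic Krylov loop is shown here.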

  15. Solution of the porous media equation by Adomian's decomposition method

    International Nuclear Information System (INIS)

    Pamuk, Serdal

    2005-01-01

    The particular exact solutions of the porous media equation, which usually occurs in nonlinear problems of heat and mass transfer and in biological systems, are obtained using Adomian's decomposition method. Numerical comparison of the particular solutions in the decomposition method indicates very good agreement between the numerical solutions and the particular exact solutions in terms of efficiency and accuracy.
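
A minimal pure-Python sketch of Adomian's decomposition idea on a toy problem (u' = u**2, u(0) = 1, whose exact solution is 1/(1 - t)); this example is illustrative only and is not the porous media equation treated in the paper:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists [c0, c1, ...]."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_int(p):
    """Integrate from 0 to t: c * t**k  ->  c * t**(k+1) / (k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

# Adomian series for u' = u**2, u(0) = 1; for the nonlinearity N(u) = u**2
# the Adomian polynomials reduce to the convolution A_n = sum_{i+j=n} u_i * u_j.
n_terms = 6
u = [[1.0]]                              # u_0 from the initial condition
for n in range(n_terms - 1):
    A_n = [0.0]
    for i in range(n + 1):
        A_n = poly_add(A_n, poly_mul(u[i], u[n - i]))
    u.append(poly_int(A_n))              # u_{n+1} = integral of A_n

series = [0.0] * n_terms
for term in u:
    series = poly_add(series, term)      # partial sum of the decomposition
```

The computed coefficients are those of 1 + t + t**2 + ..., the Taylor series of the exact solution 1/(1 - t); comparing such partial sums against particular exact solutions is the kind of agreement the abstract reports.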

  16. Focal decompositions for linear differential equations of the second order

    Directory of Open Access Journals (Sweden)

    L. Birbrair

    2003-01-01

    two-points problems to itself such that the image of the focal decomposition associated to the first equation is a focal decomposition associated to the second one. In this paper, we present a complete classification for linear second-order equations with respect to this equivalence relation.

  17. Three-dimensional decomposition models for carbon productivity

    International Nuclear Information System (INIS)

    Meng, Ming; Niu, Dongxiao

    2012-01-01

    This paper presents decomposition models for the change in carbon productivity, which is considered a key indicator that reflects the contributions to the control of greenhouse gases. The carbon productivity differential was used as the starting point of the decomposition. After integrating the differential equation and designing the Log Mean Divisia Index equations, a three-dimensional absolute decomposition model for carbon productivity was derived. Using this model, the absolute change of carbon productivity was decomposed into a summation of the absolute quantitative influences of each industrial sector, for each influence factor (technological innovation and industrial structure adjustment) in each year. Furthermore, the relative decomposition model was built using a similar process. Finally, these models were applied to demonstrate the decomposition process in China. The decomposition results reveal several important conclusions: (a) technological innovation plays a far more important role than industrial structure adjustment; (b) industry and export trade exhibit great influence; (c) assigning the responsibility for CO2 emission control to local governments, optimizing the structure of exports, and eliminating backward industrial capacity are highly essential to further increase China's carbon productivity. -- Highlights: ► Using the change of carbon productivity to measure a country's contribution. ► Absolute and relative decomposition models for carbon productivity are built. ► The change is decomposed to the quantitative influence of three dimensions. ► Decomposition results can be used for improving a country's carbon productivity.
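
The Log Mean Divisia Index step mentioned above can be sketched as follows. The two-sector numbers are hypothetical, and the factorization V = sum_i x_i * y_i (structure times productivity) is a simplified two-factor stand-in for the paper's three-dimensional model:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / ln(a / b), with L(a, a) = a."""
    return np.where(np.isclose(a, b), a, (a - b) / np.log(a / b))

def lmdi_additive(x0, y0, xT, yT):
    """Additive LMDI-I decomposition of the change in V = sum_i x_i * y_i
    into a structure effect (x) and a productivity effect (y)."""
    w = logmean(xT * yT, x0 * y0)              # sectoral log-mean weights
    return (float((w * np.log(xT / x0)).sum()),
            float((w * np.log(yT / y0)).sum()))

# hypothetical two-sector data: x = activity share, y = carbon productivity
x0, y0 = np.array([0.6, 0.4]), np.array([1.0, 2.0])
xT, yT = np.array([0.55, 0.45]), np.array([1.2, 2.5])
dx, dy = lmdi_additive(x0, y0, xT, yT)
# the two effects sum exactly to the total change in V (the LMDI identity)
```

The identity L(a, b) * ln(a / b) = a - b makes the decomposition exact with no residual, which is the usual reason LMDI is preferred in this literature.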

  18. Total Decomposition of Environmental Radionuclide Samples with a Microwave Oven

    International Nuclear Information System (INIS)

    Ramon Garcia, Bernd Kahn

    1998-01-01

    Closed-vessel microwave-assisted acid decomposition was investigated as an alternative to traditional methods of sample dissolution/decomposition. This technique, used in analytical chemistry, has some potential advantages over other procedures: it requires fewer reagents, it is faster, and it has the potential of achieving total dissolution because of the higher temperatures and pressures.

  19. Multi hollow needle to plate plasmachemical reactor for pollutant decomposition

    International Nuclear Information System (INIS)

    Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.

    2001-01-01

    Modification of the classical multipin to plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor a mixture of air and pollutant flows through the needles, contrary to the classical reactor where a mixture of air and pollutant flows around the pins or through the channel plus through the hollow needles. We give the results of comparison of toluene decomposition efficiency for (a) a reactor with the main stream of a mixture through the channel around the needles and a small flow rate through the needles and (b) a modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency of toluene was increased more than six times in the modified reactor. This new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons from gasoline distillation range. An average efficiency of VOC decomposition of about 25% was reached. However, significant differences in the decomposition of various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl-tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the quality of VOC decomposition. (author)

  20. Decomposition characteristics of maize (Zea mays L.) straw with ...

    African Journals Online (AJOL)

    Decomposition of maize straw incorporated into soil with various nitrogen amended carbon to nitrogen (C/N) ratios under a range of moisture was studied through a laboratory incubation trial. The experiment was set up to simulate the most suitable C/N ratio for straw carbon (C) decomposition and sequestering in the soil.

  1. Thermal decomposition of 2-methylbenzoates of rare earth elements

    International Nuclear Information System (INIS)

    Brzyska, W.; Szubartowski, L.

    1980-01-01

    The conditions of thermal decomposition of La, Ce(III), Pr, Nd, Sm and Y 2-methylbenzoates were examined. On the basis of the obtained results it was found that the hydrated 2-methylbenzoates undergo dehydration to anhydrous salts, which then decompose into oxides. The activation energy of the dehydration and decomposition reactions of the lanthanon, La and Y 2-methylbenzoates was determined. (author)

  2. Interacting effects of insects and flooding on wood decomposition.

    Science.gov (United States)

    Michael Ulyshen

    2014-01-01

    Saproxylic arthropods are thought to play an important role in wood decomposition but very few efforts have been made to quantify their contributions to the process and the factors controlling their activities are not well understood. In the current study, mesh exclusion bags were used to quantify how arthropods affect loblolly pine (Pinus taeda L.) decomposition rates...

  3. Doob's decomposition of set-valued submartingales via ordered ...

    African Journals Online (AJOL)

    We use ideas from measure-free martingale theory and Rådström's completion of a near vector space to derive a Doob decomposition of submartingales in ordered near vector spaces. As special cases thereof, we obtain the Doob decomposition of set-valued submartingales, as noted by Daures, Ni and Zhang, and an ...

  4. On reliability of singular-value decomposition in attractor reconstruction

    International Nuclear Information System (INIS)

    Palus, M.; Dvorak, I.

    1990-12-01

    Applicability of singular-value decomposition for reconstructing the strange attractor from one-dimensional chaotic time series, proposed by Broomhead and King, is extensively tested and discussed. Previously published doubts about its reliability are confirmed: singular-value decomposition, by nature a linear method, is only of a limited power when nonlinear structures are studied. (author). 29 refs, 9 figs
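
A minimal sketch of the Broomhead-King construction being tested above: embed a scalar series into a trajectory matrix and inspect its singular-value spectrum (the noisy sine and window length are assumed for illustration):

```python
import numpy as np

def trajectory_matrix(x, window):
    """Broomhead-King trajectory matrix: rows are delay vectors of length `window`."""
    n = len(x) - window + 1
    return np.array([x[i:i + window] for i in range(n)])

# noisy sine: a clean test case where the linear method works well
t = np.linspace(0.0, 40.0 * np.pi, 4000)
x = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)

X = trajectory_matrix(x, window=20)
s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
ratio = s[2] / s[0]                      # sharp drop after two directions for a sine
```

For this linear signal the spectrum cleanly separates two signal directions from noise; the abstract's caveat is precisely that such a linear criterion can mislead when the underlying attractor is genuinely nonlinear.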

  5. Identifying key nodes in multilayer networks based on tensor decomposition.

    Science.gov (United States)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
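
A rank-1 flavor of the CP decomposition underlying EDCPTD can be sketched with a higher-order power iteration; the toy 3-way "node x node x layer" tensor and the scoring rule below are illustrative assumptions, not the paper's fourth-order formulation:

```python
import numpy as np

def rank1_cp(T, iters=100, seed=1):
    """Rank-1 CP approximation of a 3-way tensor via alternating
    (higher-order power) iteration: T ~ lam * a (x) b (x) c with unit a, b, c."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.random(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

# toy tensor with known structure: node 0 carries the largest weight
u = np.array([3.0, 2.0, 1.0])            # node weights
w = np.array([1.0, 0.5])                 # layer weights
T = np.einsum('i,j,k->ijk', u, u, w)
lam, a, b, c = rank1_cp(T)
scores = np.abs(a)                       # node importance from the first factor
```

Reading node importance off the dominant CP factors is the basic mechanism; the paper's EDCPTD centrality combines several such factor vectors for a fourth-order tensor.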

  6. An efficient and accurate decomposition of the Fermi operator.

    Science.gov (United States)

    Ceriotti, Michele; Kühne, Thomas D; Parrinello, Michele

    2008-07-14

    We present a method to compute the Fermi function of the Hamiltonian for a system of independent fermions based on an exact decomposition of the grand-canonical potential. This scheme does not rely on the localization of the orbitals and is insensitive to ill-conditioned Hamiltonians. It lends itself naturally to linear scaling as soon as the sparsity of the system's density matrix is exploited. By using a combination of polynomial expansion and Newton-like iterative techniques, an arbitrarily large number of terms can be employed in the expansion, overcoming some of the difficulties encountered in previous papers. Moreover, this hybrid approach allows us to obtain a very favorable scaling of the computational cost with increasing inverse temperature, which makes the method competitive with other Fermi operator expansion techniques. After performing an in-depth theoretical analysis of computational cost and accuracy, we test our approach on the density functional theory Hamiltonian for the metallic phase of the LiAl alloy.
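
For reference, the object being approximated above is the matrix Fermi function. A diagonalization-based evaluation, which the paper's expansion deliberately avoids but which serves as a check on small systems, can be sketched as follows (the random symmetric Hamiltonian is an assumed toy, not the LiAl system):

```python
import numpy as np

def fermi_operator(H, mu, beta):
    """Fermi-Dirac function of a symmetric Hamiltonian,
    f(H) = (1 + exp(beta * (H - mu)))**-1, evaluated by diagonalization."""
    w, V = np.linalg.eigh(H)
    f = 1.0 / (1.0 + np.exp(beta * (w - mu)))
    return (V * f) @ V.T                 # V diag(f) V^T

# small random symmetric "Hamiltonian" (illustrative only)
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
H = (B + B.T) / 2.0
F = fermi_operator(H, mu=0.0, beta=5.0)
n_elec = float(np.trace(F))              # particle number at this chemical potential
```

The decomposition in the paper reproduces this F without diagonalizing H, using polynomial expansion plus Newton-like iterations so that sparsity can be exploited for linear scaling.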

  7. Estimating the decomposition of predictive information in multivariate systems

    Science.gov (United States)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of one system under investigation can be explained in terms of the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer, computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality by employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate for the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
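
A much-simplified stand-in for the information-transfer term discussed above: for jointly Gaussian (linear) processes, transfer entropy reduces to half the log-ratio of prediction residual variances. The coupled AR(1) pair below is an assumed example; the paper's estimator is model-free, nearest-neighbor based, and uses nonuniform embedding rather than this fixed-lag regression:

```python
import numpy as np

def gaussian_te(source, target, lag=1):
    """Transfer entropy source -> target for jointly Gaussian data:
    TE = 0.5 * ln(var_restricted / var_full), comparing prediction of the
    target from its own past vs. its own past plus the source's past."""
    y, yp, xp = target[lag:], target[:-lag], source[:-lag]

    def resid_var(preds):
        A = np.column_stack(preds + [np.ones_like(y)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return (y - A @ beta).var()

    return 0.5 * np.log(resid_var([yp]) / resid_var([yp, xp]))

# coupled pair: x drives y with coefficient 0.8, y does not drive x
rng = np.random.default_rng(2)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

te_xy = gaussian_te(x, y)   # substantial: x's past improves prediction of y
te_yx = gaussian_te(y, x)   # near zero: x is autonomous
```

For Gaussian processes this quantity coincides with (half) the Granger causality; the paper's nearest-neighbor machinery exists precisely to estimate the same decomposition without the linear-Gaussian assumption.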

  8. Nitrogen deposition does not enhance Sphagnum decomposition.

    Science.gov (United States)

    Manninen, S; Kivimäki, S; Leith, I D; Leeson, S R; Sheppard, L J

    2016-11-15

    Long-term additions of nitrogen (N) to peatlands have altered bryophyte growth, species dominance, N content in peat and peat water, and often resulted in enhanced Sphagnum decomposition rates. However, these results have mainly been derived from experiments in which N was applied as ammonium nitrate (NH4NO3), neglecting the fact that in polluted areas, wet deposition may be dominated either by NO3- or NH4+. We studied the effects of elevated wet deposition of NO3- vs. NH4+ alone (8 or 56 kg N ha-1 yr-1 over and above the background of 8 kg N ha-1 yr-1 for 5 to 11 years) or combined with phosphorus (P) and potassium (K) on Sphagnum quality for decomposers, mass loss, and associated changes in hummock pore water in an ombrotrophic bog (Whim). Adding N, especially as NH4+, increased the N concentration in Sphagnum, but did not enhance mass loss from Sphagnum. Mass loss seemed to depend mainly on moss species and climatic factors. Only high applications of N affected hummock pore water chemistry, which varied considerably over time. Overall, C and N cycling in this N-treated bog appeared to be decoupled. We conclude that moss species, seasonal and annual variation in climatic factors, direct negative effects of N (NH4+ toxicity) on Sphagnum production, and indirect effects (increase in pH and changes in plant species dominance under elevated NO3- alone and with PK) drive Sphagnum decomposition and hummock C and N dynamics at Whim.

  9. Thermal Decomposition of Radiation-Damaged Polystyrene

    International Nuclear Information System (INIS)

    Abrefah, J.; Klinger, G.S.

    2000-01-01

    The radiation-damaged polystyrene material ("polycube") used in this study was synthesized by mixing a high-density polystyrene ("Dylene Fines No. 100") with plutonium and uranium oxides. The polycubes were used on the Hanford Site in the 1960s for criticality studies to determine the hydrogen-to-fissile atom ratios for neutron moderation during processing of spent nuclear fuel. Upon completion of the studies, two methods were developed to reclaim the transuranic (TRU) oxides from the polymer matrix: (1) burning the polycubes in air at 873 K; and (2) heating the polycubes in the absence of oxygen and scrubbing the released monomer and other volatile organics using carbon tetrachloride. Neither of these methods was satisfactory in separating the TRU oxides from the polystyrene. Consequently, the remaining polycubes were sent to the Hanford Plutonium Finishing Plant (PFP) for storage. Over time, the high dose of alpha and gamma radiation has resulted in a polystyrene matrix that is highly cross-linked and hydrogen deficient, and a stabilization process is being developed in support of Defense Nuclear Facility Safety Board Recommendation 94-1. Baseline processes involve thermal treatment to pyrolyze the polycubes in a furnace to decompose the polystyrene and separate out the TRU oxides. Thermal decomposition products from this degraded polystyrene matrix were characterized by Pacific Northwest National Laboratory to provide information for determining the environmental impact of the process and for optimizing the process parameters. A gas chromatography/mass spectrometry (GC/MS) system coupled to a horizontal tube furnace was used for the characterization studies. The decomposition studies were performed both in air and helium atmospheres at 773 K, the planned processing temperature. The volatile and semi-volatile organic products identified for the radiation-damaged polystyrene were different from those observed for virgin polystyrene.
The differences were in the

  10. Litterfall and litter decomposition in chestnut high forest stands in northern Portugal

    Energy Technology Data Exchange (ETDEWEB)

    Patricio, M. S.; Nunes, L. F.; Pereira, E. L.

    2012-11-01

    This research aimed to: estimate the inputs of litterfall; model the decomposition process and assess the rates of litter decay and turnover; and study the litter decomposition process and dynamics of nutrients in old chestnut high forests. This study aimed to fill a gap in the knowledge of the chestnut decomposition process, as this type of ecosystem has never been modeled and studied from this point of view in Portugal. The study sites are located in the mountains of Marao, Padrela and Bornes in a west-to-east transect across northern Portugal, from a more-Atlantic-to-less-maritime influence. This research was developed on old chestnut high forests for quality timber production submitted to a silviculture management close to nature. We collected litterfall using littertraps and studied decomposition of leaf and bur litter by the nylon net bag technique. Simple and double exponential models were used to describe the decomposition of chestnut litterfall incubated in situ during 559 days. The results of the decomposition are discussed in relation to the initial litter quality (C, N, P, K, Ca, Mg) and the decomposition rates. Annually, the mature chestnut high-forest stands (density 360-1,260 trees ha-1, age 55-73 years old) restore 4.9 Mg DM ha-1 of litter and 2.6 Mg ha-1 yr-1 of carbon to the soil. The two-component litter decay model proved to be more biologically realistic, providing a decay rate for the fast initial stage (46-58 yr-1 for the leaves and 38-42 yr-1 for the burs) and a decay rate related to the recalcitrant pool (0.45-0.60 yr-1 for the leaves and 0.22-0.36 yr-1 for the burs). This study pointed to some decay patterns and release of bioelements by the litterfall which can be useful for calibrating existing models and indicators of sustainability to improve both silvicultural and environmental approaches for the management of chestnut forests. (Author) 45 refs.

  11. Ozone time scale decomposition and trend assessment from surface observations

    Science.gov (United States)

    Boleti, Eirini; Hueglin, Christoph; Takahama, Satoshi

    2017-04-01

    Emissions of ozone precursors have been regulated in Europe since around 1990, with control measures primarily targeting industry and traffic. In order to understand how these measures have affected air quality, it is now important to investigate concentrations of tropospheric ozone in different types of environments, based on their NOx burden, and in different geographic regions. In this study, we analyze high-quality data sets for Switzerland (NABEL network) and the whole of Europe (AirBase) for the last 25 years to calculate long-term trends of ozone concentrations. A sophisticated time scale decomposition method, called Ensemble Empirical Mode Decomposition (EEMD) (Huang, 1998; Wu, 2009), is used for decomposition of the different time scales of the variation of ozone, namely the long-term trend, seasonal and short-term variability. This allows subtraction of the seasonal pattern of ozone from the observations and estimation of long-term changes of ozone concentrations with lower uncertainty ranges compared to typical methodologies. We observe that, despite the implementation of regulations, for most of the measurement sites ozone daily mean values were increasing until around the mid-2000s. Afterwards, we observe a decline or a leveling off in the concentrations; certainly a late effect of limitations in ozone precursor emissions. On the other hand, peak ozone concentrations have been decreasing for almost all regions. The evolution in the trend exhibits some differences between the different types of measurement. In addition, ozone is known to be strongly affected by meteorology. In the applied approach, some of the meteorological effects are already captured by the seasonal signal and removed in the de-seasonalized ozone time series.
    For adjustment of the influence of meteorology on the higher frequency ozone variation, a statistical approach based on Generalized Additive Models (GAM) (Hastie, 1990; Wood, 2006), which corrects for meteorological

  12. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals which form stable lithium phases in binary alloys on the formation of intermetallic species in ternary amalgams, and their effect on thermal decomposition in contact with water, is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Cd(Hg) binary amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of the Li-Cd(Hg) ternary amalgam in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and these results are compared with the decomposition of the binary amalgam Li(Hg). The decomposition rate is constant during one hour for the binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of the ternary systems. A reaction mechanism that considers an intermetallic species participating in the activated complex is proposed and a kinetic law is suggested. (author)

  13. The platinum catalysed decomposition of hydrazine in acidic media

    International Nuclear Information System (INIS)

    Ananiev, A.V.; Tananaev, I.G.; Brossard, Ph.; Broudic, J.C.

    2000-01-01

    A kinetic study of hydrazine decomposition in solutions of HClO4, H2SO4 and HNO3 in the presence of a Pt/SiO2 catalyst has been undertaken. It was shown that the kinetics of the hydrazine catalytic decomposition in HClO4 and H2SO4 are identical. The process is determined by the heterogeneous catalytic auto-decomposition of N2H4 on the catalyst's surface. The platinum catalysed hydrazine decomposition in nitric acid solutions is a complex process, including heterogeneous catalytic auto-decomposition of N2H4, reaction of hydrazine with catalytically generated nitrous acid, and the catalytic oxidation of hydrazine by nitric acid. The kinetic parameters of these reactions have been determined. The contribution of each reaction to the total process is determined by the liquid phase composition and by the temperature. (authors)

  14. How trust in institutions and organizations builds general consumer confidence in the safety of food: A decomposition of effects

    NARCIS (Netherlands)

    Jonge, de J.; Trijp, van J.C.M.; Lans, van der I.A.; Renes, R.J.; Frewer, L.J.

    2008-01-01

    This paper investigates the relationship between general consumer confidence in the safety of food and consumer trust in institutions and organizations. More specifically, using a decompositional regression analysis approach, the extent to which the strength of the relationship between trust and

  15. Can differences in soil community composition after peat meadow restoration lead to different decomposition and mineralization rates?

    NARCIS (Netherlands)

    Dijk, van J.; Didden, W.A.M.; Kuenen, F.; Bodegom, van P.M.; Verhoef, H.A.; Aerts, R.

    2009-01-01

    Reducing decomposition and mineralization of organic matter by increasing groundwater levels is a common approach to reduce plant nutrient availability in many peat meadow restoration projects. The soil community is the main driver of these processes, but how community composition is affected by

  16. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.

    2007-12-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
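
The iterative Schwarz idea can be sketched on a toy 1D Poisson problem. This sketch uses the classical overlapping variant with Dirichlet interface conditions, which is simpler than the non-overlapping Robin conditions of the dissertation; the grid, subdomains, and right-hand side are all assumed for illustration:

```python
import numpy as np

def dirichlet_solve(f_sub, left, right, h):
    """Direct finite-difference solve of -u'' = f on one subinterval
    with Dirichlet values `left` and `right` at its ends."""
    m = len(f_sub)
    A = (2 * np.eye(m - 2) - np.eye(m - 2, k=1) - np.eye(m - 2, k=-1)) / h**2
    rhs = f_sub[1:-1].copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.concatenate(([left], np.linalg.solve(A, rhs), [right]))

# global problem: -u'' = 1 on [0, 1], u(0) = u(1) = 0
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)
u = np.zeros(n)

# alternating Schwarz sweeps over overlapping subdomains [0, 0.6] and [0.4, 1]:
# each local solve takes its interface value from the latest global iterate
for _ in range(30):
    u[:61] = dirichlet_solve(f[:61], 0.0, u[60], h)
    u[40:] = dirichlet_solve(f[40:], u[40], 0.0, h)

exact = x * (1.0 - x) / 2.0   # known solution; quadratic, so the FD scheme is exact
err = np.abs(u - exact).max()
```

Each sweep contracts the interface error by a fixed factor set by the overlap, which is the convergence mechanism that the dissertation's Robin-condition variant accelerates and couples to the eigenvalue iteration.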

  17. s-core network decomposition: A generalization of k-core analysis to weighted networks

    Science.gov (United States)

    Eidsaa, Marius; Almaas, Eivind

    2013-12-01

    A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
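
A minimal sketch of the s-core procedure described above: repeatedly delete every node whose strength (summed incident link weight) falls below s, in direct analogy with the degree-based k-core; the toy weighted graph is an assumed example:

```python
def s_core(weights, s):
    """s-core of a weighted undirected graph: repeatedly remove every node
    whose strength (sum of incident link weights) is below s.
    `weights` maps frozenset({u, v}) -> positive link weight."""
    nodes = {n for edge in weights for n in edge}
    while True:
        strength = {n: 0.0 for n in nodes}
        for edge, w in weights.items():
            u, v = tuple(edge)
            if u in nodes and v in nodes:   # ignore links to removed nodes
                strength[u] += w
                strength[v] += w
        weak = {n for n in nodes if strength[n] < s}
        if not weak:
            return nodes
        nodes -= weak

# toy network: a strongly tied triangle plus one weakly attached node
W = {frozenset({'a', 'b'}): 2.0, frozenset({'b', 'c'}): 2.0,
     frozenset({'a', 'c'}): 2.0, frozenset({'c', 'd'}): 0.5}
core = s_core(W, s=3.0)   # 'd' drops out, the triangle remains
```

Sweeping s upward and recording where each node drops out yields the nested s-shells whose innermost members the abstract compares against k-cores.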

  18. High-purity Cu nanocrystal synthesis by a dynamic decomposition method

    Science.gov (United States)

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-12-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics were studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of high-purity Cu nanocrystals.
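
Of the kinetics methods named above, the Kissinger method is the simplest to sketch: the plot of ln(beta / Tp**2) against 1/Tp is linear with slope -Ea/R. The peak temperatures below are synthetic, generated from an assumed activation energy and pre-exponential factor and then recovered by the fit; they are not the paper's data:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(betas, peak_temps):
    """Kissinger method: ln(beta / Tp**2) vs 1/Tp is linear with slope -Ea/R."""
    betas = np.asarray(betas)
    Tp = np.asarray(peak_temps)
    slope, _ = np.polyfit(1.0 / Tp, np.log(betas / Tp**2), 1)
    return -slope * R

def peak_temp(beta, Ea, A, T0=500.0):
    """Peak temperature satisfying the Kissinger condition
    Ea * beta / (R * Tp**2) = A * exp(-Ea / (R * Tp)), by fixed-point iteration."""
    T = T0
    for _ in range(200):
        T = Ea / (R * np.log(A * R * T**2 / (Ea * beta)))
    return T

Ea_true = 120e3                               # assumed activation energy, J/mol
A = 1e12                                      # assumed pre-exponential factor, 1/s
betas = np.array([5.0, 10.0, 20.0, 40.0])     # heating rates
Tps = np.array([peak_temp(b, Ea_true, A) for b in betas])
Ea_est = kissinger_ea(betas, Tps)             # recovers Ea_true from the linear fit
```

The same peak-shift-versus-heating-rate data feed the Flynn-Wall-Ozawa and Starink variants, which differ mainly in the temperature-integral approximation used.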

  19. Unstructured characteristic method embedded with variational nodal method using domain decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)

    2005-07-01

    This paper describes a recently developed extension of our 'multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by employing several numerical methods cooperatively. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient, and flux error. (authors)

  20. Unstructured characteristic method embedded with variational nodal method using domain decomposition techniques

    International Nuclear Information System (INIS)

    Girardi, E.; Ruggieri, J.M.

    2005-01-01

    This paper describes a recently developed extension of our 'multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by employing several numerical methods cooperatively. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient, and flux error. (authors)
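
    The core idea of coupling independent subdomain solvers through interface values can be illustrated, far from the paper's transport setting, with an alternating Schwarz iteration on a 1-D Poisson problem: two overlapping subdomains are each solved by a direct tridiagonal solve (standing in for the MOC and VNM solvers) and exchange boundary values until the global solution is recovered. The problem, grid, and overlap below are invented for the sketch.

```python
def solve_dirichlet(f, h, left, right):
    """Solve -u'' = f on a uniform grid with fixed end values,
    using the Thomas algorithm (direct tridiagonal solve)."""
    m = len(f)              # number of interior unknowns
    a = [-1.0] * m          # sub-diagonal
    b = [2.0] * m           # diagonal
    c = [-1.0] * m          # super-diagonal
    d = [fi * h * h for fi in f]
    d[0] += left
    d[-1] += right
    for i in range(1, m):   # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m           # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# -u'' = 2 on [0,1], u(0) = u(1) = 0; exact solution u(x) = x(1-x)
N = 20
h = 1.0 / N
u = [0.0] * (N + 1)         # global iterate, zero initial guess

# two overlapping subdomains: A = nodes 0..12, B = nodes 8..20
for sweep in range(30):
    # subdomain A: interior nodes 1..11, right interface value u[12] from B
    u[1:12] = solve_dirichlet([2.0] * 11, h, u[0], u[12])
    # subdomain B: interior nodes 9..19, left interface value u[8] from A
    u[9:20] = solve_dirichlet([2.0] * 11, h, u[8], u[20])

err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(N + 1))
print(f"max error after Schwarz sweeps: {err:.2e}")
```

The overlap (nodes 8..12 here) is what drives convergence of the interface exchange; with no overlap the plain alternating iteration stalls, which is why domain decomposition schemes negotiate interface conditions carefully.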

  1. Influence of different forest system management practices on leaf litter decomposition rates, nutrient dynamics and the activity of ligninolytic enzymes: a case study from central European forests.

    Science.gov (United States)

    Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is a key ecological process that determines the sustainability of managed forest ecosystems, yet hitherto very few studies have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473-day litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P < 0.05). Near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P < 0.05). We conclude that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling.
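
    Litterbag mass-loss series of the kind used here are commonly summarized by a decay constant k from the single-exponential model M(t) = M0·exp(-kt), often called the Olson model. A sketch with invented mass-remaining fractions (not the study's data):

```python
import math

# Hypothetical litterbag data: fraction of initial dry mass remaining
# at each harvest over a 473-day incubation; values are illustrative only.
days = [0, 60, 120, 240, 360, 473]
mass = [1.00, 0.88, 0.77, 0.60, 0.47, 0.38]

# Single-exponential model M(t) = M0 * exp(-k t):
# fit k by linear regression of ln(mass) on time (slope = -k).
x = days
y = [math.log(m) for m in mass]
n = len(x)
xm = sum(x) / n
ym = sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
k = -slope  # per day

print(f"k = {k * 365:.2f} yr^-1, half-life = {math.log(2) / k:.0f} days")
```

Comparing fitted k values between treatments (age-class vs. near-to-nature vs. unmanaged stands) is the usual way such management effects on decomposition rate are quantified.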

  2. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru; Hong, Fan; Peterka, Tom

    2018-01-01

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures, on the one hand, that particles are reassigned to processes as evenly as possible and, on the other hand, that the particles newly assigned to a process always lie within its block. Results show the good load balance and high efficiency of our method.
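
    The load-balancing core of a k-d tree decomposition is a recursive median split of the particle positions, which by construction yields near-equal particle counts per region. A minimal sketch (the paper's additional constraint, that each cutting plane must lie inside the duplicated ghost-layer overlap, is omitted here):

```python
import random

def kdtree_partition(points, depth=0, leaf_size=4):
    """Recursively split points at the coordinate median, cycling axes;
    return the list of leaf buckets (each <= leaf_size particles)."""
    if len(points) <= leaf_size:
        return [points]
    axis = depth % len(points[0])          # cycle through x, y, ...
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                    # median cut -> even halves
    return (kdtree_partition(pts[:mid], depth + 1, leaf_size)
            + kdtree_partition(pts[mid:], depth + 1, leaf_size))

random.seed(1)
particles = [(random.random(), random.random()) for _ in range(64)]
leaves = kdtree_partition(particles)
sizes = [len(b) for b in leaves]
print(len(leaves), "leaves, sizes", min(sizes), "..", max(sizes))
```

In the parallel setting each leaf would correspond to one process's work assignment; constraining the cuts to the ghost-layer overlap, as the paper does, is what lets a process keep all of its newly assigned particles inside data it already holds.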

  3. Calculation of shielding thickness by combining the LTSN and Decomposition methods

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco T. de

    1997-01-01

    A combination of the LTSN and Decomposition methods for shielding thickness calculation is reported. The angular flux is evaluated by solving a transport problem in planar geometry considering the SN approximation, anisotropic scattering and one energy group. The Laplace transform is applied to the set of SN equations. The transformed angular flux is then obtained by solving a transcendental equation, and the angular flux is restored by the Heaviside expansion technique. The scalar flux is obtained by integrating the angular flux with a Gaussian quadrature scheme. The scalar flux is, in turn, linearly related to the dose rate through the mass and energy absorption coefficient. The shielding thickness is obtained by solving the transcendental equation that results from applying the LTSN approach combined with the Decomposition method. Numerical simulations are reported. (author). 6 refs., 3 tabs
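
    The final step, solving a transcendental equation for the thickness that meets a dose-rate target, can be illustrated with a much simpler stand-in model than the LTSN one: a point-source dose with a linear buildup factor, D(x) = D0·(1 + a·μx)·exp(-μx), solved by Newton's method. All parameter values below are invented for the sketch.

```python
import math

# Hypothetical attenuation model with linear buildup B = 1 + a*mu*x;
# the dose equation D0*B*exp(-mu*x) = D_target is transcendental in x.
mu = 0.06        # attenuation coefficient, 1/cm (illustrative)
a = 0.8          # buildup parameter (illustrative)
D0 = 100.0       # unshielded dose rate
D_target = 1.0   # allowed dose rate

def f(x):
    return D0 * (1 + a * mu * x) * math.exp(-mu * x) - D_target

def fprime(x):
    # d/dx of D0*(1 + a*mu*x)*exp(-mu*x)
    return D0 * math.exp(-mu * x) * (a * mu - mu * (1 + a * mu * x))

x = 50.0  # initial guess, cm
for _ in range(50):          # Newton iteration
    x -= f(x) / fprime(x)

print(f"required thickness: {x:.1f} cm, residual {f(x):.2e}")
```

Because a < 1 here, f is monotonically decreasing for x > 0, so the root is unique and Newton's method converges from a generous initial guess.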

  4. Noise reduction in digital speckle pattern interferometry using bidimensional empirical mode decomposition

    International Nuclear Information System (INIS)

    Bernini, Maria Belen; Federico, Alejandro; Kaufmann, Guillermo H.

    2008-01-01

    We propose a bidimensional empirical mode decomposition (BEMD) method to reduce speckle noise in digital speckle pattern interferometry (DSPI) fringes. The BEMD method is based on a sifting process that decomposes the DSPI fringes into a finite set of subimages represented by high- and low-frequency oscillations, which are called modes. The sifting process assigns the high-frequency information to the first modes, so that it is possible to discriminate speckle noise from fringe information, which is contained in the remaining modes. The proposed method is a fully data-driven technique; therefore, neither fixed basis functions nor operator intervention is required. The performance of the BEMD method in denoising DSPI fringes is analyzed using computer-simulated data, and the results are compared with those obtained by means of a previously developed one-dimensional empirical mode decomposition approach. An application of the proposed BEMD method to denoising experimental fringes is also presented.
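
    The sifting process at the heart of EMD can be sketched in 1-D: estimate upper and lower envelopes through the local extrema, subtract their mean, and repeat; the result is the first (highest-frequency) mode. The toy below uses linear envelopes for brevity where full EMD uses cubic splines, and a synthetic signal (a fast "speckle-like" oscillation riding on a slow fringe) invented for illustration.

```python
import math

def local_extrema(sig):
    """Indices of local maxima and minima; both endpoints are
    included as anchor points for the envelopes."""
    maxima, minima = [0], [0]
    for i in range(1, len(sig) - 1):
        if sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]:
            maxima.append(i)
        if sig[i] < sig[i - 1] and sig[i] <= sig[i + 1]:
            minima.append(i)
    maxima.append(len(sig) - 1)
    minima.append(len(sig) - 1)
    return maxima, minima

def envelope(idx, vals, n):
    """Piecewise-linear envelope through (idx, vals), sampled at 0..n-1."""
    out, j = [], 0
    for i in range(n):
        while j < len(idx) - 2 and idx[j + 1] < i:
            j += 1
        x0, x1 = idx[j], idx[j + 1]
        t = (i - x0) / (x1 - x0)
        out.append(vals[j] * (1 - t) + vals[j + 1] * t)
    return out

def sift_once(sig):
    """One sifting step: subtract the mean of the two envelopes."""
    mx, mn = local_extrema(sig)
    upper = envelope(mx, [sig[i] for i in mx], len(sig))
    lower = envelope(mn, [sig[i] for i in mn], len(sig))
    return [s - (u + l) / 2 for s, u, l in zip(sig, upper, lower)]

# fast oscillation (period 16) riding on a slow fringe (period 256)
n = 512
mode = [math.sin(2 * math.pi * i / 16) + math.sin(2 * math.pi * i / 256)
        for i in range(n)]
for _ in range(3):
    mode = sift_once(mode)

# the extracted first mode should keep the fast term and shed the slow one
proj_fast = 2 / n * sum(mode[i] * math.sin(2 * math.pi * i / 16) for i in range(n))
proj_slow = 2 / n * sum(mode[i] * math.sin(2 * math.pi * i / 256) for i in range(n))
print(f"fast component: {proj_fast:.2f}, slow component: {proj_slow:.2f}")
```

In BEMD the same idea is applied in two dimensions, with 2-D extrema detection and surface envelopes, which is what allows speckle noise to be isolated in the first modes of the fringe images.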

  5. Structural Analysis of Multi-component Amyloid Systems by Chemometric SAXS Data Decomposition

    DEFF Research Database (Denmark)

    Trillo, Isabel Fatima Herranz; Jensen, Minna Grønning; van Maarschalkerweerd, Andreas

    2017-01-01

    Formation of amyloids is the hallmark of several neurodegenerative pathologies. Structural investigation of these complex transformation processes poses significant experimental challenges due to the co-existence of multiple species. The additive nature of small-angle X-ray scattering (SAXS) data … least squares (MCR-ALS) chemometric method. The approach enables rigorous and robust decomposition of synchrotron SAXS data by simultaneously introducing these data in different representations that emphasize molecular changes at different time and structural resolution ranges. The approach has allowed …

  6. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been studied. The decomposition was performed at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature, and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. Over the temperature range 400-700°C, the reaction rate constant increases with temperature, giving greater decomposition. The optimum NaOH/monazite ratio was 1.5 and the optimum time was 3 hours. The relation between the NaOH/monazite ratio and conversion follows the polynomial equation y = 0.1579x² - 0.2855x + 0.8301 (y = conversion, x = NaOH/monazite ratio). The decomposition reaction of monazite with NaOH is second order; the relationship between temperature (T) and the reaction rate constant (k) is k = 448.541·e^(-1006.8/T), i.e. ln k = -1006.8/T + 6.106, with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
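
    The quoted Arrhenius parameters can be cross-checked directly from the fitted relation ln k = -1006.8/T + 6.106: the intercept gives the frequency factor A = exp(6.106) and the slope gives the activation energy E = 1006.8·R.

```python
import math

# Consistency check of the reported Arrhenius fit for monazite + NaOH:
# ln k = -1006.8/T + 6.106  =>  A = exp(6.106), E = 1006.8 * R.
R = 8.314            # gas constant, J/(mol K)

A = math.exp(6.106)  # frequency factor
E = 1006.8 * R       # activation energy, J/mol

print(f"A = {A:.1f}   (reported: 448.541)")
print(f"E = {E / 1000:.3f} kJ/mol   (reported: 8.371 kJ/mol)")

# rate constants at the ends of the studied temperature range
for T in (673.0, 973.0):  # 400 and 700 degC in kelvin
    k = A * math.exp(-1006.8 / T)
    print(f"k({T:.0f} K) = {k:.1f}")
```

Both reported values are reproduced, and k indeed increases with temperature over the 400-700°C range, consistent with the abstract.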

  7. Decomposition of dioxin analogues and ablation study for carbon nanotube

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko

    2002-01-01

    Two application studies associated with the free-electron laser are presented separately, under the titles 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation includes thermal destruction and multiple-photon dissociation. It is important to choose a strongly absorbed laser wavelength for the decomposition. Thermal decomposition takes place under irradiation at low IR laser power. Considering the model of thermal decomposition, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues in addition to the thermal decomposition caused by direct laser absorption. The laser ablation study is performed with the aim of carbon nanotube synthesis. The vapor produced by the ablation is weakly ionized at powers of several hundred megawatts. The plasma internal energy is maintained more than 8.5 times longer in the enclosed gas than in vacuum. Clusters were produced from the weakly ionized gas in the enclosed atmosphere; at low laser power they consisted of coarse particles, while at high power they consisted of fine particles. (J.P.N.)

  8. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  9. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations of previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it employs only four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of the estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there …

  10. Using combinatorial problem decomposition for optimizing plutonium inventory management

    International Nuclear Information System (INIS)

    Niquil, Y.; Gondran, M.; Voskanian, A.; Paris-11 Univ., 91 - Orsay

    1997-03-01

    Plutonium inventory management optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classical techniques are not efficient at such a size. The first decomposition consists in favoring the constraints that are the most difficult to satisfy and the variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving with integer linear program solving. Besides, the earliest decisions are systematically favored, for they are based on data considered to be reliable, whereas the data supporting later decisions is known with less accuracy and confidence. (author)

  11. Decomposition of ammonium nitrate in homogeneous and catalytic denitration

    International Nuclear Information System (INIS)

    Anan'ev, A. V.; Tananaev, I. G.; Shilov, V. P.

    2005-01-01

    Ammonium nitrate is one of the potentially explosive by-products of spent fuel reprocessing. Decomposition of ammonium nitrate in the HNO3-HCOOH system was studied in the presence and absence of a Pt/SiO2 catalyst. It was found that decomposition of ammonium nitrate is due to homogeneous noncatalytic oxidation of the ammonium ion by nitrous acid generated in the HNO3-HCOOH system during denitration. The regular trends were revealed and the optimal conditions for decomposition of ammonium nitrate in nitric acid solutions were found. The platinum catalyst initiates the reaction of HNO3 with HCOOH to form HNO2.

  12. Thermal decomposition of UO3·2H2O

    International Nuclear Information System (INIS)

    Flament, T.A.

    1998-01-01

    The first part of the report summarizes the literature data on the uranium trioxide-water system. In the second part, the experimental aspects are presented. An experimental program has been set up to determine the steps and species involved in the decomposition of uranium oxide dihydrate. Particular attention has been paid to determining both the loss of free water (moisture in the fuel) and the loss of chemically bound water (decomposition of hydrates). The influence of water pressure on the decomposition has been taken into account.

  13. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne

    2017-01-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls … regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature …

  14. Decomposition of aboveground biomass of a herbaceous wetland stand

    OpenAIRE

    KLIMOVIČOVÁ, Lucie

    2010-01-01

    The master's thesis is part of the project GA ČR No. P504/11/1151 - Role of plants in the greenhouse gas budget of a sedge fen. This thesis deals with the decomposition of aboveground vegetation in a herbaceous wetland. The decomposition rate was established on the flooded part of the Wet Meadows near Třeboň. The rate of the decomposition processes was evaluated using the litter-bag method. Mesh bags filled with dry plant matter were located in the vicinity of the automatic meteorological stati...

  15. Singular Value Decomposition and Ligand Binding Analysis

    Directory of Open Access Journals (Sweden)

    André Luiz Galo

    2013-01-01

    Full Text Available Singular value decomposition (SVD) is one of the most important computations in linear algebra because of its vast range of applications in data analysis. It is particularly useful for resolving problems involving least-squares minimization, the determination of matrix rank, and the solution of certain problems involving Euclidean norms. Such problems arise in the spectral analysis of ligand binding to macromolecules. Here, we present a spectral data analysis method using SVD (SVD analysis) and nonlinear fitting to determine the binding characteristics of intercalating drugs to DNA. This methodology reduces noise and identifies distinct spectral species, similar to traditional principal component analysis, as well as fitting nonlinear binding parameters. We applied SVD analysis to investigate the interaction of actinomycin D and daunomycin with native DNA. This methodology does not require prior knowledge of ligand molar extinction coefficients (free and bound), which potentially limits binding analysis. The data are analyzed simply by reconstructing the experimental data and adjusting the product of the deconvoluted matrices and the matrix of model coefficients determined by the Scatchard and McGhee-von Hippel equations.
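
    The rank-revealing role SVD plays in this kind of titration analysis can be illustrated with synthetic two-species spectra: a data matrix whose columns are mixtures of a "free" and a "bound" spectrum has effective rank 2, and the singular-value spectrum makes that visible above the noise floor. The Gaussian band shapes, titration fractions, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical spectra of the two species (free and bound ligand)
wavelengths = np.linspace(400, 600, 201)
free = np.exp(-((wavelengths - 480) / 30) ** 2)
bound = np.exp(-((wavelengths - 520) / 25) ** 2)

# titration: each column is a mixture of the two species, plus noise
fractions = np.linspace(0, 1, 12)
D = np.outer(free, 1 - fractions) + np.outer(bound, fractions)
D += 1e-3 * rng.standard_normal(D.shape)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
print("first four singular values:", np.round(s[:4], 3))
# two dominant values carry the signal; the remainder is the noise floor
```

In the full method, the two retained components U[:, :2] and their amplitudes are then fed to the nonlinear fit of the binding model; the SVD step itself needs no knowledge of the species' extinction coefficients, which is the point made in the abstract.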

  16. Prepared by Thermal Hydro-decomposition

    Science.gov (United States)

    Prasoetsopha, N.; Pinitsoontorn, S.; Kamwanna, T.; Kurosaki, K.; Ohishi, Y.; Muta, H.; Yamanaka, S.

    2014-06-01

    Polycrystalline samples of Ca3Co4-xGaxO9+δ (0 ≤ x ≤ 0.15) were prepared by a simple thermal hydro-decomposition method. High-density ceramics were fabricated using a spark plasma sintering technique. The crystal structure of the calcined powders was characterized by x-ray diffraction, and single-phase Ca3Co4-xGaxO9+δ was obtained. Scanning electron micrographs showed grain alignment perpendicular to the direction of the pressure applied during sintering. X-ray absorption near-edge spectra were used to confirm the oxidation state of the Ga dopant. The thermoelectric properties of the misfit-layered Ca3Co4-xGaxO9+δ were investigated. The Seebeck coefficient tended to decrease with increasing Ga content due to the hole-doping effect. The electrical resistivity and thermal conductivity decreased monotonically with increasing Ga content. The Ga doping of x = 0.15 showed the highest power factor of 3.99 × 10-4 W/mK2 at 1,023 K and the lowest thermal conductivity of 1.45 W/mK at 1,073 K, resulting in the highest ZT of 0.29 at 1,073 K. From the optical absorption spectra, the electronic structure near the Fermi level shows no significant change with Ga doping.
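
    The reported figure of merit can be cross-checked from the quoted numbers via ZT = S²T/(ρκ) = PF·T/κ. Note the power factor is quoted at its 1,023 K peak while ZT is quoted at 1,073 K, so the check is approximate.

```python
# Consistency check of the reported thermoelectric figures for x = 0.15.
PF = 3.99e-4      # power factor, W/(m K^2), reported peak at 1,023 K
kappa = 1.45      # thermal conductivity, W/(m K), reported at 1,073 K
T = 1073.0        # temperature, K

ZT = PF * T / kappa
print(f"ZT = {ZT:.2f}  (reported: 0.29)")
```

The result lands close to the reported 0.29; the small discrepancy is consistent with the power factor at 1,073 K being slightly below its 1,023 K peak.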

  17. Thermal decomposition of irradiated casein molecules

    Energy Technology Data Exchange (ETDEWEB)

    Aly, M A; Elsayed, A A [Biophysics Dept., Faculty of Science, Cairo University, Giza (Egypt)

    1997-12-31

    Non-isothermal studies were carried out using the derivatograph, where thermogravimetry (TG) and differential thermogravimetry (DTG) measurements were used to obtain the activation energies of the first and second reactions of casein decomposition before and after exposure to gamma rays and fast neutrons. Cf-252 was used as a source of fast neutrons associated with gamma rays. TG and DTG patterns were also recorded for casein samples before and after irradiation with 1 Gy of 0.662 MeV gamma rays from Cs-137. The activation energies for the first and second reactions were found to be smaller at 0.4 mGy than at lower and higher neutron doses, whereas no change in activation energies was observed after gamma irradiation. It is concluded from the present study that destruction of casein molecules by low-level fast neutron doses may lead to changes in the shelf-storage period of milk. 3 figs., 1 tab.

  18. Plasma-catalytic decomposition of TCE

    Energy Technology Data Exchange (ETDEWEB)

    Vandenbroucke, A.; Morent, R.; De Geyter, N.; Leys, C. [Ghent Univ., Ghent (Belgium). Dept. of Applied Physics; Tuan, N.D.M.; Giraudon, J.M.; Lamonier, J.F. [Univ. des Sciences et Technologies de Lille, Villeneuve (France). Dept. de Catalyse et Chimie du Solide

    2010-07-01

    Volatile organic compounds (VOCs) are gaseous pollutants that pose an environmental hazard due to their high volatility and their possible toxicity. Conventional technologies to reduce the emission of VOCs have their advantages, but they become cost-inefficient when low concentrations have to be treated. In the past 2 decades, non-thermal plasma technology has received growing attention as an alternative and promising remediation method. Non-thermal plasmas are effective because they produce a series of strong oxidizers such as ozone, oxygen radicals and hydroxyl radicals that provide a reactive chemical environment in which VOCs are completely oxidized. This study investigated whether the combination of non-thermal plasma and catalysis could improve the energy efficiency and the selectivity towards carbon dioxide (CO2). Trichloroethylene (TCE) was decomposed by non-thermal plasma generated in a DC-excited atmospheric-pressure glow discharge. The production of by-products was qualitatively investigated through FT-IR spectrometry, and the results were compared with those from a catalytic reactor. The removal rate of TCE reached a maximum of 78 percent at the highest input energy. The by-products of TCE decomposition were CO2, carbon monoxide (CO), hydrochloric acid (HCl) and dichloroacetylchloride. Combining the plasma system with a catalyst located in an oven downstream resulted in a maximum removal of 80 percent at an energy density of 300 J/L, a catalyst temperature of 373 K and a total air flow rate of 2 slm. 14 refs., 6 figs.

  19. Renormalization-group theory of spinodal decomposition

    International Nuclear Information System (INIS)

    Mazenko, G.F.; Valls, O.T.; Zhang, F.C.

    1985-01-01

    Renormalization-group (RG) methods developed previously for the study of the growth of order in unstable systems are extended to treat the spinodal decomposition of the two-dimensional spin-exchange kinetic Ising model. The conservation of the order parameter and fixed-length sum rule are properly preserved in the theory. Various correlation functions in both coordinate and momentum space are calculated as functions of time. The scaling function for the structure factor is extracted. We compare our results with direct Monte Carlo (MC) simulations and find them in good agreement. The time rescaling parameter entering the RG analysis is temperature dependent, as was determined in previous work through a RG analysis of MC simulations. The results exhibit a long-time logarithmic growth law for the typical domain size, both analytically and numerically. In the time region where MC simulations have previously been performed, the logarithmic growth law can be fitted to a power law with an effective exponent. This exponent is found to be in excellent agreement with the result of MC simulations. The logarithmic growth law agrees with a physical model of interfacial motion which involves an interplay between the local curvature and an activated jump across the interface

  20. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
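
    The DMD pipeline the abstract builds on (snapshot pairs, truncated SVD, projected operator, eigendecomposition, exact-mode reconstruction) can be sketched in a few lines. The toy data below, a 2-mode linear rotation lifted into a 64-dimensional state, is an illustration, not the paper's airfoil flow, and omits the actuation and compressed-sensing extensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# ground truth: 2-D rotation dynamics embedded in a 64-dimensional state
n, m = 64, 50
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
P = rng.standard_normal((n, 2))      # lifts the 2-D dynamics to n dims
z = np.array([1.0, 0.0])
X = np.empty((n, m))
for k in range(m):                   # generate snapshot sequence
    X[:, k] = P @ z
    z = rot @ z

# exact DMD: pair snapshots, truncate the SVD, project, diagonalize
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 2                                # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
Atilde = Ur.T @ X2 @ Vr / sr         # low-rank projected operator
eigvals, W = np.linalg.eig(Atilde)
modes = X2 @ Vr / sr @ W             # exact DMD modes

print("recovered eigenvalues:", np.round(eigvals, 4))
# the true eigenvalues are exp(+/- i*0.3) = cos(0.3) +/- i*sin(0.3)
```

In the compressive setting of the paper, X would be replaced by subsampled measurements (point sensors or random projections) and the full-state modes recovered afterwards via sparse reconstruction; the core algebra above is unchanged.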