WorldWideScience

Sample records for problems combining decomposition

  1. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with a nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
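
The Frank-Wolfe (conditional gradient) method mentioned above can be sketched in a few lines. The toy objective and simplex feasible set below are illustrative stand-ins, not the stochastic transportation model itself:

```python
import numpy as np

def frank_wolfe(grad, vertices, x0, iters=2000):
    # Conditional-gradient loop: linearize the objective at x, minimize the
    # linearization over the polytope (cheap: scan the vertex list), then
    # step toward the minimizing vertex with the classic 2/(k+2) step size.
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = min(vertices, key=lambda v: float(g @ v))  # linear oracle
        x = x + 2.0 / (k + 2) * (np.asarray(s) - x)
    return x

# Toy convex objective f(x) = ||x - c||^2 over the probability simplex
# (vertices = unit basis vectors); c lies inside the simplex, so x* = c.
c = np.array([0.2, 0.5, 0.3])
verts = [np.eye(3)[i] for i in range(3)]
x_fw = frank_wolfe(lambda x: 2.0 * (x - c), verts, np.ones(3) / 3)
```

Because every iterate is a convex combination of vertices, feasibility is maintained for free, which is why the method suits problems with easy linear subproblems such as transportation polytopes.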

  2. Solution of the linearly anisotropic neutron transport problem in an infinite cylinder combining the decomposition and HTSN methods

    International Nuclear Information System (INIS)

    Goncalves, Glenio A.; Bodmann, Bardo; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

    Analytical solutions for neutron transport in cylindrical geometry are available for isotropic problems but, to the best of our knowledge, not yet for anisotropic problems. In this work, an analytical solution for the neutron transport equation in an infinite cylinder with anisotropic scattering is reported. We specialize the solution, without loss of generality, to the linearly anisotropic problem using the combined decomposition and HTSN methods. The key feature of this method is the application of the decomposition method to the anisotropic problem, exploiting the fact that the inverse of the operator associated with the isotropic problem is well known and determined by the HTSN approach. Following the idea of the decomposition method, we apply this operator to the integral term, assuming that the angular flux appearing in the integrand is equal to the HTSN solution interpolated by a polynomial with only even powers. This yields the first approximation to the anisotropic solution. Proceeding further, we substitute this solution for the angular flux in the integral, apply the inverse operator for the isotropic problem again, and obtain a new approximation for the angular flux. This iterative procedure yields a closed-form solution for the angular flux. The methodology can be generalized, in a straightforward manner, to transport problems with any degree of anisotropy. For the sake of illustration, we report numerical simulations for linearly anisotropic transport problems. (author)

  3. A Benders decomposition approach for a combined heat and power economic dispatch

    International Nuclear Information System (INIS)

    Abdolmohammadi, Hamid Reza; Kazemi, Ahad

    2013-01-01

    Highlights: • Benders decomposition algorithm to solve combined heat and power economic dispatch. • Decomposing the CHPED problem into a master problem and a subproblem. • Considering the non-convex heat-power feasible region efficiently. • Solving 4-unit and 5-unit systems with 2 and 3 co-generation units, respectively. • Obtaining better or comparable results in terms of objective values. - Abstract: Recently, cogeneration units have played an increasingly important role in the utility industry. Therefore, the optimal utilization of multiple combined heat and power (CHP) systems is an important optimization task in power system operation. Unlike power economic dispatch, which has a single equality constraint, two equality constraints must be met in the combined heat and power economic dispatch (CHPED) problem. Moreover, in cogeneration units, the power capacity limits are functions of the unit heat productions and the heat capacity limits are functions of the unit power generations. Thus, CHPED is a complicated optimization problem. In this paper, an algorithm based on Benders decomposition (BD) is proposed to solve the economic dispatch (ED) problem for cogeneration systems. In the proposed method, the combined heat and power economic dispatch problem is decomposed into a master problem and a subproblem. The subproblem generates the Benders cuts, and the master problem adds them as new inequality constraints to the previous constraints. The iterative process continues until the upper and lower bounds of the optimal objective value are close enough and a converged optimal solution is found. The Benders decomposition based approach provides a good framework to consider the non-convex feasible operation regions of cogeneration units efficiently. In this paper, a four-unit system with two cogeneration units and a five-unit system with three cogeneration units are analyzed to exhibit the effectiveness of the proposed approach. In all cases, the
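
The master/subproblem bound-tightening loop that Benders decomposition relies on can be sketched on a deliberately tiny problem (not the CHPED model; the master LP is solved here by brute-force grid search as a stand-in for a real LP solver):

```python
import numpy as np

# Toy Benders decomposition for  min x + y
#   s.t. x + y >= 2,  0 <= x <= 3,  y >= 0,
# with x kept in the master problem and y in the subproblem.

def subproblem(x_bar):
    # min y s.t. y >= 2 - x_bar, y >= 0. The dual multiplier of the
    # coupling constraint is 1 when it binds, 0 otherwise.
    y = max(0.0, 2.0 - x_bar)
    lam = 1.0 if y > 0 else 0.0
    return y, lam

# Master: min over x of  x + theta,  theta >= lam_i * (2 - x) for each cut.
xs = np.linspace(0.0, 3.0, 3001)

def master(cuts):
    theta = np.max([np.maximum(l * (2.0 - xs), 0.0) for l in cuts], axis=0)
    vals = xs + theta
    i = int(np.argmin(vals))
    return float(xs[i]), float(vals[i])

cuts, x_bar, ub, lb = [], 0.0, float("inf"), -float("inf")
for _ in range(20):
    y, lam = subproblem(x_bar)
    ub = min(ub, x_bar + y)        # any feasible (x, y) gives an upper bound
    cuts.append(lam)
    x_bar, lb = master(cuts)       # relaxed master gives a lower bound
    if ub - lb < 1e-9:
        break
```

The loop stops exactly as the abstract describes: when the upper bound from the subproblem and the lower bound from the cut-augmented master meet.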

  4. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)

  5. Expert vs. novice: Problem decomposition/recomposition in engineering design

    Science.gov (United States)

    Song, Ting

    The purpose of this research was to investigate the differences in the use of problem decomposition and problem recomposition among dyads of engineering experts, dyads of engineering seniors, and dyads of engineering freshmen. Fifty participants took part in this study. Ten were engineering design experts, 20 were engineering seniors, and 20 were engineering freshmen. Participants worked in dyads to complete an engineering design challenge within an hour. The entire design process was video and audio recorded. After the design session, members participated in a group interview. This study used protocol analysis as the methodology. Video and audio data were transcribed, segmented, and coded. Two coding systems, the FBS ontology and "levels of the problem", were used in this study. A series of statistical techniques were used to analyze the data. Interview data and participants' design sketches also served as supplemental data to help answer the research questions. By analyzing the quantitative and qualitative data, it was found that students used less problem decomposition and problem recomposition than engineering experts in engineering design. This result implies that engineering education should place more importance on teaching problem decomposition and problem recomposition. Students were found to spend less cognitive effort than engineering experts when considering the problem as a whole and the interactions between subsystems. In addition, students were also found to spend more cognitive effort when considering details of subsystems. These results showed that students tended to use depth-first decomposition and experts tended to use breadth-first decomposition in engineering design. The use of Function (F), Behavior (B), and Structure (S) among engineering experts, engineering seniors, and engineering freshmen was compared on three levels. Level 1 represents designers consider the problem as an integral whole, Level 2 represents designers consider interactions between

  6. A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Mei-qun Jiang; Pei-liang Dai

    2006-01-01

    A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in R^N (N = 2, 3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and is naturally implemented on parallel machines with simple local communications.

  7. Domain decomposition methods for solving an image problem

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)]

    1994-12-31

    The domain decomposition method is a technique for breaking up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential operators, which makes them particularly useful for solving elliptic partial differential equations. In this paper, the authors apply the so-called covering preconditioner, which is based on the information of the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image which has been degraded by a known convolution process and additive Gaussian noise.

  8. Calculation of shielding thickness by combining the LTSN and Decomposition methods

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco T. de

    1997-01-01

    A combination of the LTSN and Decomposition methods is reported for shielding thickness calculation. The angular flux is evaluated by solving a transport problem in planar geometry considering the SN approximation, anisotropic scattering and one energy group. The Laplace transform is applied to the set of SN equations. The transformed angular flux is then obtained by solving a transcendental equation, and the angular flux is restored by the Heaviside expansion technique. The scalar flux is attained by integrating the angular flux with a Gaussian quadrature scheme. The scalar flux, in turn, is linearly related to the dose rate through the mass and energy absorption coefficient. The shielding thickness is obtained by solving a transcendental equation resulting from the application of the LTSN approach combined with the Decomposition method. Numerical simulations are reported. (author). 6 refs., 3 tabs
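
The final step above, recovering a thickness from a transcendental dose equation, can be illustrated with simple bisection. The record's actual equation comes from the LTSN/Decomposition angular flux, not from the plain exponential attenuation used here, and the numbers are made up:

```python
import math

def shield_thickness(d0, target, mu, lo=0.0, hi=100.0):
    # dose(t) = d0 * exp(-mu * t) is strictly decreasing in t, so
    # bisection on f(t) = dose(t) - target converges unconditionally.
    f = lambda t: d0 * math.exp(-mu * t) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Thickness that attenuates a 100-unit dose rate down to 1 unit
# with a hypothetical attenuation coefficient mu = 0.5 per cm.
t_shield = shield_thickness(100.0, 1.0, mu=0.5)
```

Any monotone root-finder (bisection, Newton, Brent) works here; bisection is shown because it needs no derivative of the dose expression.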

  9. Solving network design problems via decomposition, aggregation and approximation

    CERN Document Server

    Bärmann, Andreas

    2016-01-01

    Andreas Bärmann develops novel approaches for the solution of network design problems as they arise in various contexts of applied optimization. Using the example of an optimal expansion of the German railway network until 2030, the author derives a tailor-made decomposition technique for multi-period network design problems. Next, he develops a general framework for the solution of network design problems via aggregation of the underlying graph structure. This approach is shown to save much computation time as compared to standard techniques. Finally, the author devises a modelling framework for the approximation of the robust counterpart under ellipsoidal uncertainty, an often-studied case in the literature. Each of these three approaches opens up a fascinating branch of research which promises a better theoretical understanding of the problem and an increasing range of solvable application settings at the same time. Contents Decomposition for Multi-Period Network Design Solving Network Design Problems via Ag...

  10. Adomian decomposition method for nonlinear Sturm-Liouville problems

    Directory of Open Access Journals (Sweden)

    Sennur Somali

    2007-09-01

    In this paper the Adomian decomposition method is applied to the nonlinear Sturm-Liouville problem -y'' + y(t)^p = λy(t), y(t) > 0, t ∈ I = (0, 1), y(0) = y(1) = 0, where p > 1 is a constant and λ > 0 is an eigenvalue parameter. Also, the eigenvalues and the behavior of the eigenfunctions of the problem are demonstrated.
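
The mechanics of the Adomian decomposition method are easiest to see on a simpler model problem than the record's Sturm-Liouville BVP. For the IVP y' = y^2, y(0) = 1 (exact solution 1/(1 - t)), writing y = Σ y_k gives components y_{k+1}(t) = ∫₀ᵗ A_k ds with Adomian polynomials A_k = Σ_{i=0}^k y_i y_{k-i} for the nonlinearity N(y) = y^2:

```python
import numpy as np

N = 10                                   # number of series terms
ys = [np.zeros(N)]                       # y_k as coefficients of 1, t, t^2, ...
ys[0][0] = 1.0                           # y_0 = y(0) = 1
for k in range(N - 1):
    A = np.zeros(N)                      # Adomian polynomial A_k
    for i in range(k + 1):
        conv = np.convolve(ys[i], ys[k - i])[:N]
        A[: len(conv)] += conv
    nxt = np.zeros(N)                    # term-by-term integration of A_k
    nxt[1:] = A[: N - 1] / np.arange(1, N)
    ys.append(nxt)

series = np.sum(ys, axis=0)              # here y_k = t^k, so all coeffs are 1
approx_at_half = float(np.polyval(series[::-1], 0.5))   # near 1/(1 - 0.5) = 2
```

For this example every component comes out as y_k = t^k, so the partial sums visibly reproduce the geometric series of the exact solution.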

  11. Using combinatorial problem decomposition for optimizing plutonium inventory management

    International Nuclear Information System (INIS)

    Niquil, Y.; Gondran, M.; Voskanian, A.; Paris-11 Univ., 91 - Orsay

    1997-03-01

    Plutonium Inventory Management Optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient for such a size. The first decomposition consists in favoring constraints that are the most difficult to satisfy and variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Besides, the first decisions to be taken are systematically favored, since they are based on data considered to be reliable, whereas the data supporting later decisions are known with less accuracy and confidence. (author)

  12. Using combinatorial problem decomposition for optimizing plutonium inventory management

    Energy Technology Data Exchange (ETDEWEB)

    Niquil, Y.; Gondran, M. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches; Voskanian, A. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches]|[Paris-11 Univ., 91 - Orsay (France). Lab. de Recherche en Informatique

    1997-03-01

    Plutonium Inventory Management Optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient for such a size. The first decomposition consists in favoring constraints that are the most difficult to satisfy and variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Besides, the first decisions to be taken are systematically favored, since they are based on data considered to be reliable, whereas the data supporting later decisions are known with less accuracy and confidence. (author) 7 refs.

  13. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.

  14. Domain decomposition methods for the neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2010-01-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems, and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)

  15. Generalized Benders’ Decomposition for topology optimization problems

    DEFF Research Database (Denmark)

    Munoz Queupumil, Eduardo Javier; Stolpe, Mathias

    2011-01-01

    This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load carrying structures. The main objective is to present a Generalized Benders’ Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.

  16. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)

  17. Hybrid subgroup decomposition method for solving fine-group eigenvalue transport problems

    International Nuclear Information System (INIS)

    Yasseri, Saam; Rahnema, Farzad

    2014-01-01

    Highlights: • An acceleration technique for solving fine-group eigenvalue transport problems. • Coarse-group quasi transport theory to solve coarse-group eigenvalue transport problems. • Consistent and inconsistent formulations for coarse-group quasi transport theory. • Computational efficiency amplified by a factor of 2 using hybrid SGD for 1D BWR problem. - Abstract: In this paper, a new hybrid method for solving fine-group eigenvalue transport problems is developed. This method extends the subgroup decomposition method to efficiently couple a new coarse-group quasi transport theory with a set of fixed-source transport decomposition sweeps to obtain the fine-group transport solution. The advantages of the quasi transport theory are its high accuracy, straight-forward implementation and numerical stability. The hybrid method is analyzed for a 1D benchmark problem characteristic of boiling water reactors (BWR). It is shown that the method reproduces the fine-group transport solution with high accuracy while increasing the computational efficiency up to 12 times compared to direct fine-group transport calculations

  18. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P

    2007-12-15

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  19. Application of the spectral Lanczos decomposition method to large scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
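
The core of SLDM is approximating f(A)b from a Krylov subspace: run m Lanczos steps to get A ≈ V T Vᵀ on that subspace, then f(A)b ≈ ||b|| V f(T) e₁. A minimal sketch without reorthogonalization, for symmetric A only (the test matrix and the choice f(λ) = e^{-λ} are illustrative, not the paper's operators):

```python
import numpy as np

def lanczos_funm(A, b, f, m=30):
    n = b.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)          # spectral decomposition of T
    fT_e1 = evecs @ (f(evals) * evecs[0, :])  # f(T) applied to e_1
    return np.linalg.norm(b) * (V @ fT_e1)

# Check against a direct eigendecomposition on a small SPD matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
lams = np.linspace(0.1, 5.0, 50)
A = Q @ np.diag(lams) @ Q.T
b = rng.standard_normal(50)
approx = lanczos_funm(A, b, lambda lam: np.exp(-lam), m=30)
exact = Q @ (np.exp(-lams) * (Q.T @ b))
```

The cost is m matrix-vector products plus a tiny m-by-m eigenproblem, which is what makes the approach attractive for the large diffusion operators described in the abstract.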

  20. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.

    2007-12-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  1. Solving and Interpreting Large-scale Harvest Scheduling Problems by Duality and Decomposition

    OpenAIRE

    Berck, Peter; Bible, Thomas

    1982-01-01

    This paper presents a solution to the forest planning problem that takes advantage of both the duality of linear programming formulations currently being used for harvest scheduling and the characteristics of decomposition inherent in the forest land class-relationship. The subproblems of decomposition, defined as the dual, can be solved in a simple, recursive fashion. In effect, such a technique reduces the computational burden in terms of time and computer storage as compared to the traditi...

  2. Domain decomposition method for solving elliptic problems in unbounded domains

    International Nuclear Information System (INIS)

    Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.

    1991-01-01

    Computational aspects of the box domain decomposition (DD) method for solving boundary value problems in an unbounded domain are discussed. A new variant of the DD-method for elliptic problems in unbounded domains is suggested. It is based on the partitioning of an unbounded domain adapted to the given asymptotic decay of an unknown function at infinity. The comparison of computational expenditures is given for boundary integral method and the suggested DD-algorithm. 29 refs.; 2 figs.; 2 tabs

  3. The application of the fall-vector method in decomposition schemes for the solution of integer linear programming problems

    International Nuclear Information System (INIS)

    Sergienko, I.V.; Golodnikov, A.N.

    1984-01-01

    This article applies the methods of decomposition, which are used to solve continuous linear problems, to integer and partially integer problems. The fall-vector method is used to solve the obtained coordinate problems, and an algorithm for it is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of a smaller dimension and to a discrete coordinate problem with simple constraints.

  4. Linear decomposition approach for a class of nonconvex programming problems.

    Science.gov (United States)

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with reduced running time.

  5. Chemical physics of decomposition of energetic materials. Problems and prospects

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2004-01-01

    The review is concerned with analysis of the results obtained in kinetic and mechanistic studies on the decomposition of energetic materials (explosives, powders and solid propellants). It is shown that the state of the art in this field is inadequate to the potential of modern chemical kinetics and chemical physics. Unsolved problems are outlined and ways of solving them are proposed.

  6. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
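
The randomized idea above can be shown with a plain SVD rather than the GSVD used in the paper: sample the range of A with a Gaussian test matrix, orthonormalize, then do the expensive decomposition only on the small projected matrix (shapes and rank are arbitrary illustration choices):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    # Halko-Martinsson-Tropp style range finder + small SVD.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range basis
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Exactly rank-5 test matrix: the sketch captures its range exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The expensive factorization now acts on a (k + oversample)-by-n matrix instead of the full system, which is the source of the reduced computational and memory demands the abstract reports.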

  7. Applying the Laplace Adomian decomposition method to delay differential equations with boundary value problems

    Science.gov (United States)

    Yousef, Hamood Mohammed; Ismail, Ahmad Izani

    2017-11-01

    In this paper, the Laplace Adomian decomposition method (LADM) was applied to solve delay differential equations with boundary value problems. The solution is in the form of a convergent series which is easy to compute. The approach is tested on two test problems. The findings obtained exhibit the reliability and efficiency of the proposed method.

  8. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
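
The matrix building block the authors reduce to, an interpolative decomposition A ≈ A[:, idx] @ P, can be sketched with greedy column pivoting (production codes use pivoted QR or randomized sampling, e.g. scipy.linalg.interpolative; the test matrix here is an arbitrary rank-4 example):

```python
import numpy as np

def interp_decomp(A, k):
    # Pick k "skeleton" columns by greedy pivoting on residual norms,
    # then express every column as a combination of the chosen ones.
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.sum(R * R, axis=0)))   # largest residual column
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)                   # deflate its direction
        idx.append(j)
    # coefficient matrix P via least squares on the selected columns
    P, *_ = np.linalg.lstsq(A[:, idx], A, rcond=None)
    return idx, P

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 30))  # rank 4
idx, P = interp_decomp(A, k=4)
id_err = np.linalg.norm(A - A[:, idx] @ P) / np.linalg.norm(A)
```

Because the skeleton consists of actual columns of A, the same index set translates directly into a subset of CTD terms, which is the property CTD-ID exploits.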

  9. Chaotic Multiobjective Evolutionary Algorithm Based on Decomposition for Test Task Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Hui Lu

    2014-01-01

    Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high-quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then, ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases: population initialization, crossover, and mutation. Several experiments are performed to identify a good approach to hybridizing MOEA/D with chaos and to demonstrate the effectiveness of the improved IES. The Pareto fronts and statistical results demonstrate that different chaotic maps in different phases have different effects on solving the TTSP, especially the circle map and the ICMIC map. The similarity between the distribution of a chaotic map and that of the problem is an essential factor in the application of chaotic maps. In addition, comparisons of CMOEA/D with variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
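As an illustration of the kind of chaotic sequence used to drive initialization and variation operators, the circle map mentioned in the abstract can be iterated as follows (the parameter values here are illustrative assumptions, not those of the paper):

```python
import numpy as np

def circle_map_sequence(n, x0=0.3, K=0.5, Omega=0.2):
    """Iterate the circle map x_{i+1} = x_i + Omega - (K/2pi) sin(2pi x_i)
    modulo 1, producing a sequence in [0, 1) that can seed a population."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = (x[i-1] + Omega - (K / (2*np.pi)) * np.sin(2*np.pi*x[i-1])) % 1.0
    return x
```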

  10. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    Science.gov (United States)

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received significant attention in computer vision, data mining, and signal processing. Existing trace norm minimization algorithms for LRTC require multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor-matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. We then introduce a tractable relaxation of the rank function, which leads to a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve the problem. Promising experimental results on synthetic and real-world data validate the effectiveness of TNCP. Moreover, TNCP is significantly faster than state-of-the-art methods and scales to larger problems.

  11. A MODIFIED DECOMPOSITION METHOD FOR SOLVING NONLINEAR PROBLEM OF FLOW IN CONVERGING- DIVERGING CHANNEL

    Directory of Open Access Journals (Sweden)

    MOHAMED KEZZAR

    2015-08-01

    Full Text Available In this research, an efficient computational technique, a modified decomposition method, is proposed and successfully applied to the nonlinear problem of two-dimensional flow of an incompressible viscous fluid between nonparallel plane walls. The method expresses the nonlinear term Nu and the solution of the studied problem as power series. The proposed iterative procedure yields a computationally efficient formulation with an accelerated convergence rate, and it finds the solution without any discretization, linearization, or restrictive assumptions. Comparison of our results with numerical treatments and other earlier works clearly shows the higher accuracy and efficiency of the modified decomposition method.

  12. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can overload the memory of a single-core processor when refined modeling is required. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  13. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can overload the memory of a single-core processor when refined modeling is required. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  14. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. Tally data dominates the memory cost and should be the focus of any solution to the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. The domain decomposition algorithm is essentially a divide-and-rule strategy: the problem is divided into sub-domains that are dealt with separately, and rules are established to ensure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method, while the tally data decomposition algorithms greatly reduce memory size.

  15. A balancing domain decomposition method by constraints for advection-diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Tu, Xuemin; Li, Jing

    2008-12-10

    The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established under the condition that the subdomain diameters are small enough; the estimate is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.

  16. Trends in catalytic NO decomposition over transition metal surfaces

    DEFF Research Database (Denmark)

    Falsig, Hanne; Bligaard, Thomas; Rass-Hansen, Jeppe

    2007-01-01

    The formation of NOx from combustion of fossil and renewable fuels continues to be a dominant environmental issue. We take one step towards rationalizing trends in the catalytic activity of transition metal catalysts for NO decomposition by combining microkinetic modelling with density functional theory calculations. We show specifically why the key problem in using transition metal surfaces to catalyze direct NO decomposition is their significant relative overbinding of atomic oxygen compared to atomic nitrogen.

  17. The use of Adomian decomposition method for solving problems in calculus of variations

    Directory of Open Access Journals (Sweden)

    Mehdi Dehghan

    2006-01-01

    Full Text Available In this paper, a numerical method is presented for solving some variational problems. The main objective is to solve the ordinary differential equation which arises from the variational problem. This is done using the Adomian decomposition method, a powerful tool for solving a large class of problems. In this approach, the solution is found in the form of a convergent power series with easily computed components. Numerical results are presented to show the efficiency of the method.
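A generic (not paper-specific) illustration of the Adomian recursion, applied to the initial value problem y' = y², y(0) = 1 with exact solution 1/(1 − t), can be written with SymPy; the Adomian polynomials of the nonlinearity N(y) = y² are generated by the standard λ-derivative formula:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_series(n_terms=6):
    """Adomian decomposition for y' = y**2, y(0) = 1: y = sum u_k with
    u_0 = y(0) and u_{k+1} = integral of the Adomian polynomial A_k,
    where A_k = (1/k!) d^k/dlam^k N(sum lam^i u_i) evaluated at lam = 0."""
    u = [sp.Integer(1)]                      # u_0 = initial condition
    for k in range(n_terms - 1):
        y_lam = sum(lam**i * ui for i, ui in enumerate(u))
        A_k = sp.diff(y_lam**2, lam, k).subs(lam, 0) / sp.factorial(k)
        u.append(sp.integrate(A_k, (t, 0, t)))
    return sp.expand(sum(u))
```

The partial sums reproduce the Taylor series 1 + t + t² + ... of the exact solution, which is the "convergent power series with easily computed components" the abstract refers to.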

  18. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    Full Text Available As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind speed fluctuates strongly, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatch planning during the operation of wind farms, is currently regarded as one of the most difficult problems to solve. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both combine the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, comprising twelve samples, and our simulations indicate that the proposed methods perform much better than traditional ones on short-term wind speed forecasting problems.
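The five-three-Hanning smoother referred to as 53H is, in Tukey's classic form, a running median of 5, then a running median of 3, then a (1/4, 1/2, 1/4) Hanning weighted average. A minimal sketch under that assumption follows (endpoint handling here is a simplification, not necessarily the paper's):

```python
import numpy as np

def smooth_53h(x):
    """Tukey-style 5-3-Hanning smoother: running median of 5, running
    median of 3, then a (1/4, 1/2, 1/4) weighted moving average.
    Endpoints outside each window are left unchanged."""
    x = np.asarray(x, dtype=float)
    def running_median(v, w):
        out = v.copy()
        half = w // 2
        for i in range(half, len(v) - half):
            out[i] = np.median(v[i - half:i + half + 1])
        return out
    y = running_median(x, 5)
    y = running_median(y, 3)
    z = y.copy()
    z[1:-1] = 0.25 * y[:-2] + 0.5 * y[1:-1] + 0.25 * y[2:]
    return z
```

The median passes remove isolated outliers (the "abnormal detection" role), and the Hanning pass smooths what remains.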

  19. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. Space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
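A damped one-level additive Schwarz iteration, the linear special case the proposed algorithms reduce to, can be sketched as follows (a generic illustration, not the authors' nonlinear algorithms; the damping factor is an illustrative choice):

```python
import numpy as np

def additive_schwarz(A, b, subdomains, iters=100, theta=0.5):
    """Damped additive Schwarz: each sweep solves the residual equation
    restricted to every subdomain independently and adds the damped
    corrections. Subdomains are index lists and may overlap."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        r = b - A @ x
        dx = np.zeros_like(x)
        for idx in subdomains:
            Ai = A[np.ix_(idx, idx)]
            dx[idx] += np.linalg.solve(Ai, r[idx])
        x += theta * dx
    return x
```

With overlapping subdomains that together cover all unknowns, the damped iteration is a contraction for symmetric positive definite systems.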

  20. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    Science.gov (United States)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project in which we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems from the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality; the average gap between the obtained solution and the upper bound was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
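For context, a classic priority-rule heuristic for the BPP is first-fit decreasing; a minimal sketch follows (AugNN augments rules of this kind with an iterative neural search, which is not shown here):

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing: sort item sizes in decreasing order and
    place each item into the first open bin with enough remaining
    capacity, opening a new bin when none fits."""
    remaining = []   # remaining capacity per open bin
    packing = []     # item sizes assigned to each bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(remaining):
            if item <= free:
                remaining[i] -= item
                packing[i].append(item)
                break
        else:
            remaining.append(capacity - item)
            packing.append([item])
    return packing
```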

  1. Classification Formula and Generation Algorithm of Cycle Decomposition Expression for Dihedral Groups

    Directory of Open Access Journals (Sweden)

    Dakun Zhang

    2013-01-01

    Full Text Available The need for classification research on a common formula for the cycle decomposition expressions of the dihedral group is illustrated. Six common formulae for the cycle decomposition expressions of the group, covering the reflection and rotation transformations, are derived. A generation algorithm for the cycle decomposition expressions, based on permutation conversion and the classification formulae, is then designed. Algorithm analysis and experimental results show that the generation algorithm based on the classification formulae outperforms the general algorithm based on permutation conversion. These results are significant for solving the enumeration of necklace combinatorial schemes, and especially their structural problems, using group theory and computers.
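For illustration only (this is not the paper's classification algorithm), the cycle decompositions of the 2n elements of the dihedral group acting on the vertices of an n-gon can be enumerated directly with SymPy:

```python
from sympy.combinatorics import Permutation

def dihedral_cycles(n):
    """Cycle decompositions of the 2n elements of D_n acting on
    vertices 0..n-1: n rotations followed by n reflections, each
    returned as (label, cycle decomposition)."""
    elems = []
    for k in range(n):
        rot = Permutation([(i + k) % n for i in range(n)])   # rotation by k steps
        elems.append(('r%d' % k, rot.cyclic_form))
    for k in range(n):
        ref = Permutation([(k - i) % n for i in range(n)])   # reflection
        elems.append(('s%d' % k, ref.cyclic_form))
    return elems
```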

  2. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...

  3. Decomposition and parallelization strategies for solving large-scale MDO problems

    Energy Technology Data Exchange (ETDEWEB)

    Grauer, M.; Eschenauer, H.A. [Research Center for Multidisciplinary Analyses and Applied Structural Optimization, FOMAAS, Univ. of Siegen (Germany)

    2007-07-01

    During previous years, structural optimization has been recognized as a useful tool within the disciplines of engineering and economics. However, the optimization of large-scale systems or structures is impeded by an immense solution effort. This was the reason to start a joint research and development (R&D) project between the Institute of Mechanics and Control Engineering and the Information and Decision Sciences Institute within the Research Center for Multidisciplinary Analyses and Applied Structural Optimization (FOMAAS) on cluster computing for parallel and distributed solution of multidisciplinary optimization (MDO) problems based on the OpTiX-Workbench. Here the focus is on coarse-grained parallelization and its implementation on clusters of workstations. A further point of emphasis was the development of a parallel decomposition strategy, called PARDEC, for the solution of very complex optimization problems which cannot be solved efficiently by sequential integrated optimization. The use of the OpTiX-Workbench together with the FEM groundwater simulation system FEFLOW is shown for a special water management problem. (orig.)

  4. Effects of Problem Decomposition (Partitioning) on the Rate of Convergence of Parallel Numerical Algorithms

    Czech Academy of Sciences Publication Activity Database

    Cullum, J. K.; Johnson, K.; Tůma, Miroslav

    2003-01-01

    Roč. 10, - (2003), s. 445-465 ISSN 1070-5325 R&D Projects: GA ČR GA201/02/0595; GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : parallel algorithms * graph partitioning * problem decomposition * rate of convergence Subject RIV: BA - General Mathematics Impact factor: 1.042, year: 2003

  5. A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Chun Wang

    2017-01-01

    Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. First, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Second, a multiobjective memetic algorithm based on decomposition is presented by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of the local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms, or at least has performance comparable to, the other algorithms.
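The Tchebycheff scalarization used by MOEA/D-style decomposition can be sketched as follows, where `z_star` denotes the ideal point (a generic illustration, not the paper's implementation):

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff aggregation: each weight vector turns the objective
    vector f into the scalar max_i w_i * |f_i - z*_i|, so minimizing it
    defines one single-objective subproblem per weight vector."""
    f = np.asarray(f, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = np.asarray(z_star, dtype=float)
    return float(np.max(w * np.abs(f - z)))
```

Varying the weight vector over a simplex yields the family of subproblems whose solutions sample the Pareto front.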

  6. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed; Ghanem, Bernard; Wonka, Peter

    2015-01-01

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from a single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model: a combination of a Lambertian model and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2- and l1-regularizers on the illumination vector field and albedo, respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixed shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  7. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed

    2015-12-07

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from a single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model: a combination of a Lambertian model and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2- and l1-regularizers on the illumination vector field and albedo, respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixed shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  8. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this class, yet they can be very different. However, since the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually shaped like a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To coordinate with the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure, and problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  9. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve

  10. Representation of discrete Steklov-Poincare operator arising in domain decomposition methods in wavelet basis

    Energy Technology Data Exchange (ETDEWEB)

    Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)

    1996-12-31

    This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator arising in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (the Poisson equation) with homogeneous boundary conditions as a model problem. It is shown that the discrete Steklov-Poincare operator admits a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of the interface problem in domain decomposition methods, known as the Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator into subdomain node groups and then block-eliminate the interior nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
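The block elimination described above can be sketched directly: for a matrix ordered into interior and interface groups, the Schur complement is the dense operator acting on the interface unknowns only (a minimal dense-matrix illustration):

```python
import numpy as np

def schur_complement(A, interior, interface):
    """Form S = A_GG - A_GI A_II^{-1} A_IG, the Schur complement
    obtained by block-eliminating the interior unknowns, leaving a
    (dense) system for the interface unknowns."""
    A_II = A[np.ix_(interior, interior)]
    A_IG = A[np.ix_(interior, interface)]
    A_GI = A[np.ix_(interface, interior)]
    A_GG = A[np.ix_(interface, interface)]
    return A_GG - A_GI @ np.linalg.solve(A_II, A_IG)
```

Even for a sparse A the result S is generally dense, which is why sparse wavelet representations of it, as studied in this record, are of interest.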

  11. Solution of the neutron transport problem with anisotropic scattering in cylindrical geometry by the decomposition method

    International Nuclear Information System (INIS)

    Goncalves, G.A.; Bogado Leite, S.Q.; Vilhena, M.T. de

    2009-01-01

    An analytical solution has been obtained for the one-speed stationary neutron transport problem in an infinitely long cylinder with anisotropic scattering by the decomposition method. Series expansions of the angular flux distribution are proposed in terms of suitably constructed functions, recursively obtainable from the isotropic solution, to take anisotropy into account. As for the isotropic problem, an accurate closed-form solution was chosen for the problem with internal source and constant incident radiation, obtained from an integral transformation technique and the F_N method

  12. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research addresses this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)

  13. A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Energy Technology Data Exchange (ETDEWEB)

    Donald Estep; Michael Holst; Simon Tavener

    2010-02-08

    This project was concerned with the accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.

  14. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, and gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD), and support vector machines (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, a support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of time-frequency analysis methods using EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and back-propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify different gear fault types under variable conditions.
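The SVD feature-extraction step can be sketched as follows, under the assumption that the decomposed components are stacked row-wise and their singular values are taken as the feature vector (the ITD decomposition itself is not reproduced here):

```python
import numpy as np

def singular_value_features(components):
    """Stack decomposed signal components (e.g., proper rotation
    components) row-wise and return the singular values of the
    resulting matrix as a compact, condition-robust feature vector."""
    M = np.vstack(components)
    return np.linalg.svd(M, compute_uv=False)
```

The resulting vectors would then be fed to a classifier such as an SVM.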

  15. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: subject to , where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
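
    The core of a rank-k truncated randomized SVD with oversampling can be sketched with dense NumPy primitives. This is a generic randomized SVD in the spirit of the algorithms the abstract builds on, not the authors' MTRSVD (which additionally involves the regularization matrix L and an inner LSQR solve); the function name and test matrix are invented for illustration.

```python
import numpy as np

def trsvd(A, k, q=5, seed=0):
    """Rank-(k+q) randomized range finder, truncated to a rank-k SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))      # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis of range(A @ Omega)
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k, :]     # keep the k leading triplets

# Sanity check on a matrix with geometrically decaying singular values.
rng = np.random.default_rng(1)
B = rng.standard_normal((60, 40))
U0, _, V0t = np.linalg.svd(B, full_matrices=False)
A = U0 @ np.diag(0.5 ** np.arange(40)) @ V0t     # exact spectrum 1, 0.5, 0.25, ...
U, s, Vt = trsvd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

    With k = 5 the best achievable Frobenius error is the spectral tail, roughly 0.036 here, and the randomized factorization typically lands within a small factor of it.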

  16. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  17. Problem Decomposition and Recomposition in Engineering Design: A Comparison of Design Behavior between Professional Engineers, Engineering Seniors, and Engineering Freshmen

    Science.gov (United States)

    Song, Ting; Becker, Kurt; Gero, John; DeBerard, Scott; DeBerard, Oenardi; Reeve, Edward

    2016-01-01

    The authors investigated the differences in using problem decomposition and problem recomposition between dyads of engineering experts, engineering seniors, and engineering freshmen. Participants worked in dyads to complete an engineering design challenge within 1 hour. The entire design process was video and audio recorded. After the design…

  18. Embedding Number-Combinations Practice Within Word-Problem Tutoring

    Science.gov (United States)

    Powell, Sarah R.; Fuchs, Lynn S.; Fuchs, Douglas

    2012-01-01

    Two aspects of mathematics with which students with mathematics learning difficulty (MLD) often struggle are word problems and number-combination skills. This article describes a math program in which students receive instruction on using algebraic equations to represent the underlying problem structure for three word-problem types. Students also learn counting strategies for answering number combinations that they cannot retrieve from memory. Results from randomized-control trials indicated that embedding the counting strategies for number combinations produces superior word-problem and number-combination outcomes for students with MLD beyond tutoring programs that focus exclusively on number combinations or word problems. PMID:22661880

  19. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...

  20. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  1. Domain decomposition method for nonconforming finite element approximations of anisotropic elliptic problems on nonmatching grids

    Energy Technology Data Exchange (ETDEWEB)

    Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)

    1996-12-31

    An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that constants of spectral equivalence are independent of values of coefficients and mesh step size.

  2. An algorithmic decomposition of claw-free graphs leading to an O(n^3) algorithm for the weighted stable set problem

    OpenAIRE

    Faenza, Y.; Oriolo, G.; Stauffer, G.

    2011-01-01

    We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3)-time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...

  3. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...

  4. A hybrid Dantzig-Wolfe, Benders decomposition and column generation procedure for multiple diet production planning under uncertainties

    Science.gov (United States)

    Udomsungworagul, A.; Charnsethikul, P.

    2018-03-01

    This article introduces a methodology to solve large scale two-phase linear programming, with a case of multiple time period animal diet problems under uncertainty in both raw material nutrients and finished product demand. Assumptions allowing multiple product formulas to be manufactured in the same time period and allowing raw material and finished product inventories to be held have been added. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
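
    A minimal Benders loop can be sketched on a toy two-stage instance (invented here, and far simpler than the article's multi-period diet model): choose capacity y at unit cost 10, then meet the leftover demand 8 - y from a cheap but limited source and an expensive unlimited one. For brevity the master problem is minimized by grid search rather than by an LP solver, and the subproblem's dual price is read off analytically instead of from an LP dual.

```python
import numpy as np

def subproblem(y):
    """Second-stage cost phi(y) and a subgradient of phi with respect to y."""
    d = max(0.0, 8.0 - y)                    # residual demand
    cheap = min(d, 5.0)                      # cheap source: cost 2, at most 5 units
    cost = 2.0 * cheap + 3.0 * (d - cheap)   # expensive source: cost 3, unlimited
    price = 0.0 if d <= 0.0 else (2.0 if d <= 5.0 else 3.0)
    return cost, -price                      # d(phi)/dy = -price

grid = np.linspace(0.0, 8.0, 8001)           # master solved by grid search
cuts = []                                    # (phi_hat, slope, y_hat) triples
y_hat, ub = 8.0, float("inf")
for _ in range(20):
    phi, slope = subproblem(y_hat)
    ub = min(ub, 10.0 * y_hat + phi)         # feasible objective value
    cuts.append((phi, slope, y_hat))         # optimality cut: theta >= phi + slope*(y - y_hat)
    theta = np.max([p + s * (grid - yh) for p, s, yh in cuts], axis=0)
    master = 10.0 * grid + theta
    i = int(np.argmin(master))
    y_hat, lb = float(grid[i]), float(master[i])
    if ub - lb < 1e-6:                       # Benders gap closed
        break
best = ub
```

    On this instance the optimum is y = 0 with total cost 19 (serve 5 units at cost 2 and 3 units at cost 3), which the loop certifies after two cuts.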

  5. Nonconformity problem in 3D Grid decomposition

    Czech Academy of Sciences Publication Activity Database

    Kolcun, Alexej

    2002-01-01

    Roč. 10, č. 1 (2002), s. 249-253 ISSN 1213-6972. [International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002/10./. Plzeň, 04.02.2002-08.02.2002] R&D Projects: GA ČR GA105/99/1229; GA ČR GA105/01/1242 Institutional research plan: CEZ:AV0Z3086906 Keywords : structured mesh * decomposition * nonconformity Subject RIV: BA - General Mathematics

  6. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  7. Solving radiative transfer problems in highly heterogeneous media via domain decomposition and convergence acceleration techniques

    International Nuclear Information System (INIS)

    Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D.; Mostacci, Domiziano

    2011-01-01

    This paper deals with finding accurate solutions for photon transport problems in highly heterogeneous media quickly, efficiently and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performances are evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations that are relevant for remote sensing applications.
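
    One of the simplest non-linear convergence accelerators, Aitken's delta-squared extrapolation, can be demonstrated on a slowly converging fixed-point iteration. This generic scalar example only illustrates the kind of acceleration the abstract refers to, not the authors' transport-specific scheme.

```python
import math

def aitken(seq):
    """Aitken delta-squared extrapolation of a scalar sequence."""
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2.0 * b + a
        out.append(c - (c - b) ** 2 / denom if denom != 0.0 else c)
    return out

# Fixed-point iteration x <- cos(x), converging linearly to ~0.7390851...
xs = [1.0]
for _ in range(12):
    xs.append(math.cos(xs[-1]))

acc = aitken(xs)
root = 0.7390851332151607   # reference fixed point of cos(x)
```

    The extrapolated tail is orders of magnitude closer to the fixed point than the raw iterates, at the cost of two extra stored iterates per accelerated value.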

  8. Combined X-ray and Raman Studies on the Effect of Cobalt Additives on the Decomposition of Magnesium Borohydride

    Directory of Open Access Journals (Sweden)

    Olena Zavorotynska

    2015-08-01

    Full Text Available Magnesium borohydride (Mg(BH4)2) is one of the most promising hydrogen storage materials. Its kinetics of hydrogen desorption, reversibility, and complex reaction pathways during decomposition and rehydrogenation, however, present a challenge, which has often been addressed by using transition metal compounds as additives. In this work the decomposition of Mg(BH4)2 ball-milled with CoCl2 and CoF2 additives was studied by means of a combination of several in-situ techniques. Synchrotron X-ray diffraction and Raman spectroscopy were used to follow the phase transitions and decomposition of Mg(BH4)2. By comparison with pure milled Mg(BH4)2, the temperature for the γ → ε phase transition in the samples with CoF2 or CoCl2 additives was reduced by 10–45 °C. In-situ Raman measurements showed the formation of a decomposition phase with vibrations at 2513, 2411 and 766 cm−1 in the sample with CoF2. Simultaneous X-ray absorption measurements at the Co K-edge revealed that the additives chemically transformed to other species. CoF2 slowly reacted upon heating until ~290 °C, whereas CoCl2 transformed drastically at ~180 °C.

  9. Parallel processing for pitch splitting decomposition

    Science.gov (United States)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
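
    The "coloring" step amounts to 2-coloring a conflict graph: polygons closer than the pitch limit must land on different masks, and an odd conflict cycle means the pattern is not decomposable. A minimal sequential sketch follows (the geometrically local, parallel version described in the paper is much more involved; all names here are invented):

```python
from collections import deque

def two_color(n, conflicts):
    """Assign each of n polygons to mask 0 or 1 so conflicting pairs differ.

    conflicts: pairs of polygon indices closer than the pitch limit.
    Returns the mask list, or None if an odd conflict cycle makes the
    pattern undecomposable.
    """
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for start in range(n):                   # BFS over each connected component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None              # odd cycle: not 2-colorable
    return color

masks = two_color(5, [(0, 1), (1, 2), (3, 4)])   # two separate conflict chains
```

    Because components are colored independently, disjoint conflict clusters can be handled on different processors, which is exactly the geometric distribution the abstract exploits.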

  10. Using normalized equations to solve the indetermination problem in the Oaxaca-Blinder decomposition: an application to the gender wage gap in Brazil

    OpenAIRE

    Scorzafave,Luiz Guilherme; Pazello,Elaine Toldo

    2007-01-01

    There are hundreds of works that implement the Oaxaca-Blinder decomposition. However, this decomposition is not invariant to the choice of reference group when dummy variables are used. This paper applies the solution proposed by Yun (2005a,b) for this identification problem to Brazilian gender wage gap estimation. Our principal finding is the increasing difference in part-time work coefficients between men and women, which contributes to narrowing the gender wage gap. Other studies in Brazil not...
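
    The basic two-fold Oaxaca-Blinder split can be sketched on synthetic data. With a single continuous regressor the reference-group indetermination that the paper addresses does not arise; the sketch only shows the decomposition identity itself, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical log-wage data for two groups A and B with one regressor (educ).
educ_a = rng.normal(12.0, 2.0, n)
educ_b = rng.normal(11.0, 2.0, n)
Xa = np.column_stack([np.ones(n), educ_a])
Xb = np.column_stack([np.ones(n), educ_b])
ya = Xa @ np.array([1.0, 0.08]) + rng.normal(0.0, 0.1, n)
yb = Xb @ np.array([0.9, 0.06]) + rng.normal(0.0, 0.1, n)

ba = np.linalg.lstsq(Xa, ya, rcond=None)[0]   # OLS coefficients, group A
bb = np.linalg.lstsq(Xb, yb, rcond=None)[0]   # OLS coefficients, group B

gap = ya.mean() - yb.mean()
explained   = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ bb   # endowment differences
unexplained = Xa.mean(axis=0) @ (ba - bb)                # coefficient differences
```

    Because OLS with an intercept makes mean residuals vanish, the mean gap splits exactly into the explained and unexplained parts; swapping which group's coefficients price the endowments gives the alternative (equally valid) decomposition.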

  11. Thermal decomposition process of silver behenate

    International Nuclear Information System (INIS)

    Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang

    2006-01-01

    The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 deg. C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 deg. C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 deg. C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles

  12. Combined algorithms in nonlinear problems of magnetostatics

    International Nuclear Information System (INIS)

    Gregus, M.; Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.

    1988-01-01

    To solve boundary problems of magnetostatics in unbounded two- and three-dimensional regions, we construct combined algorithms based on a combination of the method of boundary integral equations with grid methods. We study the substantiation of the combined method for the nonlinear magnetostatic problem without preliminary discretization of the equations and give some results on the convergence of the iterative processes that arise in the nonlinear case. We also discuss economical iterative processes and algorithms that solve boundary integral equations on certain surfaces. Finally, examples of numerical solutions of magnetostatic problems that arose when modelling the fields of electrophysical installations are given. 14 refs.; 2 figs.; 1 tab

  13. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
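
    The POD half of such a global-local approach is, at its core, an SVD of a snapshot matrix. The sketch below builds synthetic snapshots (two separable space-time structures plus small noise; nothing here comes from GMsFEM or the porous media setting) and keeps the modes that capture 99.9% of the energy.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 200, 60
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 1.0, nt)
# Synthetic snapshot matrix: columns are solution states at successive times.
S = (np.outer(np.sin(np.pi * x), np.cos(2.0 * np.pi * t))
     + 0.3 * np.outer(np.sin(3.0 * np.pi * x), np.sin(4.0 * np.pi * t))
     + 1e-3 * rng.standard_normal((nx, nt)))

U, s, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% of the energy
S_rom = (U[:, :r] * s[:r]) @ Vt[:r]              # rank-r POD reconstruction
rel_err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
```

    Since the synthetic field has exactly two coherent structures, the energy criterion selects r = 2 modes and the reduced model reproduces the snapshots up to the noise floor.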

  15. Approximate Analytic Solutions for the Two-Phase Stefan Problem Using the Adomian Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Qin

    2014-01-01

    Full Text Available An Adomian decomposition method (ADM is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
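
    For a linear problem the Adomian recursion reduces to successive integration, which makes the mechanics easy to show. The sketch below applies the recursion to the textbook problem u' = u, u(0) = 1, so the partial sums rebuild exp(x); it conveys only the flavor of ADM, while the two-phase Stefan problem in the paper additionally requires Adomian polynomials and moving-boundary handling. All function names are invented.

```python
import math

def integrate(poly):
    """Antiderivative (zero constant term) of [a0, a1, ...] ~ a0 + a1*x + ..."""
    return [0.0] + [a / (i + 1) for i, a in enumerate(poly)]

def adm_series(n_terms):
    """Adomian recursion for u' = u, u(0) = 1: u0 = 1, u_{k+1} = int u_k dx."""
    term = [1.0]                        # u0
    total = [0.0] * (n_terms + 1)
    for _ in range(n_terms):
        for i, a in enumerate(term):
            total[i] += a               # accumulate the partial sum
        term = integrate(term)          # next Adomian term: x^k / k!
    return total

def horner(coeffs, x):
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

u = adm_series(15)                      # 15 Adomian terms -> degree-14 polynomial
val = horner(u, 1.0)                    # should approximate e = exp(1)
```

    Fifteen terms already match exp(1) to about twelve digits, illustrating the fast convergence the abstract reports for the polynomial approximations.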

  16. Self-decomposition of radiochemicals. Principles, control, observations and effects

    International Nuclear Information System (INIS)

    Evans, E.A.

    1976-01-01

    The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)

  17. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives and the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of an empirical approach.

  18. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.

  19. Frequency filtering decompositions for unsymmetric matrices and matrices with strongly varying coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, C.

    1996-12-31

    In 1992, Wittum introduced the frequency filtering decompositions (FFD), which yield a fast method for the iterative solution of large systems of linear equations. Based on this method, the tangential frequency filtering decompositions (TFFD) have been developed. The TFFD allow the robust and efficient treatment of matrices with strongly varying coefficients. The existence and the convergence of the TFFD can be shown for symmetric and positive definite matrices. For a large class of matrices, it is possible to prove that the convergence rate of the TFFD and of the FFD is independent of the number of unknowns. For both methods, schemes for the construction of frequency filtering decompositions for unsymmetric matrices have been developed. Since, in contrast to Wittum's FFD, the TFFD needs only one test vector, an adaptive test vector can be used. The TFFD with respect to the adaptive test vector can be combined with other iterative methods, e.g. multi-grid methods, in order to improve the robustness of these methods. The frequency filtering decompositions have been successfully applied to the problem of the decontamination of a heterogeneous porous medium by flushing.

  20. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with a complex design structure. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology

  1. Combined effects of leaf litter and soil microsite on decomposition process in arid rangelands.

    Science.gov (United States)

    Carrera, Analía Lorena; Bertiller, Mónica Beatriz

    2013-01-15

    The objective of this study was to analyze the combined effects of leaf litter quality and soil properties on litter decomposition and soil nitrogen (N) mineralization at conserved (C) and disturbed by sheep grazing (D) vegetation states in arid rangelands of the Patagonian Monte. It was hypothesized that spatial differences in soil inorganic-N levels have larger impact on decomposition processes of non-recalcitrant than recalcitrant leaf litter (low and high concentration of secondary compounds, respectively). Leaf litter and upper soil were extracted from modal size plant patches (patch microsite) and the associated inter-patch area (inter-patch microsite) in C and D. Leaf litter was pooled per vegetation state and soil was pooled combining vegetation state and microsite. Concentrations of N and secondary compounds in leaf litter and total and inorganic-N in soil were assessed at each pooled sample. Leaf litter decay and soil N mineralization at microsites of C and D were estimated in 160 microcosms incubated at field capacity (16 month). C soils had higher total N than D soils (0.58 and 0.41 mg/g, respectively). Patch soil of C and inter-patch soil of D exhibited the highest values of inorganic-N (8.8 and 8.4 μg/g, respectively). Leaf litter of C was less recalcitrant and decomposed faster than that of D. Non-recalcitrant leaf litter decay and induced soil N mineralization had larger variation among microsites (coefficients of variation = 25 and 41%, respectively) than recalcitrant leaf litter (coefficients of variation = 12 and 32%, respectively). Changes in the canopy structure induced by grazing disturbance increased leaf litter recalcitrance, and reduced litter decay and soil N mineralization, independently of soil N levels. This highlights the importance of the combined effects of soil and leaf litter properties on N cycling probably with consequences for vegetation reestablishment and dynamics, rangeland resistance and resilience with implications

  2. Proper generalized decompositions an introduction to computer implementation with Matlab

    CERN Document Server

    Cueto, Elías; Alfaro, Icíar

    2016-01-01

    This book is intended to help researchers overcome the entrance barrier to Proper Generalized Decomposition (PGD), by providing a valuable tool to begin the programming task. Detailed Matlab codes are included for every chapter in the book, in which the theory previously described is translated into practice. Examples include parametric problems, non-linear model order reduction and real-time simulation, among others. Proper Generalized Decomposition (PGD) is a method for numerical simulation in many fields of applied science and engineering. As a generalization of Proper Orthogonal Decomposition or Principal Component Analysis to an arbitrary number of dimensions, PGD is able to provide the analyst with very accurate solutions for problems defined in high dimensional spaces, parametric problems and even real-time simulation.

  3. rCUR: an R package for CUR matrix decomposition

    Directory of Open Access Journals (Sweden)

    Bodor András

    2012-05-01

    Full Text Available Abstract Background Many methods for dimensionality reduction of large data sets, such as those generated in microarray studies, boil down to the Singular Value Decomposition (SVD). Although singular vectors associated with the largest singular values have strong optimality properties and can often be quite useful as a tool to summarize the data, they are linear combinations of up to all of the data points, and thus it is typically quite hard to interpret those vectors in terms of the application domain from which the data are drawn. Recently, an alternative dimensionality reduction paradigm, CUR matrix decompositions, has been proposed to address this problem and has been applied to genetic and internet data. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Since they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn. Results We present an implementation to perform CUR matrix decompositions, in the form of a freely available, open source R-package called rCUR. This package will help users to perform CUR-based analysis on large-scale data, such as those obtained from different high-throughput technologies, in an interactive and exploratory manner. We show two examples that illustrate how CUR-based techniques make it possible to reduce significantly the number of probes, while at the same time maintaining major trends in data and keeping the same classification accuracy. Conclusions The package rCUR provides functions for the users to perform CUR-based matrix decompositions in the R environment. In gene expression studies, it gives an additional way of analysis of differential expression and discriminant gene selection based on the use of statistical leverage scores. These scores, which have been used historically in diagnostic regression
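
    A deterministic variant of a CUR decomposition, picking the columns and rows with the largest rank-k leverage scores, fits in a few lines of NumPy. This is a simplified sketch of the general idea only (rCUR itself offers several selection strategies, in R); the function and test matrix here are invented.

```python
import numpy as np

def cur(A, k, c, r):
    """CUR via top-k leverage scores: C and R are actual columns/rows of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    col_lev = (Vt[:k] ** 2).sum(axis=0) / k       # column leverage scores
    row_lev = (U[:, :k] ** 2).sum(axis=1) / k     # row leverage scores
    cols = np.argsort(col_lev)[::-1][:c]
    rows = np.argsort(row_lev)[::-1][:r]
    C, R = A[:, cols], A[rows, :]
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 6)) @ rng.standard_normal((6, 40))   # rank-6 matrix
C, Umid, R = cur(A, k=6, c=10, r=10)
rel_err = np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A)
```

    Because C and R are genuine columns and rows of the data matrix, a practitioner can read them directly (e.g. as actual probes or samples), which is exactly the interpretability argument the abstract makes; for this exactly rank-6 matrix the reconstruction is essentially exact.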

  4. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
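The forward Gauss-Seidel sweep on a tridiagonal system can be illustrated with scalars standing in for the blocks; this is a simplified analogue of the block GS step on the DTOC optimality system, not the paper's solver, and the test matrix is made up.

```python
def gauss_seidel_tridiag(a, b, c, rhs, sweeps=50):
    """Forward Gauss-Seidel sweeps on a tridiagonal system
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = rhs[i]
    (scalar analogue of a forward block GS step)."""
    n = len(rhs)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = rhs[i]
            if i > 0:
                s -= a[i] * x[i - 1]   # uses the already-updated neighbour
            if i < n - 1:
                s -= c[i] * x[i + 1]   # uses the previous sweep's value
            x[i] = s / b[i]
    return x

# Diagonally dominant test problem: -x[i-1] + 4*x[i] - x[i+1] = 1.
n = 6
a, b, c = [-1.0] * n, [4.0] * n, [-1.0] * n
x = gauss_seidel_tridiag(a, b, c, [1.0] * n)
residual = max(abs((a[i] * x[i - 1] if i > 0 else 0.0) + b[i] * x[i]
                   + (c[i] * x[i + 1] if i < n - 1 else 0.0) - 1.0)
               for i in range(n))
print(residual)  # tiny after 50 sweeps
```

For a diagonally dominant matrix the sweep converges on its own; in the paper's setting the same sweep is instead wrapped as a preconditioner inside a Krylov method.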

  5. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for designers to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems that raises design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology

  6. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    Science.gov (United States)

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  7. Dynamic mode decomposition for plasma diagnostics and validation

    Science.gov (United States)

    Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.

    2018-05-01

    We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
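In its simplest, rank-1 form, the regression underlying DMD reduces to a least-squares fit of a single growth factor between successive snapshots. The sketch below is illustrative only (full DMD works on snapshot matrices via an SVD and yields many modes and eigenvalues); the synthetic signal is made up.

```python
def scalar_dmd_eigenvalue(snapshots):
    """Least-squares fit of x[k+1] = a * x[k]: the scalar analogue of the
    DMD operator A = Y X^+ on snapshot matrices X and Y."""
    x, y = snapshots[:-1], snapshots[1:]
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# A synthetic decaying mode x[k] = 0.9**k is recovered exactly.
data = [0.9 ** k for k in range(20)]
a = scalar_dmd_eigenvalue(data)
print(a)  # ~0.9: the DMD eigenvalue encodes the decay rate per time step
```

In the multi-dimensional case each DMD eigenvalue plays this role for one spatial mode, which is how the method associates spatial structures with periodic or decaying temporal behavior.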

  8. Two Notes on Discrimination and Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Helena Skyt

    1998-01-01

    1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables. The contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
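The two-fold Oaxaca-Blinder decomposition itself is a short computation. Below is a minimal sketch with one regressor and made-up data, using group A's coefficients as the reference; the explained and unexplained parts sum exactly to the raw mean gap.

```python
def ols_1var(x, y):
    """OLS of y on a constant and one regressor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def oaxaca_blinder(xa, ya, xb, yb):
    """Two-fold decomposition of mean(ya) - mean(yb) with group A
    coefficients as reference: (explained by endowments, unexplained)."""
    ia, sa = ols_1var(xa, ya)
    ib, sb = ols_1var(xb, yb)
    mxa, mxb = sum(xa) / len(xa), sum(xb) / len(xb)
    explained = (mxa - mxb) * sa
    unexplained = (ia - ib) + mxb * (sa - sb)
    return explained, unexplained

# Made-up data: equal endowments, different returns.
xa, ya = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.1, 7.9]
xb, yb = [1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]
e, u = oaxaca_blinder(xa, ya, xb, yb)
gap = sum(ya) / len(ya) - sum(yb) / len(yb)
print(e, u, gap)  # the two parts sum exactly to the raw mean gap
```

The reference-group problem the note raises appears once the regressor is an indicator: recoding which category is the baseline shifts mass between the intercept term and the coefficient terms, changing the reported contributions.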

  9. Exterior domain problems and decomposition of tensor fields in weighted Sobolev spaces

    OpenAIRE

    Schwarz, Günter

    1996-01-01

    The Hodge decomposition is a useful tool for tensor analysis on compact manifolds with boundary. This paper aims at generalising the decomposition to exterior domains G ⊂ IR^n. Let L²_a Ω^k(G) be the space of weighted square integrable differential forms with weight function (1 + |x|²)^a, let d_a be the weighted perturbation of the exterior derivative and δ_a its adjoint. Then L²_a Ω^k(G) splits into the orthogonal sum of the subspaces of the d_a-exact forms with vanishi…

  10. Assessment of a new method for the analysis of decomposition gases of polymers by combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    Science.gov (United States)

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) or with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described to adsorb the decomposition products under controlled conditions in TGA on solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed with thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. A TFETI domain decomposition solver for elastoplastic problems

    Czech Academy of Sciences Publication Activity Database

    Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.

    2014-01-01

    Roč. 231, č. 1 (2014), s. 634-653 ISSN 0096-3003 Institutional support: RVO:68145535 Keywords : elastoplasticity * Total FETI domain decomposition method * Finite element method * Semismooth Newton method Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014 http://ac.els-cdn.com/S0096300314000253/1-s2.0-S0096300314000253-main.pdf?_tid=33a29cf4-996a-11e3-8c5a-00000aacb360&acdnat=1392816896_4584697dc26cf934dcf590c63f0dbab7

  12. Decomposition techniques in mathematical programming engineering and science applications

    CERN Document Server

    Conejo, Antonio J; Minguez, Roberto; Garcia-Bertrand, Raquel

    2006-01-01

    Optimization plainly dominates the design, planning, operation, and control of engineering systems. This is a book on optimization that considers particular cases of optimization problems, those with a decomposable structure that can be advantageously exploited. Those decomposable optimization problems are ubiquitous in engineering and science applications. The book considers problems with both complicating constraints and complicating variables, and analyzes linear and nonlinear problems, with and without integer variables. The decomposition techniques analyzed include Dantzig-Wolfe, Benders, Lagrangian relaxation, Augmented Lagrangian decomposition, and others. Heuristic techniques are also considered. Additionally, a comprehensive sensitivity analysis for characterizing the solution of optimization problems is carried out. This material is particularly novel and of high practical interest. This book is built based on many clarifying, illustrative, and computational examples, which facilitate the learning p…

  13. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method would be more prominent in domains of large dimension. For the increased number of collocation points, it may also lead to the ill-conditioning problem. In this study, overlapping domain decomposition of the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate the influence of the combination technique on the required B-spline basis orders with respect to the obtained accuracy. It was shown that the combination method produced higher accuracy with B-spline bases of much lower order than needed in the initial method. Hence, the approximation stability of the B-spline collocation method was also increased.
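The multiplicative Schwarz idea can be illustrated with plain finite differences instead of B-spline collocation (an assumption made only to keep the sketch short): two overlapping subdomain solves are alternated for -u'' = 1 on (0,1) with homogeneous Dirichlet data, whose exact solution is u(x) = x(1-x)/2. All grid and subdomain sizes are made up.

```python
def thomas(a, b, c, d):
    """Tridiagonal solve (Thomas algorithm); a sub-, b main, c super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = 1 on (0,1), u(0) = u(1) = 0, exact solution u(x) = x(1-x)/2.
n = 19                                  # interior grid points
h = 1.0 / (n + 1)
u = [0.0] * (n + 2)                     # global iterate incl. boundary values
dom1, dom2 = list(range(1, 13)), list(range(8, n + 1))   # overlapping pieces
for _ in range(30):                     # multiplicative Schwarz sweeps
    for idx in (dom1, dom2):
        m = len(idx)
        rhs = [h * h] * m
        rhs[0] += u[idx[0] - 1]         # Dirichlet data from current iterate
        rhs[-1] += u[idx[-1] + 1]
        sol = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, rhs)
        for k, i in enumerate(idx):
            u[i] = sol[k]
err = max(abs(u[i] - 0.5 * i * h * (1 - i * h)) for i in range(n + 2))
print(err)  # the Schwarz iterates converge to the exact nodal values
```

The contraction rate improves with the amount of overlap, which is the same trade-off the combination with B-spline collocation exploits.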

  14. Domain decomposition methods for fluid dynamics

    International Nuclear Information System (INIS)

    Clerc, S.

    1995-01-01

    A domain decomposition method for steady-state, subsonic fluid dynamics calculations is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given to the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs

  15. Decomposition of atrazine traces in water by combination of non-thermal electrical discharge and adsorption on nanofiber membrane.

    Science.gov (United States)

    Vanraes, Patrick; Willems, Gert; Daels, Nele; Van Hulle, Stijn W H; De Clerck, Karen; Surmont, Pieter; Lynen, Frederic; Vandamme, Jeroen; Van Durme, Jim; Nikiforov, Anton; Leys, Christophe

    2015-04-01

    In recent decades, several types of persistent substances have been detected in the aquatic environment at very low concentrations. Unfortunately, conventional water treatment processes are not able to remove these micropollutants. As such, advanced treatment methods are required to meet both current and anticipated maximally allowed concentrations. Plasma discharge in contact with water is a promising new technology, since it produces a wide spectrum of oxidizing species. In this study, a new type of reactor is tested, in which decomposition by atmospheric pulsed direct barrier discharge (pDBD) plasma is combined with micropollutant adsorption on a nanofiber polyamide membrane. Atrazine is chosen as the model micropollutant with an initial concentration of 30 μg/L. While the H2O2 and O3 production in the reactor is not influenced by the presence of the membrane, there is a significant increase in atrazine decomposition when the membrane is added. With the membrane, 85% atrazine removal can be obtained, compared to only 61% removal without the membrane under the same experimental conditions. The by-products of atrazine decomposition identified by HPLC-MS are deethylatrazine and ammelide. Formation of these by-products is more pronounced when the membrane is added. These results indicate the synergistic effect of plasma discharge and pollutant adsorption, which is attractive for future applications in water treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
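The core Schur complement elimination can be sketched on a tiny dense system with a single interface unknown; this is illustrative only (the paper's setting is large sparse subdomain/interface blocks with approximate factorizations), and all numbers are made up.

```python
def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= fac * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Block system [[A, b], [c^T, d]] [x; y] = [f; g]; y is the interface unknown.
Ablk = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 0.0]          # coupling column
c = [0.0, 2.0]          # coupling row
d, f, g = 5.0, [1.0, 2.0], 3.0

Ainv_b = solve(Ablk, b)
Ainv_f = solve(Ablk, f)
S = d - sum(ci * vi for ci, vi in zip(c, Ainv_b))         # Schur complement
y = (g - sum(ci * vi for ci, vi in zip(c, Ainv_f))) / S   # interface solve
x = solve(Ablk, [fi - bi * y for fi, bi in zip(f, b)])    # subdomain back-solve

# Cross-check against solving the assembled 3x3 system directly.
direct = solve([[4.0, 1.0, 1.0], [1.0, 3.0, 0.0], [0.0, 2.0, 5.0]],
               [1.0, 2.0, 3.0])
print(x + [y], direct)  # both give x = [0, 2/3], y = 1/3
```

In a domain-decomposition preconditioner the subdomain solves (the applications of A^{-1} here) happen independently in parallel, and only the small interface system S couples them.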

  17. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of the problem-solving steps of Polya. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  18. Focal decompositions for linear differential equations of the second order

    Directory of Open Access Journals (Sweden)

    L. Birbrair

    2003-01-01

    two-point problems to itself such that the image of the focal decomposition associated to the first equation is a focal decomposition associated to the second one. In this paper, we present a complete classification for linear second-order equations with respect to this equivalence relation.

  19. Image reconstruction of fluorescent molecular tomography based on the tree structured Schur complement decomposition

    Directory of Open Access Journals (Sweden)

    Wang Jiajun

    2010-05-01

    Full Text Available Abstract Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to improve the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the strategy of the tree structured Schur complement decomposition obviously outperforms the previous methods, such as the conventional Conjugate-Gradient (CG) and the Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly improve the ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.

  20. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
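The residual-feedback idea can be illustrated with two overlapping Gaussian peaks of known shape and unknown heights, a linear simplification of the paper's nonlinear curve fitting: naive one-peak fits ignore the overlap, and repeatedly re-fitting the residual corrects them. Peak positions, widths and heights are made up.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_height(signal, profile):
    """Least-squares height of one known peak profile against a signal."""
    return (sum(s * q for s, q in zip(signal, profile))
            / sum(q * q for q in profile))

xs = [i * 0.1 for i in range(101)]          # made-up wavelength grid
p1 = [gauss(x, 4.0, 0.8) for x in xs]       # two overlapping peak shapes
p2 = [gauss(x, 6.0, 0.8) for x in xs]
y = [1.0 * a + 0.6 * b for a, b in zip(p1, p2)]   # true heights 1.0 and 0.6

# Step 1: naive single-peak fits, which over-count the shared overlap region.
h1, h2 = fit_height(y, p1), fit_height(y, p2)
# Step 2+: feed the fitting residual back and re-fit, repeatedly.
for _ in range(100):
    r = [yi - h1 * a - h2 * b for yi, a, b in zip(y, p1, p2)]
    h1 += fit_height(r, p1)
    h2 += fit_height(r, p2)
print(h1, h2)  # converges to the true heights (1.0, 0.6)
```

The iteration converges here because the two profiles overlap only moderately; strongly overlapped peaks slow the residual feedback down, which is the regime the paper's error compensation targets.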

  3. Split-and-Combine Singular Value Decomposition for Large-Scale Matrix

    Directory of Open Access Journals (Sweden)

    Jengnan Tzeng

    2013-01-01

    Full Text Available The singular value decomposition (SVD) is a fundamental matrix decomposition in linear algebra. It is widely applied in many modern techniques, for example, high-dimensional data visualization, dimension reduction, data mining, latent semantic analysis, and so forth. Although the SVD plays an essential role in these fields, its apparent weakness is its order-three (cubic) computational cost, which makes many modern applications infeasible, especially when the scale of the data is huge and growing. Therefore, it is imperative to develop a fast SVD method for the modern era. If the rank of the matrix is much smaller than the matrix size, there are already some fast SVD approaches. In this paper, we focus on this case but with the additional condition that the data are too large to be stored in matrix form. We will demonstrate that this fast SVD result is sufficiently accurate, and most importantly it can be derived immediately. Using this fast method, many infeasible modern techniques based on the SVD will become viable.

  4. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    Science.gov (United States)

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.

  5. Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization

    NARCIS (Netherlands)

    Simonetto, A.; Jamali-Rad, H.

    2015-01-01

    Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel
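The basic dual decomposition mechanism, without the consensus/network layer the paper adds, can be shown on a toy separable problem with one coupling constraint; the problem and step size are made up.

```python
# Toy separable problem: minimize x1**2 + x2**2 subject to x1 + x2 = 1.
# Dual decomposition: for a fixed multiplier lam each block minimizes its
# own Lagrangian term independently; lam then takes a subgradient step
# on the coupling-constraint violation.
lam, step = 0.0, 0.5
for _ in range(200):
    x1 = lam / 2.0                    # argmin_x x^2 - lam*x, solved locally
    x2 = lam / 2.0                    # the two local solves are independent
    lam += step * (1.0 - x1 - x2)     # price update from the violation
print(x1, x2, lam)  # tends to the optimum x1 = x2 = 0.5 with lam = 1
```

The local minimizations are what get distributed across nodes; recovering a feasible primal iterate from the dual sequence is exactly the issue the paper addresses.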

  6. Microbial decomposition and bio-remediation of chemical substances. Kagaku busshitsu no biseibutsu bunkai to bio remediation

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, M [Osaka University, Osaka (Japan). Faculty of Engineering

    1993-08-01

    This paper summarizes studies on the evaluation of breeding and biodegradability of decomposing bacteria in bio-remediation, and on the births and deaths of microorganisms. Structural genes in a phenol decomposition pathway were isolated by means of shotgun cloning. The extracted phe B genes were inserted into parent strains to produce combined strains for use in phenol decomposition. At 100 mg/l of phenol, the combined strains outperformed the parent strains in both decomposition and multiplication. When the phenol concentration increases, the rate-controlling process changes and this advantage is lost. Decomposition of trichloroethylene progressed quickly with combined strains derived from phe A, a phenol decomposition gene. Isolated polyvinyl alcohol (PVA) decomposing bacteria were used for PVA decomposition. As a result, it was found that microorganisms are required that utilize the intermediate low-molecular compounds for multiplication. Combined strains of E. coli C600 carrying phe B were prepared to discuss births and deaths of microorganisms in activated sludge. A number of findings were obtained. 6 refs., 10 figs.

  7. Application of Decomposition Methodology to Solve Integrated Process Design and Controller Design Problems for Reactor-Separator-Recycle System

    DEFF Research Database (Denmark)

    Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan; Gani, Rafiqul

    2010-01-01

    This paper presents the integrated process design and controller design (IPDC) for a reactor-separator-recycle (RSR) system and evaluates a decomposition methodology to solve the IPDC problem. Accordingly, the IPDC problem is solved by decomposing it into four hierarchical stages: (i) pre-analysis, (ii) design analysis, (iii) controller design analysis, and (iv) final selection and verification. The methodology makes use of thermodynamic-process insights and the reverse design approach to arrive at the final process-controller design decisions. The developed methodology is illustrated through the design of a RSR system involving consecutive reactions, A -> B -> C, and shown to provide effective solutions that satisfy design, control and cost criteria. The advantage of the proposed methodology is that it is systematic, makes use of thermodynamic-process knowledge and provides valuable insights.

  8. Sustainable fuel production by thermocatalytic decomposition of methane – A review

    Directory of Open Access Journals (Sweden)

    K. Srilatha

    2017-12-01

    Full Text Available Thermocatalytic Decomposition of Methane (TCD) is a completely green single-step technology for producing hydrogen and carbon nanomaterials. This paper reviews laboratory-scale research on TCD, specifically recent advances such as the co-feeding effect and catalyst regeneration for enhancing the productivity of the entire process. Although remarkable success has been achieved on the laboratory scale, TCD for greenhouse-gas-free (GHG-free) hydrogen production is still in its infancy. The necessity for commercialization of TCD is greater than ever given the present massive GHG emissions. TCD has generally been studied over several types of catalysts, for example mono-, bi- and trimetallic catalysts, combinations of metal and metal oxide, and carbon and metal-doped carbon catalysts. Catalyst deactivation is the main problem in the TCD process. Regeneration of the catalyst and co-feeding of methane with another hydrocarbon are the two main solutions proposed to overcome the deactivation problem. A higher amount of co-fed hydrocarbon produces in situ a larger amount of highly active carbon deposits, which support further methane decomposition to produce extra hydrogen. The methane conversion rate increases with increasing temperature and decreases with the flow rate in the co-feeding process, in a comparable manner as observed in normal TCD. The presence of co-components in the post-reaction stream is an important challenge in co-feeding and regeneration. Keywords: Hydrogen, Catalysts, Thermocatalytic decomposition

  9. Primary decomposition of zero-dimensional ideals over finite fields

    Science.gov (United States)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
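The reduction to univariate root finding over the ground field can be illustrated in miniature: over GF(p), gcd(f, x^p - x) extracts the product of the distinct linear factors of f, whose roots are then found by trial. This sketch (with a made-up polynomial over GF(7)) is illustrative only and far from the paper's full multivariate algorithm.

```python
def poly_trim(f, p):
    f = [c % p for c in f]
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g, p):
    """Remainder of f modulo g over GF(p); coefficients lowest-degree first."""
    f, g = poly_trim(f, p), poly_trim(g, p)
    inv = pow(g[-1], p - 2, p)          # inverse of g's leading coefficient
    while len(f) >= len(g) and f != [0]:
        coef = f[-1] * inv % p
        shift = len(f) - len(g)
        for i in range(len(g)):
            f[shift + i] = (f[shift + i] - coef * g[i]) % p
        f = poly_trim(f, p)
    return f

def poly_gcd(f, g, p):
    f, g = poly_trim(f, p), poly_trim(g, p)
    while g != [0]:
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]     # normalised to be monic

def poly_mulmod(a, b, f, p):
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    return poly_mod(prod, f, p)

def x_pow_p_mod(f, p):
    """Frobenius image x^p mod f by square-and-multiply."""
    result, base, e = [1], [0, 1], p
    while e:
        if e & 1:
            result = poly_mulmod(result, base, f, p)
        base = poly_mulmod(base, base, f, p)
        e >>= 1
    return result

p = 7
f = [2, 4, 3, 4, 1]                 # (x-1)(x-2)(x^2+1) over GF(7)
xpx = x_pow_p_mod(f, p)
xpx = xpx + [0] * max(0, 2 - len(xpx))
xpx[1] = (xpx[1] - 1) % p           # x^p - x
g = poly_gcd(f, xpx, p)             # product of distinct linear factors of f
roots = [a for a in range(p)
         if sum(c * pow(a, i, p) for i, c in enumerate(g)) % p == 0]
print(g, roots)  # g = (x-1)(x-2) mod 7, i.e. [2, 4, 1]; roots [1, 2]
```

The irreducible quadratic factor x^2 + 1 contributes no GF(7) roots and is stripped out by the gcd, which is the role the Frobenius map plays in the abstract's invariant-subspace argument.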

  10. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
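The width-exponential dynamic programming simplifies nicely in the width-1 case, where the tree decomposition is the tree itself. Below is a minimal sketch for maximum weighted independent set on a tree (illustrative only; INDDGO handles general tree decompositions), with a made-up path graph as the example.

```python
def max_weight_independent_set(tree, weights, root=0):
    """Bottom-up DP over a tree: for each vertex keep the best subtree
    weight with the vertex excluded / included (the width-1 analogue of
    tree-decomposition dynamic programming)."""
    best = {}
    def dfs(v, parent):
        incl, excl = weights[v], 0
        for w in tree[v]:
            if w == parent:
                continue
            dfs(w, v)
            incl += best[w][0]        # if v is in the set, children are out
            excl += max(best[w])      # otherwise children choose freely
        best[v] = (excl, incl)
    dfs(root, None)
    return max(best[root])

# Path 0-1-2-3-4 with made-up weights; the optimum picks vertices 0, 2, 4.
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
weights = [3, 2, 5, 1, 4]
print(max_weight_independent_set(tree, weights))  # 12
```

For width w, the per-vertex table grows from these 2 states to up to 2^(w+1) states per bag, which is exactly where the exponential dependence on decomposition width comes from.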

  11. Tensor decompositions for the analysis of atomic resolution electron energy loss spectra

    Energy Technology Data Exchange (ETDEWEB)

    Spiegelberg, Jakob; Rusz, Ján [Department of Physics and Astronomy, Uppsala University, Box 516, S-751 20 Uppsala (Sweden); Pelckmans, Kristiaan [Department of Information Technology, Uppsala University, Box 337, S-751 05 Uppsala (Sweden)

    2017-04-15

    A selection of tensor decomposition techniques is presented for the detection of weak signals in electron energy loss spectroscopy (EELS) data. The focus of the analysis lies on the correct representation of the simulated spatial structure. An analysis scheme for EEL spectra combining two-dimensional and n-way decomposition methods is proposed. In particular, the performance of robust principal component analysis (ROBPCA), Tucker Decompositions using orthogonality constraints (Multilinear Singular Value Decomposition (MLSVD)) and Tucker decomposition without imposed constraints, canonical polyadic decomposition (CPD) and block term decompositions (BTD) on synthetic as well as experimental data is examined. - Highlights: • A scheme for compression and analysis of EELS or EDX data is proposed. • Several tensor decomposition techniques are presented for BSS on hyperspectral data. • Robust PCA and MLSVD are discussed for denoising of raw data.

  12. On the Use of Ashenhurst Decomposition Chart as an Alternative to ...

    African Journals Online (AJOL)

    ... the Ashenhurst decomposition chart is shown to be a mapping technique which solves this selection problem and enables the design of logic circuits with desirable attributes using multiplexers. The Ashenhurst decomposition chart also serves as a bridging technique between the map based and algorithmic based digital ...

  13. Displacement decomposition and parallelisation of the PCG method for elasticity problems

    Czech Academy of Sciences Publication Activity Database

    Blaheta, Radim; Jakl, Ondřej; Starý, Jiří

    1., 2/3/4 (2005), s. 183-191 ISSN 1742-7185 R&D Projects: GA AV ČR(CZ) IBS3086102 Institutional research plan: CEZ:AV0Z30860518 Keywords : finite element method * preconditioned conjugate gradient method * displacement decomposition Subject RIV: BA - General Mathematics

  14. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the solving efficiency and the solution quality of the sub-problems, a detection method for multi-bottleneck machines based on the critical path is proposed, by which the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. According to the principle of "the bottleneck leads the performance of the whole manufacturing system" in the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve the solving efficiency. Findings: In the process of constructing the sub-problems, some operations in the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization. This strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed. There is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be reconsidered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  15. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.

  16. Thermal decomposition of ammonium hexachloroosmate

    DEFF Research Database (Denmark)

    Asanova, T I; Kantor, Innokenty; Asanov, I. P.

    2016-01-01

    Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reduction atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to meta...

  17. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.

  18. Bimetallic catalysts for HI decomposition in the iodine-sulfur thermochemical cycle

    International Nuclear Information System (INIS)

    Wang Laijun; Hu Songzhi; Xu Lufei; Li Daocai; Han Qi; Chen Songzhe; Zhang Ping; Xu Jingming

    2014-01-01

    Among the different kinds of thermochemical water-splitting cycles, the iodine-sulfur (IS) cycle has attracted more and more interest because it is one of the promising candidates for economical and massive hydrogen production. However, there still exist some scientific and technical problems to be solved before industrialization of the IS process. One such problem is the catalytic decomposition of hydrogen iodide. Although activated-carbon-supported platinum has been verified to exhibit excellent performance for HI decomposition, it is very expensive and prone to agglomeration under the harsh operating conditions. In order to decrease the cost and increase the stability of the catalysts for HI decomposition, a series of bimetallic catalysts were prepared and studied at INET. This paper summarizes our recent research advances on the bimetallic catalysts (Pt-Pd, Pd-Ir and Pt-Ir) for HI decomposition. In the course of the study, the physical properties, structure, and morphology of the catalysts were characterized by specific surface area measurements, X-ray diffraction, and transmission electron microscopy, respectively. The catalytic activity for HI decomposition was investigated in a fixed-bed reactor under atmospheric pressure. The results show that, owing to their higher activity and better stability, the activated-carbon-supported bimetallic catalysts are more promising candidates than the monometallic Pt catalyst for HI decomposition in the IS thermochemical cycle. (author)

  19. Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms

    KAUST Repository

    Efendiev, Yalchin

    2012-02-22

    An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
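A minimal numerical sketch of the additive overlapping Schwarz idea described above, for a 1D Poisson matrix with two overlapping subdomains and exact local solves (problem size, overlap, and all names are illustrative; the abstract's spectral coarse-space construction is omitted):

```python
import numpy as np

def poisson1d(n):
    """Standard 1D diffusion stencil: tridiag(-1, 2, -1)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz(A, subdomains):
    """One-level additive Schwarz: M^{-1} v = sum_i R_i^T A_i^{-1} R_i v."""
    locals_ = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    def M_inv(v):
        z = np.zeros_like(v)
        for idx, Ai_inv in locals_:
            z[idx] += Ai_inv @ v[idx]        # exact solve on each subdomain
        return z
    return M_inv

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n, overlap = 40, 4
A = poisson1d(n)
b = np.ones(n)
subs = [np.arange(0, n // 2 + overlap), np.arange(n // 2 - overlap, n)]
M_inv = additive_schwarz(A, subs)
x, iters = pcg(A, b, M_inv)
print(iters, np.linalg.norm(A @ x - b))
```

The abstract's contribution is precisely what this toy omits: a coarse space, built from local eigenfunctions, that keeps the iteration count bounded as the contrast and mesh are varied.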

  20. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    Science.gov (United States)

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve identification of substances from different spectral information. Decomposition of the mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net with a two-layer structure fits the nonlinear transform between energy projection and basis material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN costs only 0.4 s to generate a decomposition solution of 360 × 512 size, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solve the nonlinear problem in DECT.

  1. Solution of the porous media equation by Adomian's decomposition method

    International Nuclear Information System (INIS)

    Pamuk, Serdal

    2005-01-01

    The particular exact solutions of the porous media equation, which often occurs in nonlinear problems of heat and mass transfer and in biological systems, are obtained using Adomian's decomposition method. Numerical comparison of the particular solutions indicates that there is very good agreement between the numerical solutions and the particular exact solutions in terms of efficiency and accuracy.
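The flavor of Adomian's decomposition method can be shown on a toy nonlinear ODE rather than the porous media equation itself: for u' = u², u(0) = 1, the Adomian polynomials of the nonlinearity N(u) = u² generate the series 1 + t + t² + ⋯, which is the expansion of the exact solution 1/(1 − t). A sketch with SymPy (illustrative only, not the paper's computation):

```python
import sympy as sp

t = sp.symbols('t')

def adomian_polynomials(N, u_terms, n):
    """A_k = (1/k!) d^k/d lam^k N(sum_i u_i lam^i) evaluated at lam = 0."""
    lam = sp.symbols('lam')
    u_lam = sum(ui * lam**i for i, ui in enumerate(u_terms))
    return [sp.diff(N(u_lam), lam, k).subs(lam, 0) / sp.factorial(k)
            for k in range(n)]

# Toy problem: u' = u**2, u(0) = 1, exact solution 1/(1 - t).
N = lambda u: u**2
u = [sp.Integer(1)]                      # u_0 comes from the initial condition
for k in range(4):
    A_k = adomian_polynomials(N, u, k + 1)[k]
    u.append(sp.integrate(A_k, (t, 0, t)))   # u_{k+1} = integral of A_k

series = sp.expand(sum(u))
print(series)   # matches the degree-4 Taylor expansion of 1/(1 - t)
```

Each iterate is obtained by applying the (easily inverted) linear operator, here a simple integration, to the Adomian polynomial of the previous terms, which is exactly the structure the method exploits for the porous media equation.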

  2. Spectral decomposition of MR spectroscopy signatures with use of eigenanalysis

    International Nuclear Information System (INIS)

    Hearshen, D.O.; Windham, J.P.; Roebuck, J.R.; Helpern, J.A.

    1989-01-01

    Partial-volume contamination and overlapping resonances are common problems in whole-body MR spectroscopy and can affect absolute or relative intensity and chemical-shift measurements. One technique, based on solution of constrained eigenvalue problems, treats spectra as N-dimensional signatures and minimizes contributions of undesired signatures while maximizing contributions of desired signatures in compromised spectra. Computer simulations and both high-resolution (400-MHz) and whole-body (63.8-MHz) phantom studies tested accuracy and reproducibility of spectral decomposition. Results demonstrated excellent decomposition and good reproducibility within certain constraints. The authors conclude that eigenanalysis may improve quantitation of spectra without introducing operator bias

  3. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  4. Efficient exact optimization of multi-objective redundancy allocation problems in series-parallel systems

    International Nuclear Information System (INIS)

    Cao, Dingzhou; Murat, Alper; Chinnam, Ratna Babu

    2013-01-01

    This paper proposes a decomposition-based approach to exactly solve the multi-objective Redundancy Allocation Problem for series-parallel systems. Redundancy allocation problem is a form of reliability optimization and has been the subject of many prior studies. The majority of these earlier studies treat redundancy allocation problem as a single objective problem maximizing the system reliability or minimizing the cost given certain constraints. The few studies that treated redundancy allocation problem as a multi-objective optimization problem relied on meta-heuristic solution approaches. However, meta-heuristic approaches have significant limitations: they do not guarantee that Pareto points are optimal and, more importantly, they may not identify all the Pareto-optimal points. In this paper, we treat redundancy allocation problem as a multi-objective problem, as is typical in practice. We decompose the original problem into several multi-objective sub-problems, efficiently and exactly solve sub-problems, and then systematically combine the solutions. The decomposition-based approach can efficiently generate all the Pareto-optimal solutions for redundancy allocation problems. Experimental results demonstrate the effectiveness and efficiency of the proposed method over meta-heuristic methods on a numerical example taken from the literature.
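For very small instances, the exhaustive version of the idea, enumerate allocations, evaluate both objectives, and keep the non-dominated points, fits in a few lines; the component data below are invented, and the paper's decomposition machinery is deliberately omitted:

```python
from itertools import product

# Tiny series-parallel redundancy allocation: three subsystems in series,
# each with 1-3 redundant components in parallel. Data are illustrative.
r = [0.8, 0.9, 0.7]      # component reliability per subsystem
c = [3.0, 5.0, 2.0]      # component cost per subsystem

def evaluate(alloc):
    """System reliability (maximize) and total cost (minimize)."""
    reliability = 1.0
    for ri, ki in zip(r, alloc):
        reliability *= 1 - (1 - ri) ** ki    # parallel block survives if any copy works
    cost = sum(ci * ki for ci, ki in zip(c, alloc))
    return reliability, cost

points = [(evaluate(a), a) for a in product(range(1, 4), repeat=3)]

def dominates(p, q):
    # p dominates q if it is at least as reliable and at most as costly, and differs
    return p[0] >= q[0] and p[1] <= q[1] and p != q

pareto = [(obj, a) for obj, a in points
          if not any(dominates(other, obj) for other, _ in points)]
for (rel, cost), a in sorted(pareto, key=lambda e: e[0][1]):
    print(a, round(rel, 4), cost)
```

The paper's contribution is to obtain the same complete Pareto front without full enumeration, by decomposing into sub-problems that are solved exactly and recombined.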

  5. Dual decomposition for parsing with non-projective head automata

    OpenAIRE

    Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander

    2010-01-01

    This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...

  6. DECOMPOSITION OF TARS IN MICROWAVE PLASMA – PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    Mateusz Wnukowski

    2014-07-01

    Full Text Available The paper addresses the main problem connected with biomass gasification - the presence of tar in the product gas. This paper presents preliminary results of tar decomposition in a microwave plasma reactor and gives a basic insight into the construction and operation of the plasma reactor. During the experiment, tests were carried out on toluene as a tar surrogate. Nitrogen was used both as a carrier gas for toluene and as the plasma agent. Flow rates of the gases and the microwave generator’s power were kept constant throughout the experiment. The results showed that the decomposition of toluene was effective: the decomposition efficiency exceeded 95%. The main products of tar decomposition were light hydrocarbons and soot. The article also outlines plans for further research on tar removal from the product gas.

  7. Optimization Problems on Threshold Graphs

    Directory of Open Access Journals (Sweden)

    Elena Nechita

    2010-06-01

    Full Text Available During the last three decades, different types of decompositions have been studied in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristics of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weak decomposition, and determine the density, the stability number, and the Wiener index and Wiener polynomial of threshold graphs.

  8. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...

  9. Compactly supported frames for decomposition spaces

    DEFF Research Database (Denmark)

    Nielsen, Morten; Rasmussen, Kenneth Niemann

    2012-01-01

    In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...

  10. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
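The price-coordination loop described above can be sketched on a toy quadratic power-balancing problem (all data and the step size are invented; real units would solve their local problems numerically rather than in closed form):

```python
import numpy as np

# Dual decomposition for: minimize sum_i a_i/2 * (x_i - u_i)^2
#                         subject to sum_i x_i = demand.
# A coordinator broadcasts a price lam; each unit responds with the minimizer
# of its own Lagrangian term; the price is updated from the power imbalance.

a = np.array([1.0, 2.0, 4.0])    # local cost curvatures (illustrative)
u = np.array([1.0, 2.0, 3.0])    # preferred consumption of each unit
demand = 9.0

lam, step = 0.0, 0.5
for _ in range(200):
    x = u + lam / a                      # each unit's local problem, closed form
    lam += step * (demand - x.sum())     # subgradient step on the dual variable

print(x, x.sum())   # consumptions now balance the demand
```

Each unit only ever sees the price and its own cost, which is what makes the scheme suitable for the distributed Smart Grid setting the abstract describes.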

  11. Local sensitivity analysis for inverse problems solved by singular value decomposition

    Science.gov (United States)

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and/or parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
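The CSS and PCC statistics discussed above can be computed directly from a model Jacobian. The sketch below uses a synthetic Jacobian in which two parameters are made nearly collinear, so their |PCC| approaches 1; the scaling convention shown is one common variant and not necessarily the exact one used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(50, 3))                         # synthetic d(simulated)/d(parameter)
J[:, 2] = 2.0 * J[:, 1] + 0.01 * rng.normal(size=50)  # nearly collinear parameter pair
b = np.array([1.0, 2.0, 0.5])                        # parameter values used for scaling
w = np.ones(50)                                      # observation weights

# Dimensionless scaled sensitivities, then composite scaled sensitivity per parameter.
dss = J * b * np.sqrt(w)[:, None]
css = np.sqrt((dss ** 2).mean(axis=0))

# Parameter variance-covariance (unit error variance) and correlation coefficients.
cov = np.linalg.inv(J.T @ (w[:, None] * J))
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)

print(css)
print(pcc[1, 2])   # magnitude near 1 flags the interdependent pair
```

High CSS with |PCC| near 1 is exactly the diagnostic pattern the abstract describes: the parameters are individually sensitive but cannot be estimated independently.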

  12. Spectral decomposition in advection-diffusion analysis by finite element methods

    International Nuclear Information System (INIS)

    Nickell, R.E.; Gartling, D.K.; Strang, G.

    1978-01-01

    In a recent study of the convergence properties of finite element methods in nonlinear fluid mechanics, an indirect approach was taken. A two-dimensional example with a known exact solution was chosen as the vehicle for the study, and various mesh refinements were tested in an attempt to extract information on the effect of the local Reynolds number. However, more direct approaches are usually preferred. In this study one such direct approach is followed, based upon the spectral decomposition of the solution operator. Spectral decomposition is widely employed as a solution technique for linear structural dynamics problems and can be applied readily to linear, transient heat transfer analysis; in this case, the extension to nonlinear problems is of interest. It was shown previously that spectral techniques were applicable to stiff systems of rate equations, while recent studies of geometrically and materially nonlinear structural dynamics have demonstrated the increased information content of the numerical results. The use of spectral decomposition in nonlinear problems of heat and mass transfer would be expected to yield equally increased flow of information to the analyst, and this information could include a quantitative comparison of various solution strategies, meshes, and element hierarchies
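For the linear transient case, spectral decomposition of the solution operator amounts to expanding the state in eigenvectors of the symmetric diffusion matrix and decaying each mode independently. A minimal sketch, cross-checked against explicit Euler time stepping (matrix and sizes are illustrative):

```python
import numpy as np

# Spectral (modal) solution of the semi-discrete heat equation du/dt = -A u:
# with A = V diag(lam) V^T, each eigenmode decays as exp(-lam_i * t).

n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D diffusion stencil
u0 = np.ones(n)

evals, V = np.linalg.eigh(A)                           # eigendecomposition of A

def u_modal(t):
    """Exact semi-discrete solution via the spectral decomposition."""
    return V @ (np.exp(-evals * t) * (V.T @ u0))

# Cross-check against explicit Euler with a small, stable time step.
dt, steps = 1e-3, 1000
u = u0.copy()
for _ in range(steps):
    u = u - dt * (A @ u)

print(np.max(np.abs(u_modal(dt * steps) - u)))   # small time-discretization gap
```

For nonlinear problems, the subject of the study, the operator and hence its spectrum change with the solution, which is what makes the extension of this technique interesting.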

  13. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, and particularly catalysts based on manganese oxide. It has been determined that ozone decomposition follows first-order kinetics. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
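The first-order kinetics statement is easy to make concrete: under a first-order rate law the ozone concentration decays exponentially, and the rate constant can be recovered from a linear fit to ln C versus t. The numbers below are purely illustrative, not measured values:

```python
import numpy as np

# First-order decay: dC/dt = -k C  =>  C(t) = C0 * exp(-k t),
# so ln C falls linearly in t with slope -k.

k = 0.15          # 1/s, hypothetical effective rate constant
C0 = 40.0         # ppm, hypothetical inlet ozone concentration
t = np.linspace(0.0, 30.0, 7)
C = C0 * np.exp(-k * t)

# A linear fit to ln(C) vs t recovers the rate constant from the data.
slope, intercept = np.polyfit(t, np.log(C), 1)
print(-slope)   # recovers k = 0.15
```

In a real kinetic study the same fit would be applied to measured concentrations, and deviation from linearity would signal a departure from first-order behavior.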

  14. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    Science.gov (United States)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  15. Domain Decomposition: A Bridge between Nature and Parallel Computers

    Science.gov (United States)

    1992-09-01

    B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal of Scientific and Statistical Computing, Vol. 13, 1992, pp... AD-A256 575, NASA Contractor Report 189709, ICASE Report No. 92-44: Domain Decomposition: A Bridge between Nature and Parallel Computers. ...effectively implemented on distributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algorithm), a 103,201-unknown 2D elliptic

  16. A Structural Model Decomposition Framework for Systems Health Management

    Science.gov (United States)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  17. A structural model decomposition framework for systems health management

    Science.gov (United States)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  18. Entanglement and tensor product decomposition for two fermions

    International Nuclear Information System (INIS)

    Caban, P; Podlaski, K; Rembielinski, J; Smolinski, K A; Walczak, Z

    2005-01-01

    The problem of the choice of tensor product decomposition in a system of two fermions with the help of Bogoliubov transformations of creation and annihilation operators is discussed. The set of physical states of the composite system is restricted by the superselection rule forbidding the superposition of fermions and bosons. It is shown that the Wootters concurrence is not the proper entanglement measure in this case. The explicit formula for the entanglement of formation is found. This formula shows that the entanglement of a given state depends on the tensor product decomposition of a Hilbert space. It is shown that the set of separable states is narrower than in the two-qubit case. Moreover, there exist states which are separable with respect to all tensor product decompositions of the Hilbert space. (letter to the editor)

  19. Combination of Wiener filtering and singular value decomposition filtering for volume imaging PET

    International Nuclear Information System (INIS)

    Shao, L.; Lewitt, R.M.; Karp, J.S.

    1995-01-01

    Although the three-dimensional (3D) multi-slice rebinning (MSRB) algorithm in PET is fast and practical, and provides an accurate reconstruction, the MSRB image, in general, suffers from the noise amplified by its singular value decomposition (SVD) filtering operation in the axial direction. Their aim in this study is to combine the use of the Wiener filter (WF) with the SVD to decrease the noise and improve the image quality. The SVD filtering ''deconvolves'' the spatially variant axial response function while the WF suppresses the noise and reduces the blurring not modeled by the axial SVD filter but included in the system modulation transfer function. Therefore, the synthesis of these two techniques combines the advantages of both filters. The authors applied this approach to the volume imaging HEAD PENN-PET brain scanner with an axial extent of 256 mm. This combined filter was evaluated in terms of spatial resolution, image contrast, and signal-to-noise ratio with several phantoms, such as a cold sphere phantom and 3D brain phantom. Specifically, the authors studied both the SVD filter with an axial Wiener filter and the SVD filter with a 3D Wiener filter, and compared the filtered images to those from the 3D reprojection (3DRP) reconstruction algorithm. Their results indicate that the Wiener filter increases the signal-to-noise ratio and also improves the contrast. For the MSRB images of the 3D brain phantom, after 3D WF, both the Gray/White and Gray/Ventricle ratios were improved from 1.8 to 2.8 and 2.1 to 4.1, respectively. In addition, the image quality with the MSRB algorithm is close to that of the 3DRP algorithm with 3D WF applied to both image reconstructions

  20. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    Science.gov (United States)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
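The gamma-distributed reactivity continuum that underlies RC models can be sketched in a few lines. This is the generic RC-model form with illustrative parameters, not this study's fitted values:

```python
import numpy as np

def rc_mass_remaining(t, alpha, beta):
    """Gamma-distributed reactivity continuum: fraction of the initial
    organic matter remaining at time t is (beta / (beta + t))**alpha."""
    return (beta / (beta + t)) ** alpha

def rc_apparent_rate(t, alpha, beta):
    """Apparent first-order decay rate -d ln m/dt = alpha / (beta + t),
    which declines as the most reactive material is consumed."""
    return alpha / (beta + t)

t = np.linspace(0.0, 10.0, 200)            # time (e.g. years)
m = rc_mass_remaining(t, alpha=1.2, beta=2.0)
k = rc_apparent_rate(t, alpha=1.2, beta=2.0)
assert np.all(np.diff(m) < 0)              # monotone mass loss
assert np.all(np.diff(k) < 0)              # decay slows over time
```

The declining apparent rate is what distinguishes the continuum from the discrete multiexponential models the authors argue against.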

  1. Decomposition in conic optimization with partially separable structure

    DEFF Research Database (Denmark)

    Sun, Yifan; Andersen, Martin Skovgaard; Vandenberghe, Lieven

    2014-01-01

    Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general nonpolyhedral convex cones because the conic inequalities introduce an additional nonlinear coupling between the variables. However in many applications the convex cones have...

  2. Domain decomposition methods and deflated Krylov subspace iterations

    NARCIS (Netherlands)

    Nabben, R.; Vuik, C.

    2006-01-01

    The balancing Neumann-Neumann (BNN) and the additive coarse grid correction (BPS) preconditioner are fast and successful preconditioners within domain decomposition methods for solving partial differential equations. For certain elliptic problems these preconditioners lead to condition numbers which

  3. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We then study the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.
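The SVD route to the polar decomposition that QDWH avoids can be written down directly. A minimal NumPy sketch defining the factors being computed (illustrative only, not the distributed-memory implementation):

```python
import numpy as np

def polar_via_svd(a):
    """Polar decomposition A = U_p @ H computed directly from the SVD:
    A = U S V^T  =>  U_p = U V^T,  H = V S V^T.  This is the memory-bound
    route the abstract contrasts with QDWH, shown only to define the factors."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    u_p = u @ vt                       # orthogonal polar factor
    h = vt.T @ np.diag(s) @ vt         # symmetric positive semidefinite factor
    return u_p, h

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 5))
u_p, h = polar_via_svd(a)
assert np.allclose(u_p @ h, a)                 # A = U_p H
assert np.allclose(u_p.T @ u_p, np.eye(5))     # U_p is orthogonal
assert np.allclose(h, h.T)                     # H is symmetric
```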

  4. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  5. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.

  6. Extraction of the fetal ECG in noninvasive recordings by signal decompositions

    International Nuclear Information System (INIS)

    Christov, I; Simova, I; Abächerli, R

    2014-01-01

    No signal processing technique has been able to reliably deliver an undistorted fetal electrocardiographic (fECG) signal from electrodes placed on the maternal abdomen because of the low signal-to-noise ratio of the fECG recorded from the maternal body surface. As a result, this has led to increased rates of Caesarean deliveries of healthy infants. In an attempt to solve the problem, Physionet/Computing in Cardiology announced the 2013 Challenge: noninvasive fetal ECG. We are suggesting a method for cancellation of the maternal ECG consisting of: maternal QRS detection, heart rate-dependent P-QRS-T interval selection, location of the fiducial points inside this interval for best matching by cross correlation, superimposition of the intervals, calculation of the mean signal of the P-QRS-T interval, and sequential subtraction of the mean signal from the whole fECG recording. Three signal decomposition methods were further applied in order to enhance the fetal QRSs (fQRS): principal component analysis, root-mean-square and Hotelling’s T-squared. A combined lead of all decompositions was synthesized and fQRS detection was performed on it. The current research differs from the Challenge in that it uses three signal decomposition methods to enhance the fECG. The new results for 97 recordings of test set B are: 305.657 for Event 4: Fetal heart rate (FHR) and 23.062 for Event 5: Fetal RR interval (FRR). (paper)
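The template-subtraction step described above (average the maternal P-QRS-T intervals, then subtract the mean beat at each occurrence) can be sketched as follows, assuming the maternal QRS locations are already known and using synthetic data in place of a real abdominal recording:

```python
import numpy as np

def subtract_maternal_template(sig, qrs_idx, half_win):
    """Average the windows around each (given) maternal QRS and subtract
    the mean beat at every occurrence, in the spirit of the method above."""
    sig = np.asarray(sig, dtype=float).copy()
    beats = [sig[i - half_win:i + half_win] for i in qrs_idx
             if i - half_win >= 0 and i + half_win <= len(sig)]
    template = np.mean(beats, axis=0)
    for i in qrs_idx:
        if i - half_win >= 0 and i + half_win <= len(sig):
            sig[i - half_win:i + half_win] -= template
    return sig

# synthetic "maternal" beat repeated at known positions
beat = np.exp(-np.linspace(-3, 3, 50) ** 2)   # crude QRS-like bump
sig = np.zeros(1000)
qrs = np.arange(100, 1000, 200)
for i in qrs:
    sig[i - 25:i + 25] += beat
residual = subtract_maternal_template(sig, qrs, 25)
assert np.max(np.abs(residual)) < 1e-9        # identical beats cancel exactly
```

In real data the beats vary, so the residual retains the fetal signal plus the maternal beat-to-beat variation; that is why the decomposition methods are applied afterwards.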

  7. Accelerated decomposition techniques for large discounted Markov decision processes

    Science.gov (United States)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into some levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorithm, which is a variant of Tarjan's algorithm that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to ameliorate some hierarchical solutions in discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experiment results are presented to illustrate the benefit of the proposed decomposition algorithms.
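A standard two-pass stand-in for the paper's single-pass variant: Tarjan's algorithm to find the SCCs, followed by level assignment on the condensation DAG (toy graph; the paper's contribution is computing both in one pass):

```python
def tarjan_scc(graph):
    """Standard recursive Tarjan SCC algorithm on an adjacency-dict graph."""
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v roots an SCC: pop it off the stack
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs                          # emitted in reverse topological order

def scc_levels(graph, sccs):
    """Level of an SCC = longest path reaching it in the condensation DAG.
    Every edge strictly increases the level, so processing components in
    decreasing level order guarantees all successors are handled first."""
    comp_of = {v: c for c in sccs for v in c}
    level = {c: 0 for c in sccs}
    for c in reversed(sccs):             # topological order of the condensation
        for v in c:
            for w in graph.get(v, []):
                if comp_of[w] is not c:
                    level[comp_of[w]] = max(level[comp_of[w]], level[c] + 1)
    return level

# toy state graph: {1,2} and {3,4} are SCCs, linked by the edge 2 -> 3
graph = {1: [2], 2: [1, 3], 3: [4], 4: [3]}
sccs = tarjan_scc(graph)
levels = scc_levels(graph, sccs)
assert sorted(map(sorted, sccs)) == [[1, 2], [3, 4]]
assert levels[frozenset({1, 2})] == 0 and levels[frozenset({3, 4})] == 1
```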

  8. Combination of Empirical Mode Decomposition Components of HRV Signals for Discriminating Emotional States

    Directory of Open Access Journals (Sweden)

    Ateke Goshvarpour

    2016-06-01

    Introduction Automatic human emotion recognition is one of the most interesting topics in the field of affective computing. However, development of a reliable approach with a reasonable recognition rate is a challenging task. The main objective of the present study was to propose a robust method for discrimination of emotional responses through examination of heart rate variability (HRV). In the present study, considering the non-stationary and non-linear characteristics of HRV, empirical mode decomposition technique was utilized as a feature extraction approach. Materials and Methods In order to induce the emotional states, images indicating four emotional states, i.e., happiness, peacefulness, sadness, and fearfulness were presented. Simultaneously, HRV was recorded in 47 college students. The signals were decomposed into some intrinsic mode functions (IMFs). For each IMF and different IMF combinations, 17 standard and non-linear parameters were extracted. Wilcoxon test was conducted to assess the difference between IMF parameters in different emotional states. Afterwards, a probabilistic neural network was used to classify the features into emotional classes. Results Based on the findings, maximum classification rates were achieved when all IMFs were fed into the classifier. Under such circumstances, the proposed algorithm could discriminate the affective states with sensitivity, specificity, and correct classification rate of 99.01%, 100%, and 99.09%, respectively. In contrast, the lowest discrimination rates were attained by IMF1 frequency and its combinations. Conclusion The high performance of the present approach indicated that the proposed method is applicable for automatic emotion recognition.

  9. Rayleigh-Schrödinger series and Birkhoff decomposition

    Science.gov (United States)

    Novelli, Jean-Christophe; Paul, Thierry; Sauzin, David; Thibon, Jean-Yves

    2018-01-01

    We derive new expressions for the Rayleigh-Schrödinger series describing the perturbation of eigenvalues of quantum Hamiltonians. The method, somehow close to the so-called dimensional renormalization in quantum field theory, involves the Birkhoff decomposition of some Laurent series built up out of explicit fully non-resonant terms present in the usual expression of the Rayleigh-Schrödinger series. Our results provide new combinatorial formulae and a new way of deriving perturbation series in quantum mechanics. More generally we prove that such a decomposition provides solutions of general normal form problems in Lie algebras.

  10. Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors

    Science.gov (United States)

    Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea

    2018-03-01

    In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.

  11. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in waste water, and the effects of water solubility, chemical structure, and the nature of the surfactant (anionic or cationic) were studied. The results show that an anionic surfactant such as sodium dodecyl sulfate (SDS) improves the radiation decomposition yield of ortho-chlorophenol, while a cationic surfactant such as cetyl trimethylammonium chloride (CTAC) improves the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to those studied. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols, whereas the yield increased when surfactant concentrations exceeded the CMC. Decomposition was more marked for aromatic alcohols than for linear ones. In a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals and other reactive species formed in water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol such as ortho-chlorophenol contained an additional chlorine atom, its decomposition remained almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols.
The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, like in advanced oxidation processes, or in combined treatment such as

  12. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Elizondo, Marcelo A.; Samaan, Nader A.

    2017-10-19

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
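The multidimensional-scaling step the paper uses before clustering can be sketched in NumPy: classical MDS recovers Euclidean coordinates from a distance matrix via double-centering and an eigendecomposition. The distance matrix below is a toy stand-in for actual electrical distances:

```python
import numpy as np

def classical_mds(d, n_dims=2):
    """Classical multidimensional scaling: recover Euclidean coordinates
    whose pairwise distances approximate a given distance matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                   # double-centered Gram matrix
    w, v = np.linalg.eigh(b)
    order = np.argsort(w)[::-1][:n_dims]          # keep the largest eigenvalues
    return v[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# toy "electrical distance" matrix: buses {0,1} close together, buses {2,3}
# close together, and the two groups far apart (illustrative values only)
d = np.array([[0.0, 1.0, 5.0, 5.0],
              [1.0, 0.0, 5.0, 5.0],
              [5.0, 5.0, 0.0, 1.0],
              [5.0, 5.0, 1.0, 0.0]])
coords = classical_mds(d)
# a clustering algorithm such as K-means applied to coords would then
# recover the two bus groups as separate voltage-control subsystems
assert np.linalg.norm(coords[0] - coords[1]) < np.linalg.norm(coords[0] - coords[2])
```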

  13. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    It is a challenging task to learn a large Bayesian network from a small data set. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for the structure construction without having to learn the complete network. The new learning algorithm firstly finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm in several benchmark networks.

  14. Violent societies: an application of orbital decomposition to the problem of human violence.

    Science.gov (United States)

    Spohn, M

    2008-01-01

    This study uses orbital decomposition to analyze the patterns of how governments lose their monopolies on violence, therefore allowing those societies to descend into violent states from which it is difficult to recover. The nonlinear progression by which the governing body loses its monopoly is based on the work of criminologist Lonnie Athens and applied from the individual to the societal scale. Four different kinds of societies are considered: Those where the governing body is both unwilling and unable to assert its monopoly on violence (former Yugoslavia); where it is unwilling (Peru); where it is unable (South Africa); and a smaller pocket of violent society within a larger, more stable one (Gujarat). In each instance, orbital decomposition turns up insights not apparent in the qualitative data or through linear statistical analysis, both about the nature of the descent into violence and about the progression itself.

  15. Application of Homotopy-Perturbation Method to Nonlinear Ozone Decomposition of the Second Order in Aqueous Solutions Equations

    DEFF Research Database (Denmark)

    Ganji, D.D; Miansari, Mo; B, Ganjavi

    2008-01-01

    In this paper, the homotopy-perturbation method (HPM) is introduced to solve nonlinear equations of ozone decomposition in aqueous solutions. HPM deforms a difficult problem into a simple problem which can be easily solved. The effects of some parameters, such as temperature, on the solutions are considered...

  16. Strongly étale difference algebras and Babbitt's decomposition

    OpenAIRE

    Tomašić, Ivan; Wibmer, Michael

    2015-01-01

    We introduce a class of strongly étale difference algebras, whose role in the study of difference equations is analogous to the role of étale algebras in the study of algebraic equations. We deduce an improved version of Babbitt's decomposition theorem and we present applications to difference algebraic groups and the compatibility problem.

  17. On low-rank updates to the singular value and Tucker decompositions

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, M. J.

    2009-10-06

    The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
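A didactic dense version of a Brand-style rank-one SVD update: given the thin SVD of a matrix, the SVD of the matrix plus an outer-product update is recovered by orthogonalizing the update vectors against the existing factors and re-diagonalizing a small core matrix. This is the basic mechanism only, not the sub-linear large-sparse-matrix variant the abstract develops:

```python
import numpy as np

def svd_rank1_update(u, s, vt, a, b):
    """Thin SVD of (U diag(s) Vt + a b^T) computed from the existing factors.
    Dense, didactic version of the rank-one updating idea."""
    v = vt.T
    m = u.T @ a
    p = a - u @ m                              # component of a outside span(U)
    ra = np.linalg.norm(p)
    p_unit = p / ra if ra > 1e-12 else np.zeros_like(p)
    n = v.T @ b
    q = b - v @ n                              # component of b outside span(V)
    rb = np.linalg.norm(q)
    q_unit = q / rb if rb > 1e-12 else np.zeros_like(q)
    # small (r+1) x (r+1) core matrix absorbing the update
    k = np.zeros((len(s) + 1, len(s) + 1))
    k[:len(s), :len(s)] = np.diag(s)
    k += np.outer(np.append(m, ra), np.append(n, rb))
    uk, sk, vkt = np.linalg.svd(k)
    u_new = np.column_stack([u, p_unit]) @ uk
    v_new = np.column_stack([v, q_unit]) @ vkt.T
    return u_new, sk, v_new.T

rng = np.random.default_rng(1)
a_mat = rng.standard_normal((6, 4))
u, s, vt = np.linalg.svd(a_mat, full_matrices=False)
x, y = rng.standard_normal(6), rng.standard_normal(4)
u2, s2, vt2 = svd_rank1_update(u, s, vt, x, y)
assert np.allclose(u2 @ np.diag(s2) @ vt2, a_mat + np.outer(x, y))
```

The cost is dominated by the SVD of the small core matrix, which is what makes updating cheaper than recomputing the full SVD when the rank is low.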

  18. Direct NO decomposition over stepped transition-metal surfaces

    DEFF Research Database (Denmark)

    Falsig, Hanne; Bligaard, Thomas; Christensen, Claus H.

    2007-01-01

    We establish the full potential energy diagram for the direct NO decomposition reaction over stepped transition-metal surfaces by combining a database of adsorption energies on stepped metal surfaces with known Brønsted-Evans-Polanyi (BEP) relations for the activation barriers of dissociation...

  19. Using normalized equations to solve the indetermination problem in the Oaxaca-Blinder decomposition: an application to the gender wage gap in Brazil

    Directory of Open Access Journals (Sweden)

    Luiz Guilherme Scorzafave

    2007-12-01

    There are hundreds of works that implement the Oaxaca-Blinder decomposition. However, this decomposition is not invariant to the choice of reference group when dummy variables are used. This paper applies the solution proposed by Yun (2005a,b) for this identification problem to Brazilian gender wage gap estimation. Our principal finding is the increasing difference in part-time work coefficients between men and women, which contributes to narrowing the gender wage gap. Other studies in Brazil that did not correct for the identification problem have found different results.
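The baseline two-fold Oaxaca-Blinder decomposition (before the normalization the paper applies) splits the mean wage gap into an endowments part and a coefficients part. A sketch with synthetic data; all variable names and coefficient values are illustrative:

```python
import numpy as np

def ols(x, y):
    """Ordinary least squares via lstsq."""
    return np.linalg.lstsq(x, y, rcond=None)[0]

def oaxaca_blinder(x_m, y_m, x_f, y_f):
    """Two-fold decomposition, male coefficients as the benchmark:
    gap = explained (endowments) + unexplained (coefficients) part."""
    b_m, b_f = ols(x_m, y_m), ols(x_f, y_f)
    xbar_m, xbar_f = x_m.mean(axis=0), x_f.mean(axis=0)
    explained = (xbar_m - xbar_f) @ b_m
    unexplained = xbar_f @ (b_m - b_f)
    return explained, unexplained

rng = np.random.default_rng(2)
n = 5000
edu_m = rng.normal(12, 2, n)                       # years of education
edu_f = rng.normal(11, 2, n)
x_m = np.column_stack([np.ones(n), edu_m])
x_f = np.column_stack([np.ones(n), edu_f])
y_m = 1.0 + 0.10 * edu_m + rng.normal(0, 0.1, n)   # higher returns for men
y_f = 0.9 + 0.08 * edu_f + rng.normal(0, 0.1, n)
explained, unexplained = oaxaca_blinder(x_m, y_m, x_f, y_f)
gap = y_m.mean() - y_f.mean()
assert abs((explained + unexplained) - gap) < 1e-8  # parts sum to the gap
```

The reference-group problem the paper addresses arises when the regressors include dummy variables: the split of the unexplained part across individual dummies changes with the omitted category, which is what Yun's normalized equations fix.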

  20. Domain decomposition methods for core calculations using the MINOS solver

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2007-01-01

    Cell by cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the neighboring sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
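The second method can be illustrated on a 1D toy problem: non-overlapping decomposition of -u'' = 1 on [0,1] with homogeneous Dirichlet ends, exchanging Robin data (u' + p·u) at the interface until the sub-domain solutions agree. A didactic stand-in with a first-order interface discretization, not the MINOS SPn solver:

```python
import numpy as np

n, p = 100, 1.0                        # grid intervals, Robin parameter
h = 1.0 / n
mid = n // 2

def laplacian_block(size):
    """Dense 1D finite-difference Laplacian block (rows later overwritten
    at the interface with Robin conditions)."""
    a = np.zeros((size, size))
    for k in range(size):
        a[k, k] = 2.0 / h ** 2
        if k > 0:
            a[k, k - 1] = -1.0 / h ** 2
        if k + 1 < size:
            a[k, k + 1] = -1.0 / h ** 2
    return a

# sub-domain 1: nodes 1..mid (Dirichlet at x=0, Robin row at the interface)
a1 = laplacian_block(mid)
a1[-1, :] = 0.0
a1[-1, -2] = -1.0 / h
a1[-1, -1] = 1.0 / h + p
# sub-domain 2: nodes mid..n-1 (Robin row at the interface, Dirichlet at x=1)
a2 = laplacian_block(n - mid)
a2[0, :] = 0.0
a2[0, 0] = 1.0 / h + p
a2[0, 1] = -1.0 / h

g1 = g2 = 0.0                          # Robin data exchanged at the interface
u1 = np.zeros(mid)
u2 = np.zeros(n - mid)
for _ in range(50):
    rhs1 = np.ones(mid)
    rhs1[-1] = g1
    u1 = np.linalg.solve(a1, rhs1)
    g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]
    rhs2 = np.ones(n - mid)
    rhs2[0] = g2
    u2 = np.linalg.solve(a2, rhs2)
    g1 = (u2[1] - u2[0]) / h + p * u2[0]

x = np.linspace(0.0, 1.0, n + 1)
exact = x * (1 - x) / 2                # analytic solution of -u'' = 1
assert abs(u1[-1] - u2[0]) < 1e-8      # sub-domain solutions meet at interface
assert np.max(np.abs(u1 - exact[1:mid + 1])) < 0.02
```

Each sweep only solves the two smaller sub-domain systems, which is the point of the decomposition: on a parallel machine the sub-domain solves proceed concurrently, with only the interface data exchanged between iterations.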

  1. Detailed RIF decomposition with selection : the gender pay gap in Italy

    OpenAIRE

    Töpfer, Marina

    2017-01-01

    In this paper, we estimate the gender pay gap along the wage distribution using a detailed decomposition approach based on unconditional quantile regressions. Non-randomness of the sample leads to biased and inconsistent estimates of the wage equation as well as of the components of the wage gap. Therefore, the method is extended to account for sample selection problems. The decomposition is conducted by using Italian microdata. Accounting for labor market selection may be particularly rele...

  2. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  3. Microbial community assembly and metabolic function during mammalian corpse decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Metcalf, J. L.; Xu, Z. Z.; Weiss, S.; Lax, S.; Van Treuren, W.; Hyde, E. R.; Song, S. J.; Amir, A.; Larsen, P.; Sangwan, N.; Haarmann, D.; Humphrey, G. C.; Ackermann, G.; Thompson, L. R.; Lauber, C.; Bibat, A.; Nicholas, C.; Gebert, M. J.; Petrosino, J. F.; Reed, S. C.; Gilbert, J. A.; Lynne, A. M.; Bucheli, S. R.; Carter, D. O.; Knight, R.

    2015-12-10

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  4. Microbial community assembly and metabolic function during mammalian corpse decomposition

    Science.gov (United States)

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  5. A New Formulation for the Combined Maritime Fleet Deployment and Inventory Management Problem

    OpenAIRE

    Dong, Bo; Bektas, Tolga; Chandra, Saurabh; Christiansen, Marielle; Fagerholt, Kjetil

    2017-01-01

    This paper addresses the fleet deployment problem and in particular the treatment of inventory in the maritime case. A new model based on time-continuous formulation for the combined maritime fleet deployment and inventory management problem in Roll-on Roll-off shipping is presented. Tests based on realistic data from the Ro-Ro business show that the model yields good solutions to the combined problem within reasonable time.

  6. Basis material decomposition method for material discrimination with a new spectrometric X-ray imaging detector

    Science.gov (United States)

    Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.

    2017-08-01

    Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by the imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis material whose spectrum is closest to the measurement, using a maximum likelihood criterion assuming a Poisson law distribution of photon counts for each energy bin. The method was used with a ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a Polyethylene and Polyvinyl Chloride base. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent lengths makes it possible to process overlapped materials. Moreover, the method was tested with a 3 materials base by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. 
Although in principle two channels are sufficient, experimental measurements show
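The maximum-likelihood decomposition step described in this abstract can be sketched numerically. The attenuation coefficients, open-beam count rates, grid range and the coarse grid search below are invented stand-ins for illustration, not the calibrated ME100 detector response or the authors' algorithm:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of two basis
# materials in three energy bins; real values come from calibration.
MU = np.array([[0.20, 0.45],   # bin 1: [polyethylene, PVC]
               [0.15, 0.30],   # bin 2
               [0.10, 0.18]])  # bin 3
I0 = np.array([1e5, 8e4, 6e4])  # open-beam counts per bin

def expected_counts(t):
    """Beer-Lambert expected counts for thicknesses t = (t_PE, t_PVC)."""
    return I0 * np.exp(-MU @ t)

def neg_log_likelihood(t, counts):
    """Poisson negative log-likelihood (dropping the t-independent term)."""
    mu = expected_counts(t)
    return np.sum(mu - counts * np.log(mu))

# Simulate a measurement of a 2 cm PE + 1 cm PVC object, then recover
# the equivalent thicknesses by exhaustive search over a coarse grid.
rng = np.random.default_rng(0)
true_t = np.array([2.0, 1.0])
counts = rng.poisson(expected_counts(true_t))

grid = np.arange(0.0, 4.01, 0.05)
best = min(((t1, t2) for t1 in grid for t2 in grid),
           key=lambda t: neg_log_likelihood(np.array(t), counts))
```

A production implementation would replace the grid search with a proper optimizer and the Beer-Lambert model with the measured detector response.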

  7. The role of energy-service demand reduction in global climate change mitigation: Combining energy modelling and decomposition analysis

    International Nuclear Information System (INIS)

    Kesicki, Fabian; Anandarajah, Gabrial

    2011-01-01

    In order to reduce energy-related CO2 emissions different options have been considered: energy efficiency improvements, structural changes to low carbon or zero carbon fuel/technologies, carbon sequestration, and reduction in energy-service demands (useful energy). While efficiency and technology options have been extensively studied within the context of climate change mitigation, this paper addresses the possible role of price-related energy-service demand reduction. For this analysis, the elastic demand version of the TIAM-UCL global energy system model is used in combination with decomposition analysis. The results of the CO2 emission decomposition indicate that a reduction in energy-service demand can play a limited role, contributing around 5% to global emission reduction in the 21st century. A look at the sectoral level reveals that the demand reduction can play a greater role in selected sectors like transport, contributing around 16% at a global level. The societal welfare loss is found to be high when the price elasticity of demand is low. - Highlights: → A reduction in global energy-service demand can contribute around 5% to global emission reduction in the 21st century. → The role of demand is a lot higher in transport than in the residential sector. → Contribution of demand reduction is higher in early periods of the 21st century. → Societal welfare loss is found to be high when the price elasticity of demand is low. → Regional shares in residual emissions vary under different elasticity scenarios.

  8. The role of energy-service demand reduction in global climate change mitigation: Combining energy modelling and decomposition analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kesicki, Fabian, E-mail: fabian.kesicki.09@ucl.ac.uk [UCL Energy Institute, University College London, 14 Upper Woburn Place, London, WC1H 0NN (United Kingdom); Anandarajah, Gabrial [UCL Energy Institute, University College London, 14 Upper Woburn Place, London, WC1H 0NN (United Kingdom)

    2011-11-15

    In order to reduce energy-related CO2 emissions different options have been considered: energy efficiency improvements, structural changes to low carbon or zero carbon fuel/technologies, carbon sequestration, and reduction in energy-service demands (useful energy). While efficiency and technology options have been extensively studied within the context of climate change mitigation, this paper addresses the possible role of price-related energy-service demand reduction. For this analysis, the elastic demand version of the TIAM-UCL global energy system model is used in combination with decomposition analysis. The results of the CO2 emission decomposition indicate that a reduction in energy-service demand can play a limited role, contributing around 5% to global emission reduction in the 21st century. A look at the sectoral level reveals that the demand reduction can play a greater role in selected sectors like transport, contributing around 16% at a global level. The societal welfare loss is found to be high when the price elasticity of demand is low. - Highlights: → A reduction in global energy-service demand can contribute around 5% to global emission reduction in the 21st century. → The role of demand is a lot higher in transport than in the residential sector. → Contribution of demand reduction is higher in early periods of the 21st century. → Societal welfare loss is found to be high when the price elasticity of demand is low. → Regional shares in residual emissions vary under different elasticity scenarios.

  9. Combining FMEA with DEMATEL models to solve production process problems.

    Science.gov (United States)

    Tsai, Sang-Bing; Zhou, Jie; Gao, Yang; Wang, Jiangtao; Li, Guodong; Zheng, Yuxiang; Ren, Peng; Xu, Wei

    2017-01-01

    Failure mode and effects analysis (FMEA) is an analysis tool for identifying and preventing flaws or defects in products during the design and process planning stage, preventing the repeated occurrence of problems, reducing the effects of these problems, enhancing product quality and reliability, saving costs, and improving competitiveness. However, FMEA can only analyze one influence factor according to its priority, rendering this method ineffective for systems containing multiple FMs whose effects are simultaneous or interact with one another. Accordingly, when FMEA fails to identify the influence factors and the factors being influenced, the most crucial problems may be placed in lower priority or remain unresolved. Decision-Making Trial and Evaluation Laboratory (DEMATEL) facilitates the determination of cause and effect factors; by identifying the causal factors that should be prioritized, prompt and effective solutions to core problems can be derived, thereby enhancing performance. Using the photovoltaic cell manufacturing industry in China as the research target, the present study combined FMEA with DEMATEL to amend the flaws of FMEA and enhance its effectiveness. First, FMEA was used to identify items requiring improvement. Then, DEMATEL was employed to examine the interactive effects and causal relationships of these items. Finally, the solutions to the problems were prioritized. The proposed method effectively combined the advantages of FMEA and DEMATEL to facilitate the identification of core problems and prioritization of solutions in the Chinese photovoltaic cell industry.
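The DEMATEL step described above can be sketched in a few lines. The 4x4 direct-influence matrix is a hypothetical expert-scoring example, not data from the photovoltaic study, and the normalization by the largest row sum is one common DEMATEL convention:

```python
import numpy as np

# Hypothetical direct-influence matrix among four failure items
# (expert scores 0-4); the diagonal is zero by convention.
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Normalize by the largest row sum, then compute the total-relation
# matrix T = N (I - N)^{-1}, which accumulates all indirect effects.
N = A / A.sum(axis=1).max()
T = N @ np.linalg.inv(np.eye(4) - N)

D = T.sum(axis=1)      # total influence exerted by each item
R = T.sum(axis=0)      # total influence received by each item
prominence = D + R     # overall importance of each item
relation = D - R       # > 0: cause factor, < 0: effect factor
causes = [i for i in range(4) if relation[i] > 0]
```

Items with a positive `relation` value are the causal factors that, per the abstract, should be prioritized for improvement.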

  10. Tropical herbivorous phasmids, but not litter snails, alter decomposition rates by modifying litter bacteria

    Science.gov (United States)

    Chelse M. Prather; Gary E. Belovsky; Sharon A. Cantrell; Grizelle González

    2018-01-01

    Consumers can alter decomposition rates through both feces and selective feeding in many ecosystems, but these combined effects have seldom been examined in tropical ecosystems. Members of the detrital food web (litter-feeders or microbivores) should presumably have greater effects on decomposition than herbivores, members of the green food web. Using litterbag...

  11. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as the triphenyls) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen 73 ± 2 kcal/mol; benzene 76 ± 2 kcal/mol; meta-triphenyl 53 ± 2 kcal/mol; biphenyl decomposition 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the assay of the aromatic C-H bonds by infrared spectrophotometry. As a result the existence in tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)

  12. Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms

    KAUST Repository

    Efendiev, Yalchin; Galvis, Juan; Lazarov, Raytcho; Willems, Joerg

    2012-01-01

    An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract

  13. A Novel Generation Method for the PV Power Time Series Combining the Decomposition Technique and Markov Chain Theory

    DEFF Research Database (Denmark)

    Xu, Shenzhi; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    Photovoltaic (PV) power generation has made considerable developments in recent years. But the intermittency and volatility of its output have seriously affected the secure operation of the power system. In order to better understand PV generation and provide sufficient data support...... for analysing the impacts, a novel generation method for PV power time series combining the decomposition technique and Markov chain theory is presented in this paper. It digs important factors from historical data from existing PV plants and then reproduces new data with similar patterns. In detail, the proposed...... method first decomposes the PV power time series into three parts: an ideal output curve, an amplitude parameter series and a random fluctuating component. Then it generates the daily ideal output curve by the extraction of typical daily data, and the amplitude parameter series based on the Markov chain Monte Carlo (MCMC...
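The Markov-chain part of such a generation scheme can be sketched as follows. The "historical" amplitude states below are synthetic placeholders (the abstract's amplitude parameter series would first be binned into discrete states); the fitting and sampling steps are generic first-order Markov chain estimation, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical daily amplitude states (0 = low, 1 = medium,
# 2 = high), standing in for a binned amplitude parameter series.
history = rng.choice(3, size=500, p=[0.3, 0.5, 0.2])

# Estimate the first-order transition matrix by counting transitions.
P = np.zeros((3, 3))
for a, b in zip(history[:-1], history[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

def sample_chain(P, n, state=0):
    """Generate a synthetic state sequence from the fitted chain."""
    out = [state]
    for _ in range(n - 1):
        state = rng.choice(len(P), p=P[state])
        out.append(state)
    return np.array(out)

synthetic = sample_chain(P, 365)  # one synthetic year of amplitude states
```

The synthetic states would then be mapped back to amplitudes and combined with the ideal output curve and a fluctuating component to produce full PV power time series.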

  14. Domain decomposition multigrid for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Yair

    1997-01-01

    A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.

  15. Effect of Isomorphous Substitution on the Thermal Decomposition Mechanism of Hydrotalcites

    Directory of Open Access Journals (Sweden)

    Sergio Crosby

    2014-10-01

    Full Text Available Hydrotalcites have many important applications in catalysis, wastewater treatment, gene delivery and polymer stabilization, all depending on preparation history and treatment scenarios. In catalysis and polymer stabilization, thermal decomposition is of great importance. Hydrotalcites form easily with atmospheric carbon dioxide and often interfere with the study of other anion containing systems, particularly if formed at room temperature. The dehydroxylation and decomposition of carbonate occurs simultaneously, making it difficult to distinguish the dehydroxylation mechanisms directly. To date, the majority of work on understanding the decomposition mechanism has utilized hydrotalcite precipitated at room temperature. In this study, evolved gas analysis combined with thermal analysis has been used to show that CO2 contamination is problematic in materials being formed at RT that are poorly crystalline. This has led to some dispute as to the nature of the dehydroxylation mechanism. In this paper, data for the thermal decomposition of the chloride form of hydrotalcite are reported. In addition, carbonate-free hydrotalcites have been synthesized with different charge densities and at different growth temperatures. This combination of parameters has allowed a better understanding of the mechanism of dehydroxylation and the role that isomorphous substitution plays in these mechanisms to be delineated. In addition, the effect of anion type on thermal stability is also reported. A stepwise dehydroxylation model is proposed that is mediated by the level of aluminum substitution.

  16. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient.

    Science.gov (United States)

    Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing

    2017-12-26

    As the sound signal of ships obtained by sensors contains many significant characteristics of ships and is called ship-radiated noise (SN), research into denoising algorithms and their application has gained great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and times of decomposition by VMD. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.
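The CC-based screening step at the heart of this algorithm can be sketched directly. The "IMFs" below are synthetic stand-ins (two tones plus a noise mode) rather than actual VMD output, and the threshold value is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)

# Hypothetical mode set standing in for VMD output: two signal-bearing
# IMFs plus one low-correlation noise mode.
imfs = np.array([np.sin(2 * np.pi * 5 * t),
                 0.5 * np.sin(2 * np.pi * 40 * t),
                 0.1 * rng.standard_normal(t.size)])
signal = imfs.sum(axis=0)

def screen_by_cc(signal, imfs, threshold=0.3):
    """Keep modes whose |Pearson CC| with the raw signal exceeds threshold,
    then reconstruct the denoised signal from the retained modes."""
    keep = [m for m in imfs
            if abs(np.corrcoef(signal, m)[0, 1]) > threshold]
    return np.sum(keep, axis=0)

denoised = screen_by_cc(signal, imfs)
```

A "secondary" pass, as in the paper, would decompose `denoised` again and repeat the same screening and reconstruction.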

  17. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Yuxing Li

    2017-12-01

    Full Text Available As the sound signal of ships obtained by sensors contains many significant characteristics of ships and is called ship-radiated noise (SN), research into denoising algorithms and their application has gained great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and times of decomposition by VMD. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.

  18. Design of tailor-made chemical blend using a decomposition-based computer-aided approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.

    2011-01-01

    Computer aided techniques form an efficient approach to solve chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product...... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to types of chemicals...... and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition...

  19. Combining FMEA with DEMATEL models to solve production process problems.

    Directory of Open Access Journals (Sweden)

    Sang-Bing Tsai

    Full Text Available Failure mode and effects analysis (FMEA) is an analysis tool for identifying and preventing flaws or defects in products during the design and process planning stage, preventing the repeated occurrence of problems, reducing the effects of these problems, enhancing product quality and reliability, saving costs, and improving competitiveness. However, FMEA can only analyze one influence factor according to its priority, rendering this method ineffective for systems containing multiple FMs whose effects are simultaneous or interact with one another. Accordingly, when FMEA fails to identify the influence factors and the factors being influenced, the most crucial problems may be placed in lower priority or remain unresolved. Decision-Making Trial and Evaluation Laboratory (DEMATEL) facilitates the determination of cause and effect factors; by identifying the causal factors that should be prioritized, prompt and effective solutions to core problems can be derived, thereby enhancing performance. Using the photovoltaic cell manufacturing industry in China as the research target, the present study combined FMEA with DEMATEL to amend the flaws of FMEA and enhance its effectiveness. First, FMEA was used to identify items requiring improvement. Then, DEMATEL was employed to examine the interactive effects and causal relationships of these items. Finally, the solutions to the problems were prioritized. The proposed method effectively combined the advantages of FMEA and DEMATEL to facilitate the identification of core problems and prioritization of solutions in the Chinese photovoltaic cell industry.

  20. Combining FMEA with DEMATEL models to solve production process problems

    Science.gov (United States)

    Tsai, Sang-Bing; Zhou, Jie; Gao, Yang; Wang, Jiangtao; Li, Guodong; Zheng, Yuxiang; Ren, Peng; Xu, Wei

    2017-01-01

    Failure mode and effects analysis (FMEA) is an analysis tool for identifying and preventing flaws or defects in products during the design and process planning stage, preventing the repeated occurrence of problems, reducing the effects of these problems, enhancing product quality and reliability, saving costs, and improving competitiveness. However, FMEA can only analyze one influence factor according to its priority, rendering this method ineffective for systems containing multiple FMs whose effects are simultaneous or interact with one another. Accordingly, when FMEA fails to identify the influence factors and the factors being influenced, the most crucial problems may be placed in lower priority or remain unresolved. Decision-Making Trial and Evaluation Laboratory (DEMATEL) facilitates the determination of cause and effect factors; by identifying the causal factors that should be prioritized, prompt and effective solutions to core problems can be derived, thereby enhancing performance. Using the photovoltaic cell manufacturing industry in China as the research target, the present study combined FMEA with DEMATEL to amend the flaws of FMEA and enhance its effectiveness. First, FMEA was used to identify items requiring improvement. Then, DEMATEL was employed to examine the interactive effects and causal relationships of these items. Finally, the solutions to the problems were prioritized. The proposed method effectively combined the advantages of FMEA and DEMATEL to facilitate the identification of core problems and prioritization of solutions in the Chinese photovoltaic cell industry. PMID:28837663

  1. Combined Noncyclic Scheduling and Advanced Control for Continuous Chemical Processes

    Directory of Open Access Journals (Sweden)

    Damon Petersen

    2017-12-01

    Full Text Available A novel formulation for combined scheduling and control of multi-product, continuous chemical processes is introduced in which nonlinear model predictive control (NMPC) and noncyclic continuous-time scheduling are efficiently combined. A decomposition into nonlinear programming (NLP) dynamic optimization problems and mixed-integer linear programming (MILP) problems, without iterative alternation, allows for computationally light solution. An iterative method is introduced to determine the number of production slots for a noncyclic schedule during a prediction horizon. A filter method is introduced to reduce the number of MILP problems required. The formulation’s closed-loop performance with both process disturbances and updated market conditions is demonstrated through multiple scenarios on a benchmark continuously stirred tank reactor (CSTR) application with fluctuations in market demand and price for multiple products. Economic performance surpasses cyclic scheduling in all scenarios presented. Computational performance is sufficiently light to enable online operation in a dual-loop feedback structure.

  2. G-Doob-Meyer Decomposition and Its Applications in Bid-Ask Pricing for Derivatives under Knightian Uncertainty

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2015-01-01

    Full Text Available The target of this paper is to establish the bid-ask pricing framework for American contingent claims against risky assets with G-asset price systems on the financial market under Knightian uncertainty. First, we prove the G-Doob-Meyer decomposition for G-supermartingales. Furthermore, we consider bid-ask pricing of American contingent claims under Knightian uncertainty; by using the G-Doob-Meyer decomposition, we construct dynamic superhedge strategies for the optimal stopping problem and prove that the value functions of the optimal stopping problems are the bid and ask prices of the American contingent claims under Knightian uncertainty. Finally, we consider a free boundary problem, prove the strong solution existence of the free boundary problem, and derive that the value function of the optimal stopping problem is equivalent to the strong solution to the free boundary problem.
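The bid-ask gap arising from an optimal stopping problem under model uncertainty can be illustrated with a toy discrete-time analogue, far simpler than the continuous-time G-expectation framework of the paper: an American put on a binomial tree where Knightian uncertainty is represented by an interval of risk-neutral up-probabilities. All parameters are invented for illustration:

```python
# Toy optimal-stopping sketch: the ask (superhedging) price takes the sup
# over the probability interval at each node, the bid price the inf.

def american_put(s0, strike, up, down, rate, steps, p_lo, p_hi, ask=True):
    payoff = lambda s: max(strike - s, 0.0)
    # Terminal payoffs at each node (j = number of up-moves).
    values = [payoff(s0 * up**j * down**(steps - j)) for j in range(steps + 1)]
    disc = 1.0 / (1.0 + rate)
    for n in range(steps - 1, -1, -1):
        new = []
        for j in range(n + 1):
            # The continuation value is linear in p, so the sup/inf over
            # the interval [p_lo, p_hi] is attained at an endpoint.
            cont_lo = disc * (p_lo * values[j + 1] + (1 - p_lo) * values[j])
            cont_hi = disc * (p_hi * values[j + 1] + (1 - p_hi) * values[j])
            cont = max(cont_lo, cont_hi) if ask else min(cont_lo, cont_hi)
            # Optimal stopping: exercise now or continue.
            new.append(max(payoff(s0 * up**j * down**(n - j)), cont))
        values = new
    return values[0]

ask = american_put(100, 100, 1.1, 0.9, 0.01, 50, 0.4, 0.6, ask=True)
bid = american_put(100, 100, 1.1, 0.9, 0.01, 50, 0.4, 0.6, ask=False)
```

With a single probability (p_lo = p_hi) the two prices coincide, recovering the classical no-arbitrage price; the spread between `bid` and `ask` is the discrete analogue of the pricing interval under Knightian uncertainty.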

  3. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. The application results for the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally some advantages of this approach are pointed out. (orig.)
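The core idea, solving an ill-conditioned linear system by keeping only the dominant singular components, can be sketched as follows. The response matrix and noise level are invented stand-ins, and the fixed truncation index plays the role of the paper's criterion function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ill-conditioned response matrix (the attenuation effect of
# filtrations at various energies) and a noisy measurement vector.
A = np.vander(np.linspace(0.1, 1.0, 8), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.2])
b = A @ x_true + 1e-8 * rng.standard_normal(8)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def truncated_svd_solve(U, s, Vt, b, k):
    """Pseudo-inverse solution keeping only the k largest singular components."""
    return Vt[:k].T @ (U[:, :k].T @ b / s[:k])

# Dropping the smallest singular components suppresses noise amplification
# at the cost of a modelling bias; choosing k is the selection criterion.
x_k = truncated_svd_solve(U, s, Vt, b, k=4)
x_full = truncated_svd_solve(U, s, Vt, b, k=6)
```

The trade-off is visible in the residuals: the truncated solution fits the data less closely but is far less sensitive to measurement noise than the full pseudo-inverse.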

  4. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  5. Adomian decomposition method for solving the telegraph equation in charged particle transport

    International Nuclear Information System (INIS)

    Abdou, M.A.

    2005-01-01

    In this paper, the analysis of the telegraph equation in the case of isotropic small-angle scattering from the Boltzmann transport equation for charged particles is presented. The Adomian decomposition is used to solve the telegraph equation. By means of MAPLE the Adomian polynomials of the obtained series (ADM) solution have been calculated. The behaviour of the distribution function is shown graphically. The results reported in this article provide further evidence of the usefulness of Adomian decomposition for obtaining solutions of linear and nonlinear problems
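The flavour of the Adomian decomposition method can be shown on a deliberately simple linear test problem, u'(t) = u(t) with u(0) = 1, rather than the telegraph equation itself. For a linear operator there are no nonlinear Adomian polynomials, and the scheme reduces to successive integrations u_{n+1}(t) = ∫₀ᵗ u_n(s) ds, whose partial sums converge to eᵗ:

```python
from math import exp

# Adomian decomposition for u'(t) = u(t), u(0) = 1:
# u0 = 1 and u_{n+1}(t) = integral of u_n from 0 to t, so u_n = t^n / n!.

def integrate_poly(coeffs):
    """Integral from 0 to t of a polynomial given as [c0, c1, ...]."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def adm_solution(n_terms):
    """Return the ADM partial sum as polynomial coefficients."""
    term = [1.0]                 # u0
    total = [0.0] * n_terms
    for _ in range(n_terms):
        for k, c in enumerate(term):
            total[k] += c        # accumulate the current decomposition term
        term = integrate_poly(term)
    return total

def eval_poly(coeffs, t):
    return sum(c * t ** k for k, c in enumerate(coeffs))

u_approx = eval_poly(adm_solution(15), 1.0)  # partial sum approximates e
```

For the telegraph equation the same recursion applies with the inverse of the highest-order differential operator and genuine Adomian polynomials for any nonlinear terms.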

  6. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given

  7. Urban-area extraction from polarimetric SAR image using combination of target decomposition and orientation angle

    Science.gov (United States)

    Zou, Bin; Lu, Da; Wu, Zhilu; Qiao, Zhijun G.

    2016-05-01

    The results of model-based target decomposition are the main features used to discriminate urban and non-urban areas in polarimetric synthetic aperture radar (PolSAR) applications. Traditional urban-area extraction methods based on model-based target decomposition usually misclassify ground-trunk structures as urban area or misclassify rotated urban areas as forest. This paper introduces another feature, the orientation angle, to improve the urban-area extraction scheme for accurate urban mapping with PolSAR images. The proposed method first takes the randomness of the orientation angle into account to restrict the urban area and, subsequently, applies the rotation angle to improve the results so that oriented urban areas are recognized as double-bounce objects rather than volume scattering. ESAR L-band PolSAR data of the Oberpfaffenhofen Test Site Area was used to validate the proposed algorithm.

  8. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Oxberry, Geoffrey M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kostova-Vassilevska, Tanya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arrighi, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chand, Kyle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
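The basic POD construction underlying this work can be sketched with a batch SVD. The snapshot matrix below is synthetic (two dominant spatial modes plus small noise), and the simple energy-fraction truncation stands in for the paper's adaptive error estimator and single-pass incremental SVD:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical snapshot matrix: each column is the solution state at one
# time step, built here from two spatial modes plus small noise.
x = np.linspace(0, 1, 200)
times = np.linspace(0, 1, 40)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(times)) +
             0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(times)) +
             1e-4 * rng.standard_normal((200, 40)))

# POD basis: left singular vectors of the snapshot matrix, truncated so
# the retained singular values capture a given fraction of the "energy".
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]
```

Projecting the full-order state onto `basis` yields the reduced order model's coordinates; the adaptive snapshot selection in the paper decides which time steps enter `snapshots` in the first place.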

  9. Solving a combined cutting-stock and lot-sizing problem with a column generating procedure

    DEFF Research Database (Denmark)

    Nonås, Sigrid Lise; Thorstenson, Anders

    2008-01-01

    In Nonås and Thorstenson [A combined cutting stock and lot sizing problem. European Journal of Operational Research 120(2) (2000) 327-42] a combined cutting-stock and lot-sizing problem is outlined under static and deterministic conditions. In this paper we suggest a new column generating solutio...... indicate that the procedure works well also for the extended cutting-stock problem with only a setup cost for each pattern change....

  10. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic, FeS2) has been investigated using X-ray diffraction and 57Fe Moessbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, time of heating and starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  11. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the extraction rate of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.
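The apparent activation energy mentioned above is typically obtained from an Arrhenius plot: ln k is linear in 1/T with slope -Ea/R. The rate constants below are synthetic values generated from an assumed Ea, not measurements from the danburite study:

```python
import numpy as np

# Arrhenius law: k = A * exp(-Ea / (R T)), so ln k = ln A - (Ea/R) * (1/T).
R = 8.314                      # gas constant, J/(mol K)
Ea_true, A = 75e3, 2.0e8       # assumed activation energy (J/mol) and prefactor

# Hypothetical rate constants at several temperatures (K).
T = np.array([313.0, 323.0, 333.0, 343.0, 353.0])
k = A * np.exp(-Ea_true / (R * T))

# Linear fit of ln k against 1/T recovers the apparent activation energy.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R            # J/mol
```

With real kinetic data the same fit gives the apparent activation energy directly from the slope of the regression line.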

  12. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  13. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    International Nuclear Information System (INIS)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T.

    2014-01-01

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N2. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire the spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N2, NH3, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics for confirming the identity of the species observed in experiments and for identifying the elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N2.

  14. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    Science.gov (United States)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
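
The heap-transform algorithm itself is not reproduced in the abstract, but the classical Givens-rotation QR-decomposition it builds on can be sketched in a few lines of plain Python (an illustrative sketch only; the function name and structure are assumptions, not the paper's code):

```python
import math

def givens_qr(A):
    """QR-decompose a square matrix with classical Givens rotations.

    Returns (Q, R) with Q orthogonal and R upper triangular, so A = Q R.
    Pure-Python sketch of the baseline method; the heap-transform
    variant described in the paper is not reproduced here.
    """
    n = len(A)
    R = [row[:] for row in A]                                  # working copy of A
    Q = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for j in range(n):                          # zero out column j below the diagonal
        for i in range(j + 1, n):
            a, b = R[j][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r                 # rotation annihilating R[i][j]
            for k in range(n):                  # apply the rotation to rows j and i
                R[j][k], R[i][k] = c * R[j][k] + s * R[i][k], -s * R[j][k] + c * R[i][k]
                Q[j][k], Q[i][k] = c * Q[j][k] + s * Q[i][k], -s * Q[j][k] + c * Q[i][k]
    # the rows of Q currently hold the accumulated rotations (Q^T); transpose
    Q = [[Q[j][i] for j in range(n)] for i in range(n)]
    return Q, R
```

Each rotation zeroes one subdiagonal entry while preserving orthogonality, which is why the factorization is numerically stable even without pivoting.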

  15. Combination of canonical correlation analysis and empirical mode decomposition applied to denoising the labor electrohysterogram.

    Science.gov (United States)

    Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2011-09-01

    The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest, rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA, then the biggest part of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared CCA_EMD with the corresponding bipolar signals to demonstrate that the denoised signals are not degraded by the new method. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.

  16. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.

  17. Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    We apply the variational iteration method using He's polynomials (VIMHP) to solve fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered a clear advantage of this algorithm over the decomposition method.
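
The correction functional at the heart of the variational iteration method has the standard general form below (the generic form from the VIM literature, not the paper's specific fifth-order application); in VIMHP the nonlinear term is expanded in He's polynomials rather than Adomian's:

```latex
u_{n+1}(x) = u_n(x) + \int_{0}^{x} \lambda(s)\,\bigl( L u_n(s) + N \tilde{u}_n(s) - g(s) \bigr)\,\mathrm{d}s
```

Here $L$ is the linear operator, $N$ the nonlinear operator, $g$ the source term, $\lambda$ a general Lagrange multiplier identified via variational theory, and $\tilde{u}_n$ a restricted variation ($\delta\tilde{u}_n = 0$).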

  18. Mapping of Natural Radionuclides using Noise Adjusted Singular Value Decomposition, NASVD

    DEFF Research Database (Denmark)

    Aage, Helle Karina

    2006-01-01

    Mapping of natural radionuclides from airborne gamma spectrometry suffers from random "noise" in the spectra due to short measurement times. This is partly compensated for by using large-volume detectors to improve the counting statistics. One method of further improving the quality of the measured spectra is to remove from the spectra a large fraction of this random noise using a special variant of Singular Value Decomposition: Noise Adjusted Singular Value Decomposition. In 1997-1999 the natural radionuclides on the Danish island of Bornholm were mapped using a combination of the standard 3...

  19. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    Energy Technology Data Exchange (ETDEWEB)

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  20. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  1. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...

  2. Multisensors Cooperative Detection Task Scheduling Algorithm Based on Hybrid Task Decomposition and MBPSO

    Directory of Open Access Journals (Sweden)

    Changyun Liu

    2017-01-01

    A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented: the resource scheduling problem is decomposed into subtasks, and the sensor resource scheduling problem is thereby turned into the problem of matching sensors to subtasks. Secondly, a resource match optimization model based on the sensor resources and tasks is established, which considers several factors such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively; it is based on improved update rules for particle velocity and position that use a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm is better in terms of convergence speed, searching capability, solution accuracy, and efficiency.
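
The paper's doubt factor and modified sigmoid are not specified in the abstract. As a point of reference, the canonical binary PSO update (Kennedy and Eberhart) that such modifications start from can be sketched on a toy maximization problem; all names, parameter values, and the OneMax objective here are illustrative assumptions:

```python
import math
import random

def bpso_onemax(n_bits=12, n_particles=10, iters=60, seed=1):
    """Standard binary PSO on a toy OneMax problem (maximize the 1-bit count).

    Sketch of the baseline the paper modifies: velocities are real-valued,
    and a sigmoid of the velocity gives the probability that a bit is 1.
    """
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    g = max(P, key=sum)[:]                     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = 0.7 * V[i][d] + 1.5 * r1 * (P[i][d] - X[i][d]) \
                                        + 1.5 * r2 * (g[d] - X[i][d])
                V[i][d] = max(-4.0, min(4.0, V[i][d]))   # clamp velocity
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            if sum(X[i]) > sum(P[i]):          # update personal / global bests
                P[i] = X[i][:]
                if sum(P[i]) > sum(g):
                    g = P[i][:]
    return g
```

A modified sigmoid (as in the paper) would replace `sig` to reshape the bit-flip probability; a doubt factor would perturb the update to preserve diversity.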

  3. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

    We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange-relaxation-based suboptimal methods as well as to the optimal exhaustive-search method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.

  4. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    Science.gov (United States)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  5. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  6. Analysis of large fault trees based on functional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)

    2011-03-15

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  7. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating-point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
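
A minimal, unpreconditioned version of the conjugate gradient iteration referred to above can be sketched as follows (pure-Python illustration for a dense symmetric positive-definite system; the paper's contribution is the domain-decomposition preconditioning and parallelization, neither of which is shown here):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A.

    Classic CG: each step minimizes the error over a growing Krylov
    subspace, so for an n-by-n system it converges in at most n steps
    in exact arithmetic.
    """
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                        # residual b - A x for x = 0
    p = r[:]                        # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)     # optimal step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:            # residual small enough: converged
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

A domain-decomposition preconditioner would replace the raw residual `r` in the direction update with a preconditioned residual assembled from independent subdomain solves, which is what makes the method parallelizable.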

  8. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    Science.gov (United States)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at middle-high and low latitudes from a global perspective, the authors propose a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  9. Investigations on the remains of polycyclic aromatic hydrocarbons in contaminated soils after the addition of micro-organisms active in decomposition

    International Nuclear Information System (INIS)

    Mahro, B.; Kaestner, M.; Breuer-Jammali, M.; Schaefer, G.; Kasche, V.

    1993-01-01

    The microbial decomposition of polycyclic aromatic hydrocarbons (PAHs) by bacteria and fungi was previously mainly examined in liquid cultures. In examining the microbial decomposition of PAHs in soil, analytical problems arise: the disappearance of PAHs or other xenobiotics in soil cultures cannot be regarded as equivalent to biological decomposition, as absorption phenomena in the soil also have to be considered. (orig.)

  10. a Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    Science.gov (United States)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom use dual-polar data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area is located in northeastern China, where various wetland resources grow and sea ice appears in winter. The study data are GF-3 quad-pol data from China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated. Through several aspects of the experimental discussion, we can draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage, or low vegetation in the non-growing season; the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features computed from quad-polarization data. Moreover, they could become input for subsequent classification or parameter inversion.

  11. Quantitative lung perfusion evaluation using Fourier decomposition perfusion MRI.

    Science.gov (United States)

    Kjørstad, Åsmund; Corteville, Dominique M R; Fischer, Andre; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R

    2014-08-01

    To quantitatively evaluate lung perfusion using Fourier decomposition perfusion MRI. The Fourier decomposition (FD) method is a noninvasive method for assessing ventilation- and perfusion-related information in the lungs, where the perfusion maps in particular have shown promise for clinical use. However, the perfusion maps are nonquantitative and dimensionless, making follow-ups and direct comparisons between patients difficult. We present an approach to obtain physically meaningful and quantifiable perfusion maps using the FD method. The standard FD perfusion images are quantified by comparing the partially blood-filled pixels in the lung parenchyma with the fully blood-filled pixels in the aorta. The percentage of blood in a pixel is then combined with the temporal information, yielding quantitative blood flow values. The values of 10 healthy volunteers are compared with SEEPAGE measurements, which have shown high consistency with dynamic contrast-enhanced MRI. All pulmonary blood flow (PBF) values are within the expected range. The two methods are in good agreement (mean difference = 0.2 mL/min/100 mL, mean absolute difference = 11 mL/min/100 mL, mean PBF-FD = 150 mL/min/100 mL, mean PBF-SEEPAGE = 151 mL/min/100 mL). The Bland-Altman plot shows a good spread of values, indicating no systematic bias between the methods. Quantitative lung perfusion can be obtained using the Fourier decomposition method combined with a small amount of postprocessing. Copyright © 2013 Wiley Periodicals, Inc.

  12. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear-sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and 1 residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model). The choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid Model, 14.06% with the EEMD-Hybrid Model and 7.86% with the WD-Hybrid Model. - Highlights: • Hourly forecasting of GHI in a tropical climate with many cloud formation processes. • Clear-sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.

  13. A 3D domain decomposition approach for the identification of spatially varying elastic material parameters

    KAUST Repository

    Moussawi, Ali

    2015-02-24

    Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.

  14. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that, when combined, their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrate that the proposed method is efficient and accurate in identifying modal data of the structures.
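
The damping-identification step described above, fitting the decaying amplitude of an extracted modal response, can be illustrated with a logarithmic-decrement sketch (assumed simplifications: viscous damping and one recorded peak per cycle; the function name and fitting details are illustrative, not the paper's exact procedure):

```python
import math

def damping_from_peaks(peaks):
    """Estimate a modal damping ratio from decaying peak amplitudes.

    Fit a straight line to the log of successive peak amplitudes of one
    decomposed modal response; minus the slope is the logarithmic
    decrement per cycle, from which the damping ratio follows.
    """
    n = len(peaks)
    ys = [math.log(p) for p in peaks]           # log amplitudes decay linearly
    xs = list(range(n))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
          / sum((x - xbar) ** 2 for x in xs)    # least-squares slope
    delta = -slope                              # logarithmic decrement
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
```

Using a least-squares fit over many peaks, rather than a single peak pair, averages out measurement noise in the extracted modal response.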

  15. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Asieh Mansouri

    2018-01-01

    Full Text Available Background Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA that was measured using LogMAR (logarithm of the minimum angle of resolution. The living standard variable used for estimation of inequality was the economic status and was constructed by principal component analysis on home assets. Inequality indices were concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and BlinderOaxaca decomposition approaches respectively and compared the results. Results The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212. The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors of inequality in both concentration index and Blinder-Oaxaca decomposition. Percent contribution of these three factors in the concentration index and Blinder-Oaxaca decomposition was 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. Conclusion This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors of this inequality were similar in concentration index and Blinder-Oaxaca decomposition. 
So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people
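    The concentration index used in this record has a standard "convenient covariance" form: twice the covariance between the health variable and the fractional economic rank, divided by the mean. A minimal sketch on synthetic data (the variable names and data-generating choices are illustrative, not the study's data):

```python
import numpy as np

def concentration_index(health, wealth):
    """C = 2 * cov(h, fractional wealth rank) / mean(h); negative values mean
    the outcome is concentrated among poorer individuals."""
    h = np.asarray(health, dtype=float)
    n = len(h)
    frac_rank = np.empty(n)
    frac_rank[np.argsort(wealth)] = (np.arange(1, n + 1) - 0.5) / n
    cov = np.mean((h - h.mean()) * (frac_rank - frac_rank.mean()))
    return 2.0 * cov / h.mean()

rng = np.random.default_rng(0)
wealth = rng.uniform(size=2000)
# synthetic LogMAR values: worse (higher) among poorer individuals
logmar = 0.5 - 0.3 * wealth + rng.normal(0.0, 0.05, size=2000)
ci = concentration_index(logmar, wealth)
```

    A negative index, as in the record's -0.245, indicates the poor bear more of the burden.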

  16. Canonical Polyadic Decomposition With Auxiliary Information for Brain-Computer Interface.

    Science.gov (United States)

    Li, Junhua; Li, Chao; Cichocki, Andrzej

    2017-01-01

    Physiological signals are often organized along multiple dimensions (e.g., channel, time, task, and 3-D voxel), so it is better to preserve the original organizational structure during processing. Unlike vector-based methods that destroy data structure, canonical polyadic decomposition (CPD) processes physiological signals in the form of a multiway array, which captures relationships between dimensions and preserves the structural information contained in the physiological signal. Currently, CPD is utilized as an unsupervised method for feature extraction in classification problems. A classifier, such as a support vector machine, is then required to classify those features, so the classification task is achieved in two isolated steps. We propose a supervised CPD that directly incorporates auxiliary label information during decomposition, by which a classification task can be achieved without an extra step of classifier training. The proposed method merges decomposition and classifier learning, so it simplifies the classification procedure compared with separate decomposition and classification. To evaluate the performance of the proposed method, three different kinds of signals were used: a synthetic signal, an EEG signal, and an MEG signal. The results based on evaluations of synthetic and real signals demonstrated that the proposed method is effective and efficient.
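    The unsupervised CPD step the authors build on can be sketched as a plain alternating-least-squares (ALS) fit in NumPy. This is the generic rank-R CP model X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r], not the paper's supervised variant; tensor sizes and the rank are illustrative:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I*J) x R."""
    R = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

def cp_als(X, rank, n_iter=500, seed=0):
    """Rank-R canonical polyadic decomposition of a 3-way array by plain
    alternating least squares (unsupervised; no label information)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # mode unfoldings consistent with X[i, j, k] = sum_r A[i,r] B[j,r] C[k,r]
    X0 = X.reshape(I, -1)
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# sanity check on an exactly rank-2 synthetic tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
```

    The supervised variant described in the record would additionally couple one factor matrix to the class labels during these updates.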

  17. The Implementation of Physics Problem Solving Strategy Combined with Concept Map in General Physics Course

    Science.gov (United States)

    Hidayati, H.; Ramli, R.

    2018-04-01

    This paper describes the implementation of a Physics Problem Solving strategy combined with concept maps in General Physics learning at the Department of Physics, Universitas Negeri Padang. Action research was conducted in two cycles, where the end of each cycle was reflected upon and improvements were made for the next cycle. Implementation of the Physics Problem Solving strategy combined with concept maps increased student activity in solving general physics problems by an average of 15% and improved student learning outcomes from 42.7 in cycle I to 62.7 in cycle II in general physics at the Universitas Negeri Padang. In the future, the implementation of the Physics Problem Solving strategy combined with concept maps will need to be considered in physics courses.

  18. Oxidative decomposition of aromatic hydrocarbons by electron beam irradiation

    Science.gov (United States)

    Han, Do-Hung; Stuchinskaya, Tatiana; Won, Yang-Soo; Park, Wan-Sik; Lim, Jae-Kyong

    2003-05-01

    Decomposition of aromatic volatile organic compounds (VOCs) under electron beam irradiation was studied in order to examine the kinetics of the process, to characterize the reaction product distribution, and to develop a waste gas control technology. Toluene, ethylbenzene, o-, m-, and p-xylenes, and chlorobenzene were used as target materials. The experiments were carried out at doses ranging from 0.5 to 10 kGy, using a flow reactor under electron beam irradiation. Maximum degrees of decomposition, obtained at 10 kGy in an air environment, were 55-65% for the non-chlorinated aromatic VOCs and 85% for chlorobenzene. It was found that combining aromatic pollutants with chlorobenzene considerably increased their degradation, by up to nearly 50% compared to the same compounds in the absence of chlorine groups. Based on our experimental observations, the suggested degradation mechanism for aromatic compounds combined with a chloro-compound is that a chlorine radical, formed by EB irradiation, induces a chain reaction, resulting in accelerated oxidative destruction of the aromatic VOCs.

  19. Automatic Combination of Operators in a Genetic Algorithm to Solve the Traveling Salesman Problem.

    Directory of Open Access Journals (Sweden)

    Carlos Contreras-Bolton

    Full Text Available Genetic algorithms are powerful search methods inspired by Darwinian evolution. To date, they have been applied to many optimization problems because of their ease of use and their robustness in finding good solutions to difficult problems. The good performance of genetic algorithms is due in part to their two main variation operators, namely, the crossover and mutation operators. Typically, the literature uses a single crossover and a single mutation operator. However, studies have shown that using multiple operators produces synergy and that the operators are mutually complementary. Using multiple operators is not a simple task: which operators to use and how to combine them must be determined, which is itself an optimization problem. In this paper, it is proposed that the task of exploring different combinations of crossover and mutation operators can be carried out by evolutionary computing. The crossover and mutation operators used are those typically employed for solving the traveling salesman problem. The process of searching for good combinations was effective, yielding appropriate and synergic combinations of the crossover and mutation operators. The numerical results show that using the combination of operators obtained by evolutionary computing is better than using a single operator or multiple operators combined in the standard way. The results were also better than those of the latest operators reported in the literature.
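    The idea of drawing from a pool of variation operators can be sketched with a toy GA for the TSP. The operator pool, rates, and population settings here are illustrative choices, and the operator is picked uniformly at random rather than evolved as in the paper:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    kept = set(p1[a:b])
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(c for c in p2 if c not in kept)
    return [c if c is not None else next(fill) for c in child]

def swap_mutation(t):
    t = t[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def inversion_mutation(t):
    t = t[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j] = reversed(t[i:j])
    return t

def ga_tsp(dist, pop_size=60, gens=300, seed=0):
    random.seed(seed)
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    operators = [swap_mutation, inversion_mutation]      # pool of mutation operators
    for _ in range(gens):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[: pop_size // 2]                     # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = order_crossover(p1, p2)
            if random.random() < 0.3:
                child = random.choice(operators)(child)  # uniform operator choice
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))

# demo: ten cities on a unit circle (the optimal tour follows the circle)
pts = [(math.cos(2 * math.pi * i / 10), math.sin(2 * math.pi * i / 10)) for i in range(10)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = ga_tsp(dist)
```

    The paper's contribution is to replace the uniform `random.choice(operators)` with combinations that are themselves evolved.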

  20. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    Science.gov (United States)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
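    The specificity/ambiguity split of pointwise mutual information can be illustrated directly: for a realisation (s, t), i(s;t) = i⁺ − i⁻, with specificity i⁺ = −log₂ p(s) and ambiguity i⁻ = −log₂ p(s|t), both unsigned. A small sketch on a toy joint distribution (the distribution is made up for illustration):

```python
import math

def pointwise_terms(p_joint, s, t):
    """Specificity i+ = -log2 p(s); ambiguity i- = -log2 p(s|t);
    pointwise mutual information i(s;t) = i+ - i-."""
    p_s = sum(p for (si, ti), p in p_joint.items() if si == s)
    p_t = sum(p for (si, ti), p in p_joint.items() if ti == t)
    specificity = -math.log2(p_s)
    ambiguity = -math.log2(p_joint[(s, t)] / p_t)
    return specificity, ambiguity

# toy joint distribution over (source, target): a noisy binary channel
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
spec, amb = pointwise_terms(p, 0, 0)
pmi = spec - amb                      # = log2(p(s|t) / p(s))
```

    The decomposition in the record applies the redundancy-lattice construction separately to these two unsigned components.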

  1. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    Science.gov (United States)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and can be controlled through a constraint on the so-called coherences, and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than the performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. The decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms compared to others in the literature.

  2. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts, including: (1) degraded evaporator operation; (2) oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOPs) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is well demonstrated. In addition, as AOPs are considered 'green', their use allows any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min.
    Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing

  4. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    Czech Academy of Sciences Publication Activity Database

    Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.

    2008-01-01

    Vol. 14, No. 11 (2008), pp. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords: climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008

  5. Task decomposition for multilimbed robots to work in the reachable-but-unorientable space

    Science.gov (United States)

    Su, Chao; Zheng, Yuan F.

    1990-01-01

    Multilimbed industrial robots that have at least one arm and two or more legs are suggested for enlarging robot workspace in industrial automation. To plan the motion of a multilimbed robot, the arm-leg motion-coordination problem is raised and task decomposition is proposed to solve the problem; that is, a given task described by the destination position and orientation of the end-effector is decomposed into subtasks for arm manipulation and for leg locomotion, respectively. The former is defined as the end-effector position and orientation with respect to the legged main body, and the latter as the main-body position and orientation in the world coordinates. Three approaches are proposed for the task decomposition. The approaches are further evaluated in terms of energy consumption, from which an optimal approach can be selected.
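    The proposed decomposition of a world-frame end-effector task into a leg-locomotion subtask (main-body pose) and an arm-manipulation subtask (end-effector pose relative to the body) is composition of homogeneous transforms: T_task = T_body · T_arm. A sketch with arbitrary example poses:

```python
import numpy as np

def make_pose(yaw, x, y, z):
    """Homogeneous transform: rotation about z by `yaw`, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [x, y, z]
    return T

# task: world-frame pose the end-effector must reach (values are arbitrary)
T_task = make_pose(1.2, 2.0, 1.0, 0.5)
# leg-locomotion subtask: world-frame pose chosen for the main body
T_body = make_pose(0.4, 1.5, 0.8, 0.0)
# arm-manipulation subtask: end-effector pose relative to the legged body
T_arm = np.linalg.inv(T_body) @ T_task
# recomposing the two subtasks must reproduce the original task
assert np.allclose(T_body @ T_arm, T_task)
```

    The three approaches in the record differ in how the intermediate body pose T_body is chosen; the energy comparison then ranks those choices.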

  6. Harmony search algorithm for solving combined heat and power economic dispatch problems

    Energy Technology Data Exchange (ETDEWEB)

    Khorram, Esmaile, E-mail: eskhor@aut.ac.i [Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Ave., 15914 Tehran (Iran, Islamic Republic of); Jaberipour, Majid, E-mail: Majid.Jaberipour@gmail.co [Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Ave., 15914 Tehran (Iran, Islamic Republic of)

    2011-02-15

    Economic dispatch (ED) is one of the key optimization problems in electric power system operation. The problem grows complex if one or more units produce both power and heat. The combined heat and power economic dispatch (CHPED) problem is a complicated problem that needs powerful methods to solve. This paper presents a harmony search algorithm (EDHS) to solve CHPED. Some standard examples are presented to demonstrate the effectiveness of this algorithm in obtaining the optimal solution. In all cases, the solutions obtained using the EDHS algorithm are better than those obtained by other methods.
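    A bare-bones harmony search applied to a toy two-unit dispatch problem illustrates the algorithm's ingredients (harmony memory, memory-considering rate, pitch adjustment). The cost curves, penalty handling, and parameter values are illustrative and not taken from the paper:

```python
import random

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.01, iters=5000, seed=0):
    """Bare-bones harmony search: improvise a new 'harmony' from memory with
    rate hmcr, pitch-adjust with rate par, replace the worst member if better."""
    random.seed(seed)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                x = random.choice(memory)[d]            # memory consideration
                if random.random() < par:               # pitch adjustment
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:
                x = random.uniform(lo, hi)              # random improvisation
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: cost(memory[i]))
        if cost(new) < cost(memory[worst]):
            memory[worst] = new
    return min(memory, key=cost)

# toy quadratic fuel-cost curves for two units; the 300 MW demand constraint
# is handled by a simple penalty (illustrative, not the paper's CHPED model)
def dispatch_cost(p):
    fuel = 0.01 * p[0] ** 2 + 2.0 * p[0] + 0.012 * p[1] ** 2 + 1.8 * p[1]
    return fuel + 1000.0 * abs(p[0] + p[1] - 300.0)

best = harmony_search(dispatch_cost, bounds=[(50.0, 250.0), (50.0, 250.0)])
```

    A real CHPED formulation adds heat demand, coupled feasible operating regions for cogeneration units, and per-unit limits on both outputs.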

  7. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure the interpretability of sources extracted by tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  8. Complex multidisciplinary systems decomposition for aerospace vehicle conceptual design and technology acquisition

    Science.gov (United States)

    Omoragbon, Amen

    Although the Aerospace and Defense (A&D) industry is a significant contributor to the United States' economy, national prestige, and national security, it experiences significant cost and schedule overruns. This problem is related to the differences between technology acquisition assessments and aerospace vehicle conceptual design. Acquisition assessments evaluate broad sets of alternatives with mostly qualitative techniques, while conceptual design tools evaluate narrow sets of alternatives with multidisciplinary tools. In order for these two fields to communicate effectively, a common platform for both concerns is desired. This research is an original contribution to a three-part solution to this problem. It discusses the decomposition step of an innovative technology and sizing tool generation framework. It identifies complex multidisciplinary system definitions as a bridge between acquisition and conceptual design. It establishes complex multidisciplinary building blocks that can be used to build synthesis systems as well as technology portfolios. It also describes a graphical user interface designed to aid in the decomposition process. Finally, it demonstrates an application of the methodology to a relevant acquisition and conceptual design problem posed by the US Air Force.

  9. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was illustrated in a recent paper. Several 2-D extensions of the EMD method have recently been proposed; despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. The approach has been applied to both signal and image decomposition, and the obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, and texture analysis.
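    For contrast with the PDE formulation, the algorithmic "sifting process" it replaces can be sketched in 1-D: interpolate envelopes through the local maxima and minima and subtract their mean. This sketch uses linear envelopes for brevity (canonical EMD uses cubic splines), and the test signal is illustrative:

```python
import numpy as np

def sift_once(t, x):
    """One sifting step: envelopes through local extrema, minus the mean envelope."""
    def extrema(sign):
        idx = [i for i in range(1, len(x) - 1)
               if sign * x[i] >= sign * x[i - 1] and sign * x[i] >= sign * x[i + 1]]
        return np.array([0] + idx + [len(x) - 1])    # anchor the endpoints too
    up, lo = extrema(+1), extrema(-1)
    upper = np.interp(t, t[up], x[up])               # linear upper envelope
    lower = np.interp(t, t[lo], x[lo])               # linear lower envelope
    return x - (upper + lower) / 2.0                 # candidate IMF

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.8 * np.sin(2 * np.pi * 3 * t)   # fast + slow
imf = x
for _ in range(5):                                   # a few sifting iterations
    imf = sift_once(t, imf)
```

    After a few iterations `imf` approximates the fast oscillation; the slow residue would feed the next IMF extraction. The PDE approach in the record replaces this extrema-interpolation loop with a diffusion-based estimate of the mean envelope.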

  10. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in the ternary alloys results in the formation of bcc (ordered) and fcc structures, and in the quaternary alloys, bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition produces a highly dispersed structure and magnetic properties approaching the level of well-known vikalloys.

  11. Non-linear analytic and coanalytic problems (Lp-theory, Clifford analysis, examples)

    International Nuclear Information System (INIS)

    Dubinskii, Yu A; Osipenko, A S

    2000-01-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the 'orthogonal' sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented

  12. Simplified approaches to some nonoverlapping domain decomposition methods

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method. Other simple technical tools include "local-global" and "global-local" techniques: the former constructs a subspace preconditioner from a preconditioner on the whole space, whereas the latter constructs a preconditioner on the whole space from a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods are presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
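    The "parallel subspace correction" (additive Schwarz) framework mentioned above can be sketched on a 1-D Poisson problem: the preconditioner sums independent local solves on subdomains. For simplicity this sketch uses overlapping subdomains (the talk emphasises nonoverlapping variants), and the sizes, overlap, and damping factor are illustrative choices:

```python
import numpy as np

def laplacian_1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz(A, subdomains):
    """Preconditioner action r -> sum_i R_i^T A_i^{-1} R_i r: independent
    local Dirichlet solves whose corrections are added up."""
    local = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for idx, Ainv in local:
            z[idx] += Ainv @ r[idx]   # restrict, solve locally, prolong, add
        return z
    return apply

n = 60
A = laplacian_1d(n)
b = np.ones(n)
subs = [np.arange(0, 35), np.arange(25, 60)]   # two overlapping subdomains
M = additive_schwarz(A, subs)

x = np.zeros(n)
for _ in range(1000):                 # damped preconditioned Richardson iteration
    x = x + 0.5 * M(b - A @ x)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

    In practice the same `M` would be used inside conjugate gradients rather than a Richardson loop, and larger problems would add a coarse-space correction.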

  13. Comparison differential transformation technique with Adomian decomposition method for linear and nonlinear initial value problems

    International Nuclear Information System (INIS)

    Abdel-Halim Hassan, I.H.

    2008-01-01

    In this paper, we compare the differential transformation method (DTM) and the Adomian decomposition method (ADM) for solving partial differential equations (PDEs). The definition and operations of the differential transform method were introduced by Zhou [Zhou JK. Differential transformation and its application for electrical circuits. Wuhan, China: Huazhong University Press; 1986 [in Chinese]].
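    The ADM side of the comparison can be illustrated on the simplest initial value problem, y' = y with y(0) = 1, where the Adomian recursion reproduces the Taylor series of exp(t) term by term:

```python
import math

def adomian_terms(n_terms):
    """Adomian recursion for y' = y, y(0) = 1:
    y_0 = 1,  y_{k+1}(t) = integral_0^t y_k(s) ds  =>  y_k = t^k / k!.
    Each term is stored as polynomial coefficients (lowest order first)."""
    terms = [[1.0]]
    for _ in range(n_terms - 1):
        prev = terms[-1]
        terms.append([0.0] + [c / (i + 1) for i, c in enumerate(prev)])  # integrate
    return terms

def partial_sum(terms, t):
    return sum(c * t ** i for poly in terms for i, c in enumerate(poly))

terms = adomian_terms(15)
approx = partial_sum(terms, 1.0)      # partial sums rebuild exp(t); here t = 1
```

    For nonlinear problems, the integrand of each step is replaced by the Adomian polynomials of the nonlinearity, which is where ADM and DTM genuinely diverge in bookkeeping.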

  14. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors and then denoising them, we define a suitable TV-type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To minimize this functional, we propose generalized forward-backward splitting algorithms which combine an explicit step and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data; in particular, we work on real 3-D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  15. Can visible light impact litter decomposition under pollution of ZnO nanoparticles?

    Science.gov (United States)

    Du, Jingjing; Zhang, Yuyan; Liu, Lina; Qv, Mingxiang; Lv, Yanna; Yin, Yifei; Zhou, Yinfei; Cui, Minghui; Zhu, Yanfeng; Zhang, Hongzhong

    2017-11-01

    ZnO nanoparticles are among the most widely used nanomaterials, with applications including antibacterial coatings, electronic devices, and personal care products. With the development of nanotechnology, the ecotoxicology of ZnO nanoparticles has received increasing attention. To assess the phototoxicity of ZnO nanoparticles in aquatic ecosystems, microcosm experiments were conducted on Populus nigra L. leaf litter decomposition under the combined effect of ZnO nanoparticles and visible light radiation. The litter decomposition rate, pH value, extracellular enzyme activities, and the relative contributions of the fungal community to litter decomposition were studied. Results showed that long-term exposure to ZnO nanoparticles led to a significant decrease in the litter decomposition rate (0.26 m⁻¹ vs 0.45 m⁻¹), and that visible light increased the inhibitory effect (0.24 m⁻¹), causing significant decreases in the pH of the litter cultures, the fungal sporulation rate, and most extracellular enzyme activities. The phototoxicity of ZnO nanoparticles also affected fungal community composition, especially the genus Varicosporium, whose abundance was significantly and positively related to the decomposition rate. In conclusion, our study provides evidence for the negative effects of ZnO NP photocatalysis on the ecological process of litter decomposition and highlights the contribution of visible light radiation to nanoparticle toxicity in freshwater ecosystems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Vegetation exerts a greater control on litter decomposition than climate warming in peatlands.

    Science.gov (United States)

    Ward, Susan E; Orwin, Kate H; Ostle, Nicholas J; Briones, J I; Thomson, Bruce C; Griffiths, Robert I; Oakley, Simon; Quirk, Helen; Bardgett, Richard D

    2015-01-01

    Historically, slow decomposition rates have resulted in the accumulation of large amounts of carbon in northern peatlands. Both climate warming and vegetation change can alter rates of decomposition, and hence affect rates of atmospheric CO2 exchange, with consequences for climate change feedbacks. Although warming and vegetation change are happening concurrently, little is known about their relative and interactive effects on decomposition processes. To test the effects of warming and vegetation change on decomposition rates, we placed litter of three dominant species (Calluna vulgaris, Eriophorum vaginatum, Hypnum jutlandicum) into a peatland field experiment that combined warming with plant functional group removals, and measured mass loss over two years. To identify potential mechanisms behind effects, we also measured nutrient cycling and soil biota. We found that plant functional group removals exerted a stronger control over short-term litter decomposition than did approximately 1 °C of warming, and that the plant removal effect depended on litter species identity. Specifically, rates of litter decomposition were faster when shrubs were removed from the plant community, and these effects were strongest for graminoid and bryophyte litter. Plant functional group removals also had strong effects on the soil biota and nutrient cycling associated with decomposition, whereby shrub removal had cascading effects on soil fungal community composition, increased enchytraeid abundance, and increased rates of N mineralization. Our findings demonstrate that, in addition to litter quality, changes in vegetation composition play a significant role in regulating short-term litter decomposition and belowground communities in peatland, and that these impacts can be greater than moderate warming effects. Our findings, albeit from a relatively short-term study, highlight the need to consider both vegetation change and its impacts below ground alongside climatic effects when

  17. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    Science.gov (United States)

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode-mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine the effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
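    The HOSVD step used in this pipeline to analyse the aligned IMF tensor can be sketched in NumPy: one SVD per mode unfolding gives the factor matrices, and projecting onto them gives the core tensor. Tensor sizes here are illustrative:

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_dot(X, M, mode):
    """Mode-`mode` product: contract M's last axis with that tensor axis."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(X, mode, 0), axes=1), 0, mode)

def hosvd(X):
    """Higher-order SVD: per-mode factor matrices from the left singular
    vectors of each unfolding, plus the core tensor."""
    factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0]
               for m in range(X.ndim)]
    core = X
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))    # e.g. channels x IMFs x time (toy sizes)
core, factors = hosvd(X)

# reconstruction: X = core x_1 U1 x_2 U2 x_3 U3
Y = core
for m, U in enumerate(factors):
    Y = mode_dot(Y, U, m)
```

    In the record's application, the decay of each mode's singular values is what supports estimating the number of faults.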

  18. Understanding Catalytic Activity Trends for NO Decomposition and CO Oxidation using Density Functional Theory and Microkinetic Modeling

    DEFF Research Database (Denmark)

    Falsig, Hanne

    The main aim of this thesis is to understand the catalytic activity of transition metals and noble metals for the direct decomposition of NO and the oxidation of CO. The formation of NOx from combustion of fossil and renewable fuels continues to be a dominant environmental issue. We take one step towards rationalizing trends in the catalytic activity of transition metal catalysts for NO decomposition by combining microkinetic modelling with density functional theory calculations. We establish the full potential energy diagram for the direct NO decomposition reaction over stepped transition- and noble-metal surfaces by combining a database of adsorption energies on stepped metal surfaces with known Brønsted–Evans–Polanyi (BEP) relations for the activation barriers of dissociation of diatomic molecules. The potential energy diagram directly points to why Pd...

  19. Domain decomposition method using a hybrid parallelism and a low-order acceleration for solving the Sn transport equation on unstructured geometry

    International Nuclear Information System (INIS)

    Odry, Nans

    2016-01-01

    Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core calculations have no choice but to rely on simplifying assumptions (assembly- then core-scale steps). The PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacities (HPC), one can nowadays solve 3D core-sized problems using both high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations with deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even though all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k_eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performance of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the

  20. Player Skill Decomposition in Multiplayer Online Battle Arenas

    OpenAIRE

    Chen, Zhengxing; Sun, Yizhou; El-nasr, Magy Seif; Nguyen, Truong-Huy D.

    2017-01-01

    Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal to understand what player...

  1. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  2. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
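
For order d = 2 the decomposition above specializes to the spectral theorem: a symmetric matrix is a sum of rank-1 symmetric terms λᵢ·vᵢvᵢᵀ. A numpy sketch of this special case only — the paper's algorithm targets higher orders via Hankel-matrix computations:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random symmetric matrix (an order-2 symmetric tensor), dimension n = 4.
B = rng.standard_normal((4, 4))
S = (B + B.T) / 2.0

# Spectral decomposition: S = sum_i lam[i] * outer(v_i, v_i),
# i.e. a sum of rank-1 symmetric tensors.
lam, V = np.linalg.eigh(S)
rank1_terms = [lam[i] * np.outer(V[:, i], V[:, i]) for i in range(4)]
reconstruction = sum(rank1_terms)

print(np.allclose(reconstruction, S))  # True
```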

  3. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed via an exchange reaction between the perchloric acid released by the decomposition and potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat conductivity detector. The exchange reaction yielding CO2 is quantitative; it is not the limiting step and does not distort the kinetics of the perchlorate decomposition. The solid decomposition products were studied by infrared and NMR spectroscopy, roentgenography, thermography and chemical analysis. The mechanism suggested for the decomposition involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The decomposition mechanism is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage

  4. Fast matrix factorization algorithm for DOSY based on the eigenvalue decomposition and the difference approximation focusing on the size of observed matrix

    International Nuclear Information System (INIS)

    Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki

    2017-01-01

    This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. A well-known approach to this problem is the direct exponential curve resolution algorithm (DECRA). DECRA is based on singular value decomposition; the advantage of this algorithm is that no initial value is required. However, DECRA requires a long calculating time, depending on the size of the given observed matrix, due to the singular value decomposition, and this is a serious problem in practical use. Thus, this paper proposes a new analysis algorithm for DOSY that achieves a short calculating time. In order to solve the matrix factorization for DOSY without using singular value decomposition, this paper focuses on the size of the given observed matrix. The observed matrix in DOSY is a rectangular matrix with more columns than rows, due to limitations of the measuring time; thus, the proposed algorithm transforms the given observed matrix into a small observed matrix. The proposed algorithm applies the eigenvalue decomposition and the difference approximation to the small observed matrix, and the matrix factorization problem for DOSY is solved. The simulation and a data analysis show that the proposed algorithm achieves a shorter calculating time than DECRA as well as similar analysis results to DECRA. (author)
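
The size-reduction idea underlying this kind of approach — for a wide m×n matrix (n ≫ m), work with the small m×m Gram matrix XXᵀ, whose eigenvalues are the squared singular values of X — can be checked directly in numpy. This is a generic illustration of the trick, not the proposed DOSY algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 200))          # wide: 5 rows, 200 columns

# Small symmetric eigenproblem on the 5x5 Gram matrix ...
w, U = np.linalg.eigh(X @ X.T)             # eigenvalues in ascending order
sv_from_eig = np.sqrt(np.maximum(w[::-1], 0.0))

# ... reproduces the singular values of the full 5x200 SVD.
sv_full = np.linalg.svd(X, compute_uv=False)
print(np.allclose(sv_from_eig, sv_full))   # True
```

The eigenproblem costs O(m³) plus the O(m²n) Gram product, versus an SVD on the full m×n matrix, which is the saving the abstract alludes to.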

  5. Modeling of ferric sulfate decomposition and sulfation of potassium chloride during grate‐firing of biomass

    DEFF Research Database (Denmark)

    Wu, Hao; Jespersen, Jacob Boll; Jappe Frandsen, Flemming

    2013-01-01

    Ferric sulfate is used as an additive in biomass combustion to convert the released potassium chloride to the less harmful potassium sulfate. The decomposition of ferric sulfate is studied in a fast heating rate thermogravimetric analyzer and a volumetric reaction model is proposed to describe...... the process. The yields of sulfur oxides from ferric sulfate decomposition under boiler conditions are investigated experimentally, revealing a distribution of approximately 40% SO3 and 60% SO2. The ferric sulfate decomposition model is combined with a detailed kinetic model of gas‐phase KCl sulfation...... and a model of K2SO4 condensation to simulate the sulfation of KCl by ferric sulfate addition. The simulation results show good agreements with experiments conducted in a biomass grate‐firing reactor. The results indicate that the SO3 released from ferric sulfate decomposition is the main contributor to KCl...

  6. Identification of Combined Power Quality Disturbances Using Singular Value Decomposition (SVD) and Total Least Squares-Estimation of Signal Parameters via Rotational Invariance Techniques (TLS-ESPRIT)

    Directory of Open Access Journals (Sweden)

    Huaishuo Xiao

    2017-11-01

    Full Text Available In order to identify various kinds of combined power quality disturbances, singular value decomposition (SVD) and the improved total least squares-estimation of signal parameters via rotational invariance techniques (TLS-ESPRIT) are combined as the basis of disturbance identification in this paper. SVD is applied to identify the catastrophe points of disturbance intervals, based on which the disturbance intervals are segmented. The improved TLS-ESPRIT, optimized by a singular value norm method, is then used to analyze each data segment and extract the amplitude, frequency, attenuation coefficient and initial phase of the various disturbances. Multi-group combined disturbance test signals are constructed in MATLAB, and the proposed method is also tested on measured data from the IEEE Power and Energy Society (PES) Database. The test results show that the proposed method has a higher accuracy than conventional TLS-ESPRIT and could be used for the identification of measured data.

  7. Computational Study on a PTAS for Planar Dominating Set Problem

    Directory of Open Access Journals (Sweden)

    Qian-Ping Gu

    2013-01-01

    Full Text Available The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduces a k-outer planar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n)-time, where c is a constant, (1+1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio (1 + 2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1 + 1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.

  8. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces, each matched with an appropriate meshing algorithm, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is part of the feature-based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination, to extract decomposition features; (2) Cutting Surface Generation, to form the ''tailored'' cutting surfaces; (3) Body Decomposition, to get the imprinted volumes; and (4) Meshing Algorithm Assignment, to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated on several parts with complicated topology and geometry.

  9. High-purity Cu nanocrystal synthesis by a dynamic decomposition method

    Science.gov (United States)

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-12-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. The process is investigated through a combined experimental and computational approach. The decomposition kinetics is studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of high-purity Cu nanocrystals.
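
The Kissinger method mentioned above extracts an activation energy from the shift of the peak temperature T_p with heating rate β, via ln(β/T_p²) = const − E_a/(R·T_p), so a straight-line fit of ln(β/T_p²) against 1/T_p has slope −E_a/R. A sketch with synthetic data — the E_a, intercept, and temperatures below are made-up illustrative values, not the paper's measurements:

```python
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
Ea_true = 120e3                # assumed activation energy, J/mol
const = 5.0                    # lumped ln(A*R/Ea) intercept (illustrative)

# Synthetic peak temperatures; generate heating rates consistent with the
# Kissinger line so the fit should recover Ea_true.
Tp = np.array([520.0, 530.0, 540.0, 550.0])          # K
beta = Tp**2 * np.exp(const - Ea_true / (R * Tp))

# Kissinger plot: ln(beta/Tp^2) vs 1/Tp is linear with slope -Ea/R.
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea_est = -slope * R
print(Ea_est / 1e3)            # ~120 kJ/mol
```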

  10. Simon on problem solving

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    2006-01-01

    as a general approach to problem solving. We apply these Simonian ideas to organisational issues, specifically new organisational forms. Specifically, Simonian ideas allow us to develop a morphology of new organisational forms and to point to some design problems that characterise these forms.......Two of Herbert Simon's best-known papers are 'The Architecture of Complexity' and 'The Structure of Ill-Structured Problems.' We discuss the neglected links between these two papers, highlighting the role of decomposition in the context of problems on which constraints have been imposed...

  11. The Solution of Two-Phase Inverse Stefan Problem Based on a Hybrid Method with Optimization

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2015-01-01

    Full Text Available The two-phase Stefan problem is widely used in industrial fields. This paper focuses on solving the two-phase inverse Stefan problem when the moving interface is unknown, which is more realistic from the practical point of view. With the help of an optimization method, the paper presents a hybrid method which combines the homotopy perturbation method with the improved Adomian decomposition method to solve this problem. A simulation experiment demonstrates the validity of this method. Since the optimization method plays a very important role in this approach, we also propose a modified spectral DY conjugate gradient method and establish its convergence. A further simulation experiment illustrates the effectiveness of this modified spectral DY conjugate gradient method.
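
The Adomian decomposition method referred to above writes the solution as a series y = Σ yₙ, expanding the nonlinearity in Adomian polynomials Aₙ and obtaining each yₙ₊₁ by integrating Aₙ. A minimal pure-Python sketch for the toy initial-value problem y' = y², y(0) = 1 (exact solution 1/(1−t)) — an illustration of the method itself, not the paper's Stefan-problem hybrid:

```python
# Adomian decomposition for y' = y^2, y(0) = 1; exact solution 1/(1 - t).
# Each series term y_n(t) is a polynomial stored as a coefficient list.

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_int(p):
    """Antiderivative from 0 (zero constant term)."""
    return [0.0] + [a / (i + 1) for i, a in enumerate(p)]

def poly_eval(p, t):
    return sum(a * t**i for i, a in enumerate(p))

N = 20
terms = [[1.0]]                     # y_0 = 1, the initial condition
for n in range(N):
    # Adomian polynomial for f(y) = y^2: A_n = sum over i+j=n of y_i * y_j.
    A_n = [0.0]
    for i in range(n + 1):
        A_n = poly_add(A_n, poly_mul(terms[i], terms[n - i]))
    terms.append(poly_int(A_n))     # y_{n+1}(t) = integral_0^t A_n(s) ds

approx = sum(poly_eval(y_n, 0.5) for y_n in terms)
print(approx)                       # close to 1/(1 - 0.5) = 2
```

For this equation the terms come out as yₙ = tⁿ, so the partial sums reproduce the geometric series of the exact solution.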

  12. Parallel performance of the angular versus spatial domain decomposition for discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Fischer, J.W.; Azmy, Y.Y.

    2003-01-01

    A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduced algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. 
The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
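
The three-component model described above can be made concrete: total run time is a serial term independent of P, a parallel term scaling like 1/P, and a communication term that grows with P at a rate depending on the decomposition and reduction algorithm. All constants and growth exponents below are made-up illustrative values, not the paper's measured parameters:

```python
# Three-component parallel run-time model: T(P) = serial + work/P + comm(P).
# Constants are illustrative only; real values come from fitting measurements.

def t_add(P, serial=1.0, work=400.0, comm=0.05):
    return serial + work / P + comm * P           # comm grows linearly in P

def t_sdd(P, serial=1.0, work=400.0, comm=0.5):
    return serial + work / P + comm * (P ** 0.5)  # comm grows more slowly

for P in (1, 4, 16, 64, 256):
    print(P, round(t_add(P), 2), round(t_sdd(P), 2))
```

With these assumed constants the 1/P work term dominates at moderate P (consistent with the abstract's conclusion), while at large P the slower-growing communication term makes the second model win.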

  13. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. The transform can be used as a preprocessing stage in pattern clustering methods. Our results indicate that the performance of a combined system including this transform and classifiers is comparable with that obtained using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method together with artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering, looking for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
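
The limitation that motivates TFM-SVD — the SVD of a 1-by-n array yields exactly one singular value, namely the Euclidean norm of the signal — is easy to verify. A generic numpy check, not the TFM-SVD construction itself:

```python
import numpy as np

signal = np.sin(np.linspace(0.0, 10.0, 100)).reshape(1, -1)  # 1-by-100 array
s = np.linalg.svd(signal, compute_uv=False)

print(s.size)                                    # 1: a single singular value
print(np.isclose(s[0], np.linalg.norm(signal)))  # True: it is just the norm
```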

  14. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    Science.gov (United States)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two new kinds of mathematical models of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  15. Methanol Oxidation on Pt3Sn(111) for Direct Methanol Fuel Cells: Methanol Decomposition.

    Science.gov (United States)

    Lu, Xiaoqing; Deng, Zhigang; Guo, Chen; Wang, Weili; Wei, Shuxian; Ng, Siu-Pang; Chen, Xiangfeng; Ding, Ning; Guo, Wenyue; Wu, Chi-Man Lawrence

    2016-05-18

    PtSn alloy, which is a potential material for use in direct methanol fuel cells, can efficiently promote methanol oxidation and alleviate the CO poisoning problem. Herein, methanol decomposition on Pt3Sn(111) was systematically investigated using periodic density functional theory and microkinetic modeling. The geometries and energies of all of the involved species were analyzed, and the decomposition network was mapped out to elaborate the reaction mechanisms. Our results indicated that methanol and formaldehyde were weakly adsorbed, and the other derivatives (CHxOHy, x = 1-3, y = 0-1) were strongly adsorbed and preferred decomposition rather than desorption on Pt3Sn(111). The competitive methanol decomposition started with the initial O-H bond scission followed by successive C-H bond scissions, (i.e., CH3OH → CH3O → CH2O → CHO → CO). The Brønsted-Evans-Polanyi relations and energy barrier decomposition analyses identified the C-H and O-H bond scissions as being more competitive than the C-O bond scission. Microkinetic modeling confirmed that the vast majority of the intermediates and products from methanol decomposition would escape from the Pt3Sn(111) surface at a relatively low temperature, and the coverage of the CO residue decreased with an increase in the temperature and decrease in partial methanol pressure.

  16. Sparse time-frequency decomposition based on dictionary adaptation.

    Science.gov (United States)

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  17. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  18. Influence of dislocation glide on the spinodal decomposition of fatigued duplex stainless steels

    Energy Technology Data Exchange (ETDEWEB)

    Herenu, S., E-mail: herenu@ifir-conicet.gov.ar [Instituto de Fisica Rosario, Bv. 27 de Febrero 210 bis, (2000) Rosario, Santa Fe (Argentina); Sennour, M. [MINES ParisTech, Centre des Materiaux - UMR CNRS 7633 - 91003, Evry Cedex (France); Balbi, M.; Alvarez-Armas, I. [Instituto de Fisica Rosario, Bv. 27 de Febrero 210 bis, (2000) Rosario, Santa Fe (Argentina); Thorel, A. [MINES ParisTech, Centre des Materiaux - UMR CNRS 7633 - 91003, Evry Cedex (France); Armas, A.F. [Instituto de Fisica Rosario, Bv. 27 de Febrero 210 bis, (2000) Rosario, Santa Fe (Argentina)

    2011-09-25

    Highlights: • Dislocation bands and microbands are developed in the α phase of fatigued aged DSS. • Inside these structures, demodulation of the spinodal decomposition (SD) was found. • This fact could take part in the cyclic softening displayed by DSS S32750. • Cyclic tests at 475 °C show a saturation stage at the end of fatigue life. • This could be explained by the effect of demodulation and creation of SD. - Abstract: The present work is focused on assessing the influence of dislocation movement on spinodal decomposition through scanning transmission electron microscopy (STEM) in combination with energy dispersive X-ray spectroscopy (EDS) analysis in aged duplex stainless steel (DSS) S32750. Dislocation bands and microbands are the prominent dislocation arrangements observed in fatigue-tested aged samples. By EDS measurements it was found that the spinodal decomposition was dissolved inside these dislocation structures. Therefore, the mechanism of microband formation developed in the ferritic phase during cycling seems to be responsible for the demodulation of the spinodal decomposition and the cyclic softening of the aged DSS.

  19. Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition

    Directory of Open Access Journals (Sweden)

    Cécile Germain‐Renaud

    1999-01-01

    Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.

  20. Decomposition dynamic of two aquatic macrophytes Trapa bispinosa Roxb. and Nelumbo nucifera detritus.

    Science.gov (United States)

    Zhou, Xiaohong; Feng, Deyou; Wen, Chunzi; Liu, Dan

    2018-03-29

    In freshwater ecosystems, aquatic macrophytes play significant roles in nutrient cycling. One problem in this process is nutrient loss from the tissues of plants that are not harvested in time. In this study, we used two aquatic species, Nelumbo nucifera and Trapa bispinosa Roxb., to investigate the decomposition dynamics and nutrient release from detritus. Litter bags containing 10 g of stems (plus petioles) and leaves of each species' detritus were incubated in a pond from November 2016 to May 2017. Litterbags were retrieved nine times, on days 6, 14, 25, 45, 65, 90, 125, 145, and 165 of the decomposition experiment, to monitor biomass loss and nutrient release. The results suggested that the dry masses of N. nucifera and T. bispinosa decomposed by 49.35-69.40 and 82.65-91.65%, respectively. The order of decomposition rate constants (k) is as follows: leaves of T. bispinosa (0.0122 day^-1) > stems (plus petioles) of T. bispinosa (0.0090 day^-1) > leaves of N. nucifera (0.0060 day^-1) > stems (plus petioles) of N. nucifera (0.0030 day^-1). Additionally, the orders of time for 50% dry mass decay, time for 95% dry mass decay, and turnover rate are as follows: leaves  0.05). In addition, the decomposition time also had significant effects on the detritus decomposition dynamics and nutrient release. However, the contributions of species and decomposition time to detritus decomposition differed significantly on the basis of their F values in the two-way ANOVA results. This study can provide a scientific basis for the management of aquatic plants in freshwater ecosystems of eastern China.
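
Rate constants like those quoted above come from the standard single-exponential litter decay model W(t) = W₀·e^(−kt), from which the 50% and 95% decay times follow directly as t₅₀ = ln 2 / k and t₉₅ = ln 20 / k. A quick check on the k values reported in the abstract (assuming this standard model; the paper may report slightly different derived times):

```python
import math

# Decay constants from the abstract, per day.
k = {
    "T. bispinosa leaves": 0.0122,
    "T. bispinosa stems (plus petioles)": 0.0090,
    "N. nucifera leaves": 0.0060,
    "N. nucifera stems (plus petioles)": 0.0030,
}

for name, ki in k.items():
    t50 = math.log(2) / ki       # time for 50% dry-mass loss, days
    t95 = math.log(20) / ki      # time for 95% dry-mass loss, days
    print(f"{name}: t50 = {t50:.0f} d, t95 = {t95:.0f} d")
```

For example, the fastest-decomposing material (T. bispinosa leaves) reaches 50% mass loss in about 57 days, while the slowest (N. nucifera stems) needs roughly 231 days.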

  1. Europlexus: a domain decomposition method in explicit dynamics

    International Nuclear Information System (INIS)

    Faucher, V.; Hariddh, Bung; Combescure, A.

    2003-01-01

    Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)

  2. Detection of Copper (II) and Cadmium (II) binding to dissolved organic matter from macrophyte decomposition by fluorescence excitation-emission matrix spectra combined with parallel factor analysis

    International Nuclear Information System (INIS)

    Yuan, Dong-hai; Guo, Xu-jing; Wen, Li; He, Lian-sheng; Wang, Jing-gang; Li, Jun-qi

    2015-01-01

    Fluorescence excitation-emission matrix (EEM) spectra coupled with parallel factor analysis (PARAFAC) were used to characterize dissolved organic matter (DOM) derived from macrophyte decomposition, and to study its complexation with Cu (II) and Cd (II). Both the protein-like and the humic-like components showed a marked quenching effect with Cu (II). Negligible quenching effects were found for Cd (II) with components 1, 5 and 6. The stability constants and the fraction of binding fluorophores for the humic-like components and Cu (II) can be influenced by macrophyte decomposition across the various plant-biomass gradients studied. Within an appropriate range of aquatic phytomass, macrophyte decomposition can maximize the stability constant of DOM-metal complexes. A large amount of organic matter is introduced into the aquatic environment by macrophyte decomposition, suggesting that the potential risk of DOM as a carrier of heavy metal contamination in macrophytic lakes should not be ignored. - Highlights: • Macrophyte decomposition increases fluorescent DOM components in the upper sediment. • Protein-like components are quenched or enhanced by adding Cu (II) and Cd (II). • Macrophyte decomposition DOM can impact the affinity of Cu (II) and Cd (II). • The log K_M and f values showed a marked change due to macrophyte decomposition. • Macrophyte decomposition can maximize the stability constant of DOM-Cu (II) complexes. - Macrophyte decomposition DOM can influence the binding affinity of metal ions in macrophytic lakes

  3. Art of spin decomposition

    International Nuclear Information System (INIS)

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-01-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  4. Combination of graph heuristics in producing initial solution of curriculum based course timetabling problem

    Science.gov (United States)

    Wahid, Juliana; Hussin, Naimah Mohd

    2016-08-01

    The construction of a population of initial solutions is a crucial task in population-based metaheuristic approaches for solving the curriculum-based university course timetabling problem, because it can affect the convergence speed and also the quality of the final solution. This paper explores combinations of graph heuristics in the construction approach to the curriculum-based course timetabling problem to produce a population of initial solutions. The graph heuristics were set as single heuristics and as combinations of two heuristics. In addition, several ways of assigning courses to rooms and timeslots are implemented. All heuristic settings are then tested on the same curriculum-based course timetabling problem instances and compared with each other in terms of the number of initial solutions produced. The results show that the combination of saturation degree followed by largest degree produces the highest number of initial solutions. The results from this study can be used in the improvement phase of algorithms that use a population of initial solutions.
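As a hedged illustration of the ordering that the record reports as best (saturation degree with a largest-degree tie-break), the sketch below greedily assigns conflicting courses to timeslots. The course names, conflict graph, and slot count are made-up assumptions, not data from the study:

```python
def greedy_timetable(conflicts, n_slots):
    """Greedy slot assignment: pick the course with the highest saturation
    degree (distinct slots already used by its conflicting neighbours),
    breaking ties by largest degree, then give it the first free slot."""
    slot = {}
    unassigned = set(conflicts)
    while unassigned:
        def saturation(c):
            return len({slot[n] for n in conflicts[c] if n in slot})
        course = max(unassigned,
                     key=lambda c: (saturation(c), len(conflicts[c])))
        used = {slot[n] for n in conflicts[course] if n in slot}
        free = [s for s in range(n_slots) if s not in used]
        slot[course] = free[0] if free else None  # None = unplaceable
        unassigned.remove(course)
    return slot

# Hypothetical instance: course A conflicts with B and C; 2 timeslots.
conflicts = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
print(greedy_timetable(conflicts, 2))
```

A `None` slot marks a course the greedy pass could not place, which is exactly the kind of partial solution a later improvement phase would repair.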

  5. Growth and decomposition of Lithium and Lithium hydride on Nickel

    DEFF Research Database (Denmark)

    Engbæk, Jakob; Nielsen, Gunver; Nielsen, Jane Hvolbæk

    2006-01-01

    In this paper we have investigated the deposition, structure and decomposition of lithium and lithium-hydride films on a nickel substrate. Using surface sensitive techniques it was possible to quantify the deposited Li amount, and to optimize the deposition procedure for synthesizing lithium-hydride films. By making only thin films of LiH it is possible to study the stability of these hydride layers and compare it directly with the stability of pure Li without having any transport phenomena or adsorbed oxygen to obscure the results. The desorption of metallic lithium takes place at a lower temperature than the decomposition of the lithium-hydride, confirming the high stability and sintering problems of lithium-hydride and making the storage potential a challenge. (c) 2006 Elsevier B.V. All rights reserved.

  6. Simon on Problem-Solving

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    Two of Herbert Simon's best-known papers are "The Architecture of Complexity" and "The Structure of Ill-Structured Problems." We discuss the neglected links between these two papers, highlighting the role of decomposition in the context of problems on which constraints have been imposed, as a general approach to problem solving. We apply these Simonian ideas to organizational issues, specifically new organizational forms. In particular, Simonian ideas allow us to develop a morphology of new organizational forms and to point to some design problems that characterize these forms. Keywords: Herbert Simon, problem-solving, new organizational forms. JEL Code: D23, D83

  7. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many $\\cal{NP}$-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
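The dynamic program that the record parallelizes over a tree decomposition can be illustrated, under simplifying assumptions, by its treewidth-1 special case: maximum weighted independent set on an ordinary tree, where each node keeps just two table entries (vertex in or out of the set). The graph and weights below are illustrative only:

```python
import sys

def mwis_tree(adj, w, root=0):
    """Maximum weighted independent set on a tree via post-order DP.
    dp(v) returns (best with v included, best with v excluded)."""
    sys.setrecursionlimit(10000)
    def dp(v, parent):
        incl, excl = w[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            i, e = dp(u, v)
            incl += e          # v in the set: children must be excluded
            excl += max(i, e)  # v out: each child takes its better option
        return incl, excl
    return max(dp(root, -1))

# Path 0-1-2 with weights 3, 10, 3: taking the middle vertex wins.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(mwis_tree(adj, [3, 10, 3]))  # → 10
```

On a general bounded-treewidth graph the same idea keeps one table entry per subset of a decomposition bag, which is the memory bottleneck the record's parallel algorithms target.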

  8. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of /sup 14/C-n-Hexadecane labelled diesel oil by special microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil an increase of the decomposition rate was found only in the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  9. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
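A minimal sketch of the Kruskal operator described above: the sum of outer products of corresponding columns of N matrices, i.e. the reconstruction step of a PARAFAC/CP model. The function name and the tiny rank-1 example are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def kruskal(*mats):
    """Sum of outer products of the r-th columns of each factor matrix:
    [[A, B, C]] = sum_r a_r o b_r o c_r (CP/PARAFAC reconstruction)."""
    rank = mats[0].shape[1]
    tensor = np.zeros(tuple(m.shape[0] for m in mats))
    for r in range(rank):
        outer = mats[0][:, r]
        for m in mats[1:]:
            outer = np.multiply.outer(outer, m[:, r])
        tensor += outer
    return tensor

A = np.array([[1.0], [2.0]])  # 2 x 1 factor matrices, rank 1
B = np.array([[3.0], [4.0]])
C = np.array([[5.0]])
X = kruskal(A, B, C)  # shape (2, 2, 1); X[i,j,k] = A[i,0]*B[j,0]*C[k,0]
```

Because the operator works directly on the factor matrices, no matricized (unfolded) representation of the tensor is ever formed, which is the conciseness the abstract highlights.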

  10. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  11. Chlorinated aliphatic and aromatic VOC decomposition in air mixture by using electron beam irradiation

    International Nuclear Information System (INIS)

    Chmielewski, A.G.; Sun Yongxia; Bulka, S.; Zimek, Z.

    2004-01-01

    Chlorinated aliphatic and aromatic hydrocarbons, which are emitted from coal power stations and waste incinerators, are very harmful to the environment and human health. Recent studies show that chlorinated aliphatic and aromatic hydrocarbons are suspected precursors of dioxin formation. Dioxin emission into the atmosphere causes severe environmental problems through ecological contamination. 1,4-dichlorobenzene (1,4-DCB) and cis-dichloroethylene (cis-DCE) were chosen as representative chlorinated aromatic and aliphatic compounds, respectively. Their decomposition was investigated under electron beam irradiation. The experiments were carried out in a batch system. It was found that over 97% of cis-DCE is decomposed at an initial concentration of 661 ppm. G-values of cis-DCE decomposition vary from 10 to 28 (molecules/100 eV) for initial concentrations of 270-1530 ppm cis-DCE. The decomposition is mainly caused by secondary electron attachment and Cl addition reactions. Compared with cis-DCE, 1,4-DCB decomposition needs a higher absorbed dose; the G-value of 1,4-DCB is below 4 molecules/100 eV

  12. Visual Design of User Interfaces by (De)composition

    OpenAIRE

    Lepreux, Sophie; Michotte, Benjamin; Vanderdonckt, Jean; 13th Int. Workshop on Design, Specification, and Verification of Interactive Systems DSV-IS

    2006-01-01

    Most existing graphical user interfaces are usually designed for a fixed context of use, thus making them rather difficult to modify for other contexts of use, such as for other users, other platforms, and other environments. This paper addresses this problem by introducing a new visual design method for graphical user interfaces referred to as “visual design by (de)composition". In this method, any individual or composite component of a graphical user interface is submitted to a series of o...

  13. Decomposition of tetrachloroethylene by ionizing radiation

    International Nuclear Information System (INIS)

    Hakoda, T.; Hirota, K.; Hashimoto, S.

    1998-01-01

    Decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information on the treatment of industrial off-gas. Model gases, air containing chloroethenes, were confined in batch reactors and irradiated with electron beams and gamma rays. The G-values of decomposition decreased in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene under electron beam irradiation and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene under gamma ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, G-values of decomposition under EB irradiation increased with the number of chlorine atoms per molecule, while those under gamma ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene under EB irradiation was the largest among all chloroethenes. In order to examine the effect of the initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with electron beams and gamma rays. The G-values of decomposition under both irradiations increased with the initial concentration; those under electron beam irradiation were two times larger than those under gamma ray irradiation

  14. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability

  15. Thermal decomposition of γ-irradiated lead nitrate

    International Nuclear Information System (INIS)

    Nair, S.M.K.; Kumar, T.S.S.

    1990-01-01

    The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs
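The Avrami-Erofeev model cited above describes the decomposed fraction as alpha(t) = 1 - exp(-(kt)^n). The short sketch below, with made-up rate parameters (not fitted values from the study), shows the model and the linearized form usually used to extract k and n from decomposition curves:

```python
import math

def avrami(t, k, n):
    """Avrami-Erofeev decomposed fraction: alpha(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - math.exp(-((k * t) ** n))

def linearized(alpha):
    """ln(-ln(1 - alpha)) = n ln k + n ln t: plotting this against ln t
    gives a straight line with slope n, from which k follows."""
    return math.log(-math.log(1.0 - alpha))

# Illustrative parameters only: k = 0.1 min^-1, n = 2.
alpha = avrami(10.0, 0.1, 2.0)  # = 1 - exp(-1), about 0.632
```

With these assumed parameters, t = 10 sits exactly where (kt)^n = 1, so the linearized value is zero, which is the intercept one reads off an Avrami plot.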

  16. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  17. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  18. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and from the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performances of the proposed approach are analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  19. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    Full Text Available In this paper, when the azimuth direction of polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data of agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  20. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    Science.gov (United States)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

    This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
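The POD step of such a hybrid method can be sketched, under simplifying assumptions, as a thin SVD of a snapshot matrix whose left singular vectors form the reduced basis. The synthetic random snapshots below stand in for flow-solver output; sizes and mode count are arbitrary choices:

```python
import numpy as np

# Synthetic snapshot matrix: 50 spatial points, 10 time snapshots (columns).
snapshots = np.random.default_rng(0).standard_normal((50, 10))

# POD via thin SVD: the left singular vectors are the POD modes,
# ordered by the energy captured (the singular values).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]              # retain the 3 most energetic modes
coords = basis.T @ snapshots  # reduced (modal) coordinates per snapshot
```

Projecting the governing equations onto `basis` is what yields the small set of modal-amplitude ODEs the abstract describes.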

  1. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems involve large and complicated design systems. Since multidisciplinary problems have hundreds of analyses and thousands of variables, the grouping of the analyses and the order of the analyses within each group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method

  2. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  3. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is the core problem in wireless sensor networks. It can be solved by powerful beacons, which are equipped with global positioning system devices to know their location information. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is approximately sparse, a weighted Centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and a lower requirement for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.

  4. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron-containing ores of the Ak-Arkhar Deposit of Tajikistan with mineral acids, including hydrochloric acid, was studied. The optimal conditions for the extraction of valuable components from the danburite composition were determined. The chemical composition of danburite of the Ak-Arkhar Deposit was determined as well. The kinetics of the decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the process of danburite decomposition by hydrochloric acid was calculated.

  5. Domain decomposition methods for flows in faulted porous media; Methodes de decomposition de domaine pour les ecoulements en milieux poreux failles

    Energy Technology Data Exchange (ETDEWEB)

    Flauraud, E.

    2004-05-01

    In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains, but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov-type algorithm (BICGSTAB). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)

  6. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000, some 15 years ago. After giving an update and presenting the key trends of the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing the LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments in index decomposition analysis are briefly reviewed.
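As a hedged one-sector sketch (not one of the eight models' full formulae), the additive LMDI-I decomposition of an energy change E = Q x I into an activity effect and an intensity effect uses the logarithmic mean as the weight; all numbers below are illustrative assumptions:

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(Q0, I0, QT, IT):
    """Additive LMDI-I for E = Q * I: split E_T - E_0 into two effects."""
    E0, ET = Q0 * I0, QT * IT
    L = logmean(ET, E0)
    dQ = L * math.log(QT / Q0)  # activity effect
    dI = L * math.log(IT / I0)  # intensity effect
    return dQ, dI

# Illustrative data: activity grows 100 -> 120, intensity falls 2.0 -> 1.8.
dQ, dI = lmdi_additive(100.0, 2.0, 120.0, 1.8)
# Perfect decomposition: dQ + dI equals E_T - E_0 exactly, with no residual.
```

The residual-free property shown in the last comment is the main reason the LMDI family dominates recent IDA practice.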

  7. Module theory endomorphism rings and direct sum decompositions in some classes of modules

    CERN Document Server

    Facchini, Alberto

    1998-01-01

    The purpose of this expository monograph is three-fold. First, the solution of a problem posed by Wolfgang Krull in 1932 is presented. He asked whether what is now called the "Krull-Schmidt Theorem" holds for artinian modules. A negative answer was published only in 1995 by Facchini, Herbera, Levy and Vámos. Second, the answer to a question posed by Warfield in 1975, namely, whether the Krull-Schmidt Theorem holds for serial modules, is described. Facchini published a negative answer in 1996. The solution to the Warfield problem shows an interesting behavior; in fact, it is a phenomenon so rare in the history of Krull-Schmidt type theorems that its presentation to a wider mathematical audience provides the third incentive for this monograph. Briefly, the Krull-Schmidt Theorem holds for some, but not all, classes of modules. When it does hold, any two indecomposable decompositions are uniquely determined up to one permutation. For serial modules the theorem does not hold, but any two indecomposable decompositions ...

  8. Calculation and decomposition of spot price using interior point nonlinear optimisation methods

    International Nuclear Information System (INIS)

    Xie, K.; Song, Y.H.

    2004-01-01

    Optimal pricing for real and reactive power is a very important issue in a deregulated environment. This paper formulates the optimal pricing problem as an extended optimal power flow problem. Then, spot prices are decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. A Primal-Dual Interior Point method is applied to avoid a 'go'/'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)

  9. Increased rainfall variability and N addition accelerate litter decomposition in a restored prairie.

    Science.gov (United States)

    Schuster, Michael J

    2016-03-01

    Anthropogenic nitrogen deposition and projected increases in rainfall variability (the frequency of drought and heavy rainfall events) are expected to strongly influence ecosystem processes such as litter decomposition. However, how these two global change factors interact to influence litter decomposition is largely unknown. I examined how increased rainfall variability and nitrogen addition affected mass and nitrogen loss of litter from two tallgrass prairie species, Schizachyrium scoparium and Solidago canadensis, and isolated the effects of each during plant growth and during litter decomposition. I increased rainfall variability by consolidating ambient rainfall into larger events and simulated chronic nitrogen deposition using a slow-release urea fertilizer. S. scoparium litter decay was more strongly regulated by the treatments applied during plant growth than by those applied during decomposition. During plant growth, increased rainfall variability resulted in S. scoparium litter that subsequently decomposed more slowly and immobilized more nitrogen than litter grown under ambient conditions, whereas nitrogen addition during plant growth accelerated subsequent mass loss of S. scoparium litter. In contrast, S. canadensis litter mass and N losses were enhanced under either N addition or increased rainfall variability both during plant growth and during decomposition. These results suggest that ongoing changes in rainfall variability and nitrogen availability are accelerating nutrient cycling in tallgrass prairies through their combined effects on litter quality, environmental conditions, and plant community composition.

  10. Programming Enhancements for Low Temperature Thermal Decomposition Workstation

    International Nuclear Information System (INIS)

    Igou, R.E.

    1998-01-01

    This report describes a new control-and-measurement system design for the Oak Ridge Y-12 Plant's Low Temperature Thermal Decomposition (LTTD) process. The new design addresses problems with system reliability stemming from equipment obsolescence and addresses specific functional improvements that plant production personnel have identified as required. The new design will also support new measurement techniques which the Y-12 Development Division has identified for future operations. The new techniques will function in concert with the original technique so that process data consistency is maintained

  11. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD is designed for computing singular values (SVs of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD.’’ In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal. This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs for ballistocardiogram (BCG data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
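The motivating observation of the record, that the SVD of a 1-by-m sample array yields a single singular value (the Euclidean norm of the signal) and thus discards the waveform, can be checked directly; the toy signal below is an assumption, not BCG data:

```python
import numpy as np

x = np.array([[3.0, 4.0, 0.0]])         # a 1-by-3 "signal"
s = np.linalg.svd(x, compute_uv=False)  # singular values only
# s has exactly one entry, equal to ||x||_2 = 5: the waveform itself is lost.
```

This is why TFM-SVD first packs time- and frequency-domain statistics of the signal into a fixed-structure matrix before taking singular values.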

  12. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
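    The abstract's central observation, one singular value from a 1-by-m sample array versus several from a fixed-structure moment matrix, can be sketched as follows; the specific moment layout below is our illustrative assumption, not the paper's published TFM-SVD matrix structure:

```python
import numpy as np

# SVD of a 1-by-m sample array yields a single singular value, which cannot
# describe the whole signal; a small fixed-structure matrix of time- and
# frequency-domain statistical moments yields several.
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 5 * t)                                # toy one-channel signal

sv_raw = np.linalg.svd(x.reshape(1, -1), compute_uv=False)   # exactly one value

def moments(v):
    # mean, spread, and higher central moments of a series
    c = v - v.mean()
    return [v.mean(), v.std(), (c ** 3).mean(), (c ** 4).mean()]

# stack moments of the time series and of its Fourier magnitude (assumed layout)
F = np.array([moments(x), moments(np.abs(np.fft.rfft(x)))])  # 2 x 4 feature matrix
feats = np.linalg.svd(F, compute_uv=False)                   # two SVs as features
```

The resulting singular values of `F` can then feed a classifier such as an ANN.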

  13. A Combined group EA-PROMETHEE method for a supplier selection problem

    Directory of Open Access Journals (Sweden)

    Hamid Reza Rezaee Kelidbari

    2016-07-01

    Full Text Available One of the important decisions which impacts all firms' activities is the supplier selection problem. Since the 1950s, several works have addressed this problem by treating different aspects and instances. In this paper, a combined multiple criteria decision making (MCDM) technique (EA-PROMETHEE) has been applied to support proper decision making. To this aim, after reviewing the theoretical background regarding supplier selection, extension analysis (EA) is used to determine the importance of the criteria, and PROMETHEE for the appraisal of suppliers based on these criteria. An empirical example illustrates the proposed approach.
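    The appraisal step can be conveyed with a minimal PROMETHEE II sketch; the suppliers, scores, and weights below are hypothetical, and the fixed weights stand in for what the paper derives with extension analysis (EA):

```python
# PROMETHEE II sketch: suppliers scored on two benefit criteria; in the paper the
# criterion weights come from extension analysis rather than being fixed by hand.
scores = {"S1": (9, 5), "S2": (7, 8), "S3": (6, 6)}
weights = (0.6, 0.4)

def pref(a, b):
    """Aggregated preference of a over b with the 'usual' (step) preference function."""
    return sum(w for w, fa, fb in zip(weights, scores[a], scores[b]) if fa > fb)

names = list(scores)
# net outranking flow: positive flow minus negative flow, averaged over rivals
phi = {a: sum(pref(a, b) - pref(b, a) for b in names if b != a) / (len(names) - 1)
       for a in names}
ranking = sorted(names, key=phi.get, reverse=True)   # PROMETHEE II complete ranking
```

Here the net flows favor the supplier that wins on the heavier-weighted criterion less often but loses rarely, which is exactly the trade-off the outranking flows capture.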

  14. Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2014-01-01

    Full Text Available The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, and thus a balancing domain decomposition method is enabled to solve the interface problem of the domain decomposition system. Numerical results are validated by comparing with the experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large scale computation.

  15. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of analysis of decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%.

  16. Enhanced decomposition of stable soil organic carbon and microbial catabolic potentials by long-term field warming.

    Science.gov (United States)

    Feng, Wenting; Liang, Junyi; Hale, Lauren E; Jung, Chang Gyo; Chen, Ji; Zhou, Jizhong; Xu, Minggang; Yuan, Mengting; Wu, Liyou; Bracho, Rosvel; Pegoraro, Elaine; Schuur, Edward A G; Luo, Yiqi

    2017-11-01

    Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion, for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed 95% of total cumulative CO2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition changed. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the

  17. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  18. Hydrogen production by photoelectrolytic decomposition of H2O using solar energy

    Science.gov (United States)

    Rauh, R. D.; Alkaitis, S. A.; Buzby, J. M.; Schiff, R.

    1980-01-01

    Photoelectrochemical systems for the efficient decomposition of water are discussed. Semiconducting d band oxides which would yield the combination of stability, low electron affinity, and moderate band gap essential for an efficient photoanode are sought. The materials PdO and Fe-xRhxO3 appear most likely. Oxygen evolution yields may also be improved by mediation of high energy oxidizing agents, such as CO3(-). Examination of several p type semiconductors as photocathodes revealed remarkable stability for p-GaAs, and also indicated p-CdTe as a stable H2 photoelectrode. Several potentially economical schemes for photoelectrochemical decomposition of water were examined, including photoelectrochemical diodes and two stage, four photon processes.

  19. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.

    Science.gov (United States)

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2017-04-22

    Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured in LogMAR (logarithm of the minimum angle of resolution) units. The living standard variable used for estimation of inequality was the economic status, constructed by principal component analysis on home assets. Inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decomposition. The percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors of this inequality were similar in the concentration index and Blinder-Oaxaca decomposition. So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people with low economic status, formulating policies to address
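    The concentration index the study decomposes can be computed with the convenient covariance formula; the values below are toy illustrative data, not the Shahroud measurements:

```python
# Concentration-index sketch on toy data: PVA in LogMAR (higher = worse vision),
# listed from the poorest to the richest participant (illustrative values only).
logmar = [0.5, 0.4, 0.3, 0.2, 0.1]
n = len(logmar)
rank = [(i + 0.5) / n for i in range(n)]   # fractional rank by economic status
mu = sum(logmar) / n

# population covariance between the health variable and the wealth rank
cov = sum(h * r for h, r in zip(logmar, rank)) / n - mu * sum(rank) / n
ci = 2 * cov / mu                          # convenient formula C = 2*cov(h, r)/mean(h)
```

A negative index means poorer vision is concentrated among the less well-off, matching the sign of the index reported in the abstract.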

  20. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly...... over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter...... combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many-pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...
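    The super-parameter construction can be sketched in a few lines; the dimensions (30 observations, 200 pilot points, 5 retained eigenvectors) and the random sensitivities are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((30, 200))    # hypothetical weighted sensitivity matrix:
                                      # 30 observations x 200 pilot-point parameters
U, w, Vt = np.linalg.svd(J, full_matrices=False)
k = 5                                 # eigenvectors kept for significant singular values
Vk = Vt[:k].T                         # (200 x k) parameter combinations

s = rng.standard_normal(k)            # "super parameters": in practice estimated by
                                      # nonlinear regression against the observations
p = Vk @ s                            # pilot-point values implied by the super parameters
```

The regression then works in the k-dimensional super-parameter space instead of the original 200-dimensional pilot-point space.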

  1. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    Science.gov (United States)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, and thus a relatively large number of ensemble trials are required to eliminate the residual noise. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method is proposed called complete ensemble local mean decomposition with adaptive noise (CELMDAN) to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and the obtained residue is used as input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison to ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults for rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.

  2. A parabolic velocity-decomposition method for wind turbines

    Science.gov (United States)

    Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.

    2017-02-01

    An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.
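    The Helmholtz split of a velocity field into irrotational and solenoidal parts can be illustrated on a periodic 2D grid with a spectral projection; this is a generic sketch of the decomposition idea, not the paper's parabolized solver or its particular scalar and vector potentials:

```python
import numpy as np

# Generic 2D Helmholtz split on a periodic grid: w = (gradient part) + (divergence-free part)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) + np.sin(Y)                  # synthetic secondary-velocity components
v = np.sin(X) * np.cos(Y)

k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                             # avoid division by zero at the mean mode

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div = KX * uh + KY * vh                    # spectral divergence (the i factors cancel below)
u_irr = np.real(np.fft.ifft2(KX * div / K2))   # irrotational (gradient-of-scalar) part
v_irr = np.real(np.fft.ifft2(KY * div / K2))
u_sol, v_sol = u - u_irr, v - v_irr        # solenoidal (divergence-free) remainder
```

The projection guarantees the solenoidal part is divergence-free and the irrotational part is curl-free, which is the property the paper's scalar/vector potential formulation exploits.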

  3. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
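    The unconditional LULHS idea, stratified LHS scores correlated through a triangular factor of the covariance, can be sketched as follows; the two-variable covariance and the seed are illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

def lulhs(cov, n, seed=1):
    """Unconditional LULHS sketch: LHS-stratified normal scores correlated through
    the lower-triangular (Cholesky/LU) factor of a target covariance matrix."""
    d = cov.shape[0]
    rng = np.random.default_rng(seed)
    # Latin hypercube: one uniform draw per stratum, independently permuted per variable
    strata = np.tile(np.arange(n), (d, 1))
    u = (rng.permuted(strata, axis=1) + rng.random((d, n))) / n
    u = np.clip(u, 1e-12, 1 - 1e-12)
    z = np.vectorize(NormalDist().inv_cdf)(u)   # independent stratified normal scores
    L = np.linalg.cholesky(cov)                 # imposes the target spatial correlation
    return L @ z                                # correlated realizations, shape (d, n)

cov = np.array([[1.0, 0.8], [0.8, 1.0]])        # toy two-point correlation model
field = lulhs(cov, 500)
```

With stratified marginals, fewer realizations are needed to reproduce the target correlation than with plain Monte Carlo, which is the efficiency gain the abstract reports.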

  4. A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.

    Science.gov (United States)

    Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K

    2018-04-21

    Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express a polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both dual- and triple-energy CT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 mg/mL and 1 mg/mL, respectively. TECT outperforms DECT for multi-contrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic
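    The core of the decomposition step can be sketched in a linearized two-material form; the paper minimizes a nonlinear least-squares mismatch iteratively, but with a linear forward model the same idea reduces to one least-squares solve. The attenuation coefficients below are illustrative, not measured data:

```python
import numpy as np

# Linearized two-material, two-energy sketch of material decomposition.
M = np.array([[0.26, 0.18],    # low-kV attenuation (1/cm):  [material A, material B]
              [0.20, 0.10]])   # high-kV attenuation (illustrative values)
true = np.array([0.7, 0.3])    # ground-truth material line integrals
meas = M @ true                # noise-free dual-energy "projections"

# recover the material line integrals by least squares
est, *_ = np.linalg.lstsq(M, meas, rcond=None)
```

Triple-energy data adds a third row to `M`, turning the square solve into an overdetermined least-squares problem, which is what makes TECT more robust for multi-contrast imaging.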

  5. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition.

    Science.gov (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng

    2017-10-02

    Fringe-based optical measurement techniques require reliable fringe analysis methods, where empirical mode decomposition (EMD) is an outstanding one due to its ability of analyzing complex signals and the merit of being data-driven. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), making the decomposed intrinsic mode functions (IMFs) have equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise exist in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; properties of the resulted BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated and the latter provides a feedback to the former, helping to automatically stop the decomposition to make the algorithm simpler and more reliable. A series of experiments show that the proposed method is accurate, efficient and robust to various fringe patterns even with poor quality, rendering it a potential tool for practical use.

  6. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (lambda = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  7. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or of sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products.

  8. Programming Enhancements for Low Temperature Thermal Decomposition Workstation

    Energy Technology Data Exchange (ETDEWEB)

    Igou, R.E.

    1998-10-01

    This report describes a new control-and-measurement system design for the Oak Ridge Y-12 Plant's Low Temperature Thermal Decomposition (LTTD) process. The new design addresses problems with system reliability stemming from equipment obsolescence and addresses specific functional improvements that plant production personnel have identified, as required. The new design will also support new measurement techniques, which the Y-12 Development Division has identified for future operations. The new techniques will function in concert with the original technique so that process data consistency is maintained.

  9. Active sites and mechanisms for H2O2 decomposition over Pd catalysts

    Science.gov (United States)

    Plauck, Anthony; Stangland, Eric E.; Dumesic, James A.; Mavrikakis, Manos

    2016-01-01

    A combination of periodic, self-consistent density functional theory (DFT-GGA-PW91) calculations, reaction kinetics experiments on a SiO2-supported Pd catalyst, and mean-field microkinetic modeling are used to probe key aspects of H2O2 decomposition on Pd in the absence of cofeeding H2. We conclude that both Pd(111) and OH-partially covered Pd(100) surfaces represent the nature of the active site for H2O2 decomposition on the supported Pd catalyst reasonably well. Furthermore, all reaction flux in the closed catalytic cycle is predicted to flow through an O–O bond scission step in either H2O2 or OOH, followed by rapid H-transfer steps to produce the H2O and O2 products. The barrier for O–O bond scission is sensitive to Pd surface structure and is concluded to be the central parameter governing H2O2 decomposition activity. PMID:27006504

  10. Combining fuzzy mathematics with fuzzy logic to solve business management problems

    Science.gov (United States)

    Vrba, Joseph A.

    1993-12-01

    Fuzzy logic technology has been applied to control problems with great success. Because of this, many observers feel that fuzzy logic is applicable only in the control arena. However, business management problems almost never deal with crisp values. Fuzzy systems technology--a combination of fuzzy logic, fuzzy mathematics and a graphical user interface--is a natural fit for developing software to assist in typical business activities such as planning, modeling and estimating. This presentation discusses how fuzzy logic systems can be extended through the application of fuzzy mathematics and the use of a graphical user interface to make the information contained in fuzzy numbers accessible to business managers. As demonstrated through examples from actual deployed systems, this fuzzy systems technology has been employed successfully to provide solutions to the complex real-world problems found in the business environment.
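    The fuzzy-mathematics side can be illustrated with triangular fuzzy numbers, a common representation for "soft" business estimates; the demand figures below are hypothetical and the centroid defuzzification is one standard choice among several:

```python
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy number (low, mode, high): a common shape for business estimates."""
    low: float
    mode: float
    high: float

    def __add__(self, other):
        # fuzzy addition: interval arithmetic on the three defining points
        return TriFuzzy(self.low + other.low, self.mode + other.mode, self.high + other.high)

    def centroid(self):
        # defuzzify to one crisp number for a manager-facing report
        return (self.low + self.mode + self.high) / 3.0

# hypothetical demand estimates from two regions: "90 to 120, most likely 100", etc.
total = TriFuzzy(90, 100, 120) + TriFuzzy(40, 50, 55)
```

Carrying the full (low, mode, high) triple through a plan preserves the uncertainty that crisp spreadsheet arithmetic discards, and the GUI layer the abstract mentions would present exactly these ranges.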

  11. Regularized generalized eigen-decomposition with applications to sparse supervised feature extraction and sparse discriminant analysis

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2015-01-01

    We propose a general technique for obtaining sparse solutions to generalized eigenvalue problems, and call it Regularized Generalized Eigen-Decomposition (RGED). For decades, Fisher's discriminant criterion has been applied in supervised feature extraction and discriminant analysis, and it is for...

  12. Adequacy assessment of mathematical models in the dynamics of litter decomposition in a tropical forest Mosaic Atlantic, in southeastern Brazil

    Directory of Open Access Journals (Sweden)

    FP. Nunes

    Full Text Available The study of litter decomposition and nutrient cycling is essential to know native forest structure and functioning. Mathematical models can help to understand local and temporal litter fall variations and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic, where 200 litterbags of 20×20 cm nylon screen of 2 mm, each with 10 grams of litter, were installed. Monthly from 09/2007 to 04/2009, 10 litterbags were removed for determination of the mass loss. We compared 3 nonlinear models: 1 – Olson Exponential Model (1963), which considers the constant K; 2 – Model proposed by Fountain and Schowalter (2004); 3 – Model proposed by Coelho and Borges (2005), which considers the variable K; the models were compared through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study by overestimating the decomposition rate. The decay curve analysis showed that the model with the variable K was more appropriate, although the values of QMR and DMA revealed no significant difference (p > 0.05) between the models. The analysis showed a better adjustment of DMA using variable K, reinforced by the values of the adjustment coefficient (R2). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant K model. This problem can be related to the non-linear fit of mass/time values to the variable K generated. The model with constant K was shown to be adequate to describe the decomposition curve for the areas separately, with good adjustability and without convergence problems. The results demonstrated the adequacy of the Olson model to estimate tropical forest litter decomposition. Although use of reduced number of parameters equaling the steps of the

  13. Adequacy assessment of mathematical models in the dynamics of litter decomposition in a tropical forest Mosaic Atlantic, in southeastern Brazil.

    Science.gov (United States)

    Nunes, F P; Garcia, Q S

    2015-05-01

    The study of litter decomposition and nutrient cycling is essential to know native forest structure and functioning. Mathematical models can help to understand local and temporal litter fall variations and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic, where 200 litterbags of 20 × 20 cm nylon screen of 2 mm, each with 10 grams of litter, were installed. Monthly from 09/2007 to 04/2009, 10 litterbags were removed for determination of the mass loss. We compared 3 nonlinear models: 1 - Olson Exponential Model (1963), which considers the constant K; 2 - Model proposed by Fountain and Schowalter (2004); 3 - Model proposed by Coelho and Borges (2005), which considers the variable K; the models were compared through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study by overestimating the decomposition rate. The decay curve analysis showed that the model with the variable K was more appropriate, although the values of QMR and DMA revealed no significant difference (p > 0.05) between the models. The analysis showed a better adjustment of DMA using variable K, reinforced by the values of the adjustment coefficient (R2). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant K model. This problem can be related to the non-linear fit of mass/time values to the variable K generated. The model with constant K was shown to be adequate to describe the decomposition curve for the areas separately, with good adjustability and without convergence problems. The results demonstrated the adequacy of the Olson model to estimate tropical forest litter decomposition. Although use of reduced number of parameters equaling the steps of the decomposition
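    Fitting the Olson (1963) constant-K model to litterbag data reduces to a log-linear regression; the masses below are synthetic noise-free values (generated with k = 0.12 per month), not the Rio Doce measurements:

```python
import math

# Olson (1963): X(t) = X0 * exp(-k t). Estimate the constant k from litterbag
# mass-loss data by least squares on the log-transformed masses.
months = [0, 3, 6, 9, 12, 15, 18]
mass = [10.0 * math.exp(-0.12 * t) for t in months]   # grams remaining (synthetic)

# log-linear least squares: ln X = ln X0 - k t, so k is minus the regression slope
n = len(months)
mt = sum(months) / n
my = sum(math.log(m) for m in mass) / n
k_hat = -sum((t - mt) * (math.log(m) - my) for t, m in zip(months, mass)) \
        / sum((t - mt) ** 2 for t in months)
```

With real (noisy) litterbag data the same slope estimate gives the decay constant, and comparing residual statistics such as QMR across candidate models is then straightforward.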

  14. DOLOMITE THERMAL-DECOMPOSITION MACROKINETIC MODELS FOR EVALUATION OF THE GASGENERATORS SORBENT SYSTEMS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Full Text Available Employing dolomite as a sorbent for generator-gas purification is of considerable interest nowadays, as impurities in the generator gas cause the major problem for creating cheap and effective co-generator plants. Designing gas purification systems requires simple but physically adequate macrokinetic models of dolomite thermal decomposition. The paper analyzes peculiarities of several contemporary models of dolomite and calcite thermal decomposition and argues for creating compact engineering macrokinetic models of dolomite decomposition, together with universal techniques for reconstructing these models' parameters for specific dolomite samples. Such techniques can be based on thermogravimetric data and standard approximation-error-minimizing algorithms. The author assumes that CO2 evacuation from the reaction zone within the particle may proceed by a diffusion mechanism and/or by Darcy filtration, and indicates that the functional dependence of the thermal-decomposition rate on the particle size and the temperature differs between these mechanisms. The paper formulates four macrokinetic models whose correspondence is verified against the experimental data. The author concludes that further work in this direction should proceed with investigation of the dolomite samples and selection of the best approximation model describing the experimental data in a wide range of temperatures, heating rates and particle sizes.
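    The parameter-reconstruction step the abstract calls for can be sketched with a first-order Arrhenius macrokinetic form; the temperatures and rate constants below are hypothetical stand-ins for values derived from thermogravimetric runs:

```python
import math

# First-order macrokinetic sketch: decomposition rate k(T) = A * exp(-E / (R * T)).
# Reconstruct the activation energy E and pre-exponential factor A from isothermal
# rate constants at two temperatures (hypothetical values, not fitted dolomite data).
R = 8.314                       # J/(mol K)
T1, T2 = 950.0, 1000.0          # K
k1, k2 = 0.02, 0.06             # 1/s, assumed isothermal rate constants

E = R * math.log(k2 / k1) / (1 / T1 - 1 / T2)   # activation energy, J/mol
A = k1 * math.exp(E / (R * T1))                 # pre-exponential factor, 1/s
```

With more than two temperatures (or non-isothermal thermogravimetric curves), the same relation becomes an over-determined fit solved by the error-minimizing algorithms the paper mentions.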

  15. FY 1998 annual report on the decomposition/removal of harmful compounds in the gaseous phase by porous membrane provided with a catalytic function; 1998 nendo shokubai kinotsuki fuyo takomaku ni yoru kisochu yugai busshitsu no bunkai jokyo chosa hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Harmful compounds, e.g., dioxins and nitrogen oxides, released into the air are causing increasingly severe environmental problems on a global scale. In order to solve these problems, it is necessary to efficiently remove the released compounds in the vicinity of living environments, while preventing, as far as possible, their formation at the sources. An attempt has been made to develop porous membranes impregnated with composites of a variety of metallic oxides showing activities as photocatalysts and for dark reactions by the ion engineering method, in order to drastically solve the above problems. Described herein are the FY 1998 results. Thin films of various titanium oxide crystals (anatase, rutile, and their combinations) are formed on Si substrates by the ion engineering method, as the photocatalysts for decomposition of aldehyde and water (for hydrogen production), to validate the optimum crystalline structures for the photocatalysis. Porous bodies of Ni and carbon are also impregnated with anatase TiO{sub 2} for decomposition of harmful gaseous compounds and water, to validate the effects of the porous membranes provided with catalytic functions. (NEDO)

  16. ATC calculation with steady-state security constraints using Benders decomposition

    International Nuclear Information System (INIS)

    Shaaban, M.; Yan, Z.; Ni, Y.; Wu, F.; Li, W.; Liu, H.

    2003-01-01

    Available transfer capability (ATC) is an important indicator of the usable amount of transmission capacity accessible by assorted parties for commercial trading. ATC calculation is nontrivial when steady-state security constraints are included. In this paper, the Benders decomposition method is proposed to partition the ATC problem with steady-state security constraints into a base case master problem and a series of subproblems relevant to various contingencies to include their impacts on ATC. The mathematical model is formulated and two solution schemes are presented. Computer testing on a 4-bus system and the IEEE 30-bus system shows the effectiveness of the proposed method and the solution schemes. (Author)

  17. Three-dimensional decomposition models for carbon productivity

    International Nuclear Information System (INIS)

    Meng, Ming; Niu, Dongxiao

    2012-01-01

    This paper presents decomposition models for the change in carbon productivity, which is considered a key indicator that reflects contributions to the control of greenhouse gases. The carbon productivity differential was used as the starting point of the decomposition. After integrating the differential equation and designing the Log Mean Divisia Index equations, a three-dimensional absolute decomposition model for carbon productivity was derived. Using this model, the absolute change of carbon productivity was decomposed into a summation of the absolute quantitative influences of each industrial sector, for each influence factor (technological innovation and industrial structure adjustment) in each year. Furthermore, the relative decomposition model was built using a similar process. Finally, these models were applied to demonstrate the decomposition process in China. The decomposition results reveal several important conclusions: (a) technological innovation plays a far more important role than industrial structure adjustment; (b) industry and export trade exhibit great influence; (c) assigning the responsibility for CO2 emission control to local governments, optimizing the structure of exports, and eliminating backward industrial capacity are highly essential to further increase China's carbon productivity. -- Highlights: ► Using the change of carbon productivity to measure a country's contribution. ► Absolute and relative decomposition models for carbon productivity are built. ► The change is decomposed into the quantitative influences along three dimensions. ► Decomposition results can be used for improving a country's carbon productivity.
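    The Log Mean Divisia Index machinery underlying these models can be sketched for a toy two-sector economy. The sector shares and productivities below are hypothetical; the point is that the additive LMDI decomposition splits the change in aggregate carbon productivity exactly into a structure effect and a technology effect.

```python
import math

def logmean(a, b):
    """Logarithmic mean used by the LMDI method."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Hypothetical two-sector example: aggregate carbon productivity V = sum_i S_i * P_i,
# where S_i is sector i's activity share and P_i its own carbon productivity.
S0 = {"industry": 0.6, "services": 0.4}   # base year shares
P0 = {"industry": 1.2, "services": 3.0}   # base year productivities
S1 = {"industry": 0.5, "services": 0.5}   # target year shares
P1 = {"industry": 1.5, "services": 3.3}   # target year productivities

V0 = sum(S0[i] * P0[i] for i in S0)
V1 = sum(S1[i] * P1[i] for i in S1)

# Additive LMDI: each sector's contribution uses the logarithmic mean of its
# start and end terms, so the effects sum exactly to the total change.
structure_effect = sum(
    logmean(S1[i] * P1[i], S0[i] * P0[i]) * math.log(S1[i] / S0[i]) for i in S0
)
technology_effect = sum(
    logmean(S1[i] * P1[i], S0[i] * P0[i]) * math.log(P1[i] / P0[i]) for i in S0
)
```

    The exact additivity (no residual term) is the property that makes LMDI popular in index decomposition analysis.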

  18. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate

  19. An Efficient Combined Meta-Heuristic Algorithm for Solving the Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Majid Yousefikhoshbakht

    2016-08-01

    Full Text Available The traveling salesman problem (TSP) is one of the most important NP-hard problems and probably the most famous and extensively studied problem in the field of combinatorial optimization. In this problem, a salesman is required to visit each of n given nodes once and only once, starting from any node and returning to the original place of departure. This paper presents an efficient evolutionary optimization algorithm, called MICALK, developed by combining the imperialist competitive algorithm and the Lin-Kernighan algorithm in order to solve the TSP. The MICALK is tested on 44 TSP instances involving from 24 to 1655 nodes from the literature; the best known solutions of 26 of the benchmark problems are also found by our algorithm. Furthermore, the performance of MICALK is compared with several metaheuristic algorithms, including GA, BA, IBA, ICA, GSAP, ABO, PSO and BCO, on 32 instances from TSPLIB. The results indicate that MICALK performs well and is quite competitive with the above algorithms.
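    As a hedged nod to the local-search half of MICALK, the sketch below applies plain 2-opt moves (a much simpler relative of the Lin-Kernighan moves used in the paper) to a random tour over random points in the unit square.

```python
import math
import random

random.seed(3)

# 30 random cities in the unit square (illustrative instance, not a TSPLIB one).
pts = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(pts)))     # initial tour: cities in index order
start_len = tour_length(tour)

improved = True
while improved:                  # repeat until no 2-opt move helps
    improved = False
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            a, b = tour[i - 1], tour[i]
            c, d = tour[j], tour[(j + 1) % len(tour)]
            # Reversing tour[i:j+1] replaces edges (a,b),(c,d) with (a,c),(b,d).
            if (dist(pts[a], pts[c]) + dist(pts[b], pts[d])
                    < dist(pts[a], pts[b]) + dist(pts[c], pts[d]) - 1e-12):
                tour[i:j + 1] = reversed(tour[i:j + 1])
                improved = True

final_len = tour_length(tour)
```

    Lin-Kernighan generalizes this idea to variable-depth sequences of such edge exchanges, which is what gives it much stronger local optima than plain 2-opt.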

  20. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl obtained by methods described in the text was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  1. Singular Value Decomposition and Ligand Binding Analysis

    Directory of Open Access Journals (Sweden)

    André Luiz Galo

    2013-01-01

    Full Text Available Singular value decomposition (SVD) is one of the most important computations in linear algebra because of its vast applications in data analysis. It is particularly useful for resolving problems involving least-squares minimization, the determination of matrix rank, and the solution of certain problems involving Euclidean norms. Such problems arise in the spectral analysis of ligand binding to macromolecules. Here, we present a spectral data analysis method using SVD (SVD analysis) and nonlinear fitting to determine the binding characteristics of intercalating drugs to DNA. This methodology reduces noise and identifies distinct spectral species, similar to traditional principal component analysis, as well as fitting nonlinear binding parameters. We applied SVD analysis to investigate the interaction of actinomycin D and daunomycin with native DNA. This methodology does not require prior knowledge of ligand molar extinction coefficients (free and bound), which potentially limits binding analysis. Data are analyzed simply by reconstructing the experimental data and by adjusting the product of the deconvoluted matrices and the matrix of model coefficients determined by the Scatchard and McGhee-von Hippel equations.
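    The rank-truncation step at the core of such an SVD analysis can be sketched as follows. The absorbance matrix below is synthetic: two Gaussian "spectra" standing in for free and bound species, mixed along a hypothetical titration and corrupted with noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic absorbance data: 200 wavelengths x 15 titration points, built from
# two spectral species (stand-ins for free and bound ligand) plus noise.
wavelengths = np.linspace(0, 1, 200)
spec_free = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
spec_bound = np.exp(-((wavelengths - 0.6) ** 2) / 0.02)
fraction_bound = np.linspace(0.0, 1.0, 15)
D = (np.outer(spec_free, 1 - fraction_bound)
     + np.outer(spec_bound, fraction_bound)
     + 0.01 * rng.standard_normal((200, 15)))

# SVD: D = U S Vt. The number of singular values that stand clearly above the
# noise floor estimates the number of distinct spectral species; truncating
# to that rank removes most of the noise.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
rank = 2
D_denoised = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```

    In the actual binding analysis, the truncated factors are then fitted to a binding model (e.g., a McGhee-von Hippel isotherm) by nonlinear least squares.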

  2. Application of hierarchical matrices for partial inverse

    KAUST Repository

    Litvinenko, Alexander

    2013-11-26

    In this work we combine hierarchical matrix techniques (Hackbusch, 1999) and domain decomposition methods to obtain fast and efficient algorithms for the solution of multiscale problems. This combination results in the hierarchical domain decomposition (HDD) method, which can be applied to the solution of multiscale problems. Multiscale problems are problems that require the use of different length scales. Using only the finest scale is very expensive, if not impossible, in computational time and memory. Domain decomposition methods decompose the complete problem into smaller systems of equations corresponding to boundary value problems in subdomains. Then fast solvers can be applied to each subdomain. Subproblems in subdomains are independent, much smaller, and require fewer computational resources than the initial problem.
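    As a minimal illustration of the domain decomposition ingredient (a toy alternating Schwarz iteration, not the HDD method itself), the sketch below solves a 1D Poisson problem on two overlapping subdomains, each solved independently using the other's current boundary values.

```python
import numpy as np

# Solve -u'' = 1 on [0,1] with u(0) = u(1) = 0 by alternating Schwarz
# iteration over two overlapping subdomains [0, 0.6] and [0.4, 1.0].
n = 101
x = np.linspace(0, 1, n)
h = x[1] - x[0]
u = np.zeros(n)

def solve_subdomain(u, lo, hi):
    """Direct finite-difference solve on grid slice [lo, hi], using the
    current values u[lo] and u[hi] as Dirichlet boundary data."""
    m = hi - lo - 1                     # number of interior unknowns
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = np.ones(m)
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(30):                     # Schwarz sweeps
    solve_subdomain(u, 0, 60)           # left subdomain
    solve_subdomain(u, 40, 100)         # right subdomain

exact = x * (1 - x) / 2                 # exact solution of -u'' = 1
err = np.max(np.abs(u - exact))
```

    Because the subdomain solves are independent except through the overlap, the same structure parallelizes naturally, which is what HDD exploits at scale with hierarchical-matrix subdomain solvers.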

  3. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor product.

  4. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    Directory of Open Access Journals (Sweden)

    David Camarena-Martinez

    2014-01-01

    Full Text Available Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis during the startup transient of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the overall methodology implementation into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.

  5. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    Science.gov (United States)

    Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

    Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure to implement the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function is presented; besides, it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feed forward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis during the startup transient of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, the overall methodology implementation into a field-programmable gate array (FPGA) allows an online and real-time operation, thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; besides, the high precision and minimum resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281
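    One sifting pass of EMD, the operation these records implement in hardware with full cubic splines, can be sketched in software as follows. This is a simplified illustration (a fixed number of sifting passes, no stopping criterion, naive end handling), not the FPGA design.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Two-tone test signal: a fast 50 Hz oscillation riding on a slow 5 Hz one.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

def sift_once(s, t):
    """One EMD sifting pass: subtract the mean of the cubic-spline envelopes."""
    maxima = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    minima = np.where((s[1:-1] < s[:-2]) & (s[1:-1] < s[2:]))[0] + 1
    upper = CubicSpline(t[maxima], s[maxima])(t)   # envelope through maxima
    lower = CubicSpline(t[minima], s[minima])(t)   # envelope through minima
    return s - (upper + lower) / 2                 # candidate IMF

imf = signal
for _ in range(8):      # a few sifting iterations instead of a stopping criterion
    imf = sift_once(imf, t)
```

    After sifting, `imf` approximates the fast 50 Hz component (the first intrinsic mode function); subtracting it from the signal and repeating would yield the next IMF.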

  6. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.
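    The extreme learning machine itself is compact enough to sketch: a random, fixed hidden layer plus output weights solved in closed form. The paper trains the network with crisscross optimization instead; the toy regression task and network size below are our assumptions, used only to show the ELM structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for a wind-power sub-series: fit sin(x).
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 40
W = rng.normal(size=(1, n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)        # random biases (never trained)

# ELM: only the output weights beta are learned, in closed form.
H = np.tanh(X @ W + b)                            # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # least-squares output weights

y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

    The closed-form solve is what makes ELM training fast; the paper's contribution is to replace gradient-free tuning of such networks with the CSO metaheuristic to avoid premature convergence.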

  7. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature space decomposition for classification. When large-scale datasets are present, typical approaches usually employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...

  8. Spectral combination of spherical gravitational curvature boundary-value problems

    Science.gov (United States)

    Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel

    2018-04-01

    Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5×10⁻¹⁷ m⁻¹s⁻² in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with a standard deviation of 2.104 m²s⁻², but the root mean square error is the largest and reaches 9.734 m²s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m²s⁻². The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error.

  9. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    Science.gov (United States)

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
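    The accumulated-degree-day scale on which TBS-based PMI estimates are expressed can be illustrated with a minimal sketch. The temperatures are hypothetical, and the 0 °C base used here is one common convention (base temperatures vary between protocols).

```python
# Accumulated degree days (ADD): the running sum of daily average temperatures
# above a base of 0 °C. Days at or below the base contribute nothing, reflecting
# the assumption that decomposition effectively pauses in freezing conditions.
daily_mean_temps_c = [12.0, 15.5, 9.0, -2.0, 4.5, 18.0, 21.5]  # hypothetical week

add = sum(max(0.0, temp) for temp in daily_mean_temps_c)
# 12.0 + 15.5 + 9.0 + 0 + 4.5 + 18.0 + 21.5 = 80.5 ADD
```

    Expressing the PMI in ADD rather than calendar days is what lets decomposition stages be compared across seasons and climates.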

  10. Unsupervised neural networks for solving Troesch's problem

    International Nuclear Information System (INIS)

    Raja Muhammad Asif Zahoor

    2014-01-01

    In this study, stochastic computational intelligence techniques are presented for the solution of Troesch's boundary value problem. The proposed stochastic solvers use the competency of a feed-forward artificial neural network for mathematical modeling of the problem in an unsupervised manner, whereas the learning of unknown parameters is made with local and global optimization methods as well as their combinations. Genetic algorithm (GA) and pattern search (PS) techniques are used as the global search methods and the interior point method (IPM) is used for an efficient local search. Combinations of techniques like GA hybridized with IPM (GA-IPM) and PS hybridized with IPM (PS-IPM) are also applied to solve different forms of the equation. A comparison of the proposed results obtained from GA, PS, IPM, PS-IPM and GA-IPM has been made with the standard solutions, including the well known analytic techniques of the Adomian decomposition method, the variational iteration method and the homotopy perturbation method. The reliability and effectiveness of the proposed schemes, in terms of accuracy and convergence, are evaluated from the results of statistical analysis based on sufficiently large independent runs. (interdisciplinary physics and related areas of science and technology)

  11. High-purity Cu nanocrystal synthesis by a dynamic decomposition method

    OpenAIRE

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-01-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics is researched via differential sca...

  12. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    Science.gov (United States)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose, organic matter, they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  13. Decomposition of tetra-alkylammonium thiomolybdates characterised by thermoanalysis and mass spectrometry

    International Nuclear Information System (INIS)

    Poisot, M.; Bensch, W.; Fuentes, S.; Alonso, G.

    2006-01-01

    The decomposition pattern of tetraalkyl-tetrathiomolybdates with the general formula (R4N)2MoS4 (with R increasing from methyl to heptyl) was determined by means of differential thermal analysis (DTA), thermogravimetric analysis (TGA) and mass spectrometry (MS) techniques. The complexity of the thermal decomposition reactions increases with the size of the R4N group. Prior to decomposition, at least one phase transition seems to occur in all complexes. The onset of thermal reactions was also a function of the tetra-alkylammonium precursor. All compounds decompose without forming sulfur-rich MoS2+x intermediates. For R = methyl to pentyl precursors the MoS2 produced was nearly stoichiometric; however, for R = hexyl and heptyl the S content was significantly reduced, with a Mo:S ratio of about 1.5. The carbon and hydrogen residual contents in the product increased with the number of C atoms in R4N; for N contamination no clear trend was obvious. SEM images show that the formation of macro-pores was also a function of the alkyl group in R4N. The MoS2 materials obtained show a sponge-like morphology. Results of DSC experiments in combination with in situ X-ray diffraction also revealed the complex thermal behavior of (R4N)2MoS4 materials; reversible and irreversible phase transitions and glass-like transformations were identified in the low temperature range (35-140 °C), before the onset of decomposition.

  14. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news" which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  15. The proper generalized decomposition for advanced numerical simulations a primer

    CERN Document Server

    Chinesta, Francisco; Leygue, Adrien

    2014-01-01

    Many problems in scientific computing are intractable with classical numerical techniques. These fail, for example, in the solution of high-dimensional models due to the exponential increase of the number of degrees of freedom. Recently, the authors of this book and their collaborators have developed a novel technique, called Proper Generalized Decomposition (PGD) that has proven to be a significant step forward. The PGD builds by means of a successive enrichment strategy a numerical approximation of the unknown fields in a separated form. Although first introduced and successfully demonstrated in the context of high-dimensional problems, the PGD allows for a completely new approach for addressing more standard problems in science and engineering. Indeed, many challenging problems can be efficiently cast into a multi-dimensional framework, thus opening entirely new solution strategies in the PGD framework. For instance, the material parameters and boundary conditions appearing in a particular mathematical mod...

  16. Image Watermarking Algorithm Based on Multiobjective Ant Colony Optimization and Singular Value Decomposition in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2013-01-01

    Full Text Available We present a new optimal watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the multiple scaling factors (MSFs) is a difficult problem. However, a multiobjective ant colony optimization is used to determine these values. Experimental results show much improved performances of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of high probability of false positive detection of the watermarks.
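    The SVD-embedding step of such schemes can be sketched in isolation. The DWT stage and the MOACO search for optimal scaling factors are omitted; a single hypothetical scaling factor alpha stands in for the multiple scaling factors the paper optimizes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: a random block playing the role of a DWT detail subband of the
# host image, and a random binary watermark.
host = rng.random((64, 64))
watermark = (rng.random((64, 64)) > 0.5).astype(float)

Uh, sh, Vth = np.linalg.svd(host)
Uw, sw, Vtw = np.linalg.svd(watermark)

# Embedding: add the watermark's singular values, scaled by alpha, to the
# host's singular values and rebuild the block.
alpha = 0.05
watermarked = Uh @ np.diag(sh + alpha * sw) @ Vth

# Extraction (with the original singular values as side information)
# reverses the embedding.
s_marked = np.linalg.svd(watermarked, compute_uv=False)
sw_extracted = (s_marked - sh) / alpha
```

    Because singular values are stable under small perturbations, this kind of embedding tends to survive common attacks; alpha (or the MSFs) sets the transparency/robustness trade-off.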

  17. An investigation on thermal decomposition of DNTF-CMDB propellants

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)

    2007-12-15

    The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
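    The Kissinger method used above to obtain the kinetic parameters can be sketched on synthetic data: peak temperatures are generated from an assumed first-order reaction (the A and Ea values below are hypothetical), and the activation energy is then recovered from the slope of ln(beta/Tp^2) versus 1/Tp.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
Ea_true = 160e3  # activation energy, J/mol (hypothetical)
A = 1.0e13       # pre-exponential factor, 1/s (hypothetical)

def peak_temperature(beta):
    """DSC/TG peak temperature for heating rate beta: solve the first-order
    peak condition Ea*beta/(R*Tp^2) = A*exp(-Ea/(R*Tp)) by bisection."""
    lo, hi = 300.0, 1500.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        f = Ea_true * beta / (R * mid**2) - A * math.exp(-Ea_true / (R * mid))
        lo, hi = (mid, hi) if f > 0 else (lo, mid)
    return 0.5 * (lo + hi)

betas = [5 / 60, 10 / 60, 15 / 60, 20 / 60]   # heating rates, K/s
Tp = [peak_temperature(b) for b in betas]

# Kissinger plot: ln(beta/Tp^2) against 1/Tp is linear with slope -Ea/R.
xs = [1 / t for t in Tp]
ys = [math.log(b / t**2) for b, t in zip(betas, Tp)]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((xv - xbar) * (yv - ybar) for xv, yv in zip(xs, ys))
         / sum((xv - xbar) ** 2 for xv in xs))
Ea_est = -slope * R
```

    With real PDSC data the measured peak temperatures at several heating rates replace the synthetic ones, and the same linear fit yields Ea and, from the intercept, A.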

  18. CFD SIMULATION FOR DEMILITARIZATION OF RDX IN A ROTARY KILN BY THERMAL DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    SI H. LEE

    2017-06-01

Full Text Available Demilitarization requires the recovery and disposal of obsolete ammunition and explosives. Since open burning/detonation of hazardous waste has caused serious environmental and safety problems, thermal decomposition has emerged as one of the most feasible methods. RDX is widely used as a military explosive due to its high melting temperature and detonation power. In this work, the conditions under which explosives can be safely incinerated have been investigated via a rotary kiln simulation. To solve this problem, the phase change and the reactions of RDX have been analyzed in detail. A global reaction mechanism consisting of condensed-phase and gas-phase reactions is used in the computational fluid dynamics simulation. User Defined Functions in FLUENT are utilized in this study to incorporate the reactions and phase change into the simulation. The results reveal the effect of temperature and the varying amounts of gas produced in the rotary kiln during the thermal decomposition of RDX, and suggest the prospect of demilitarizing waste explosives while avoiding the possibility of detonation.

  19. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  20. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    Science.gov (United States)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

Frequency hopping (FH) signals are widely adopted by military communications as a kind of low probability of interception signal, so it is important to study FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) was proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution, and correspondingly the accuracy of FH signal detection can be improved.
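The wavelet-denoising stage can be illustrated with a single-level Haar transform and soft thresholding; the HHT/EMD detection stage is omitted here, and the toy hopping signal and universal-threshold rule are our own choices rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
# toy frequency-hopping signal: the carrier hops every 64 samples
freqs = np.repeat([0.05, 0.12, 0.08, 0.20], 64)
clean = np.sin(2*np.pi*freqs*t)
noisy = clean + 1.0*rng.standard_normal(n)

# one level of the orthonormal Haar wavelet transform
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# universal-threshold soft shrinkage of the detail coefficients
sigma = np.median(np.abs(detail)) / 0.6745       # robust noise estimate
thr = sigma * np.sqrt(2*np.log(n))
detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)

# inverse transform
denoised = np.empty(n)
denoised[0::2] = (approx + detail) / np.sqrt(2)
denoised[1::2] = (approx - detail) / np.sqrt(2)
```

A multi-level decomposition with a smoother wavelet (as in practice) would preserve more of the high-frequency hop structure than this one-level Haar sketch.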

  1. Finding all real roots of a polynomial by matrix algebra and the Adomian decomposition method

    Directory of Open Access Journals (Sweden)

    Hooman Fatoorehchi

    2014-10-01

Full Text Available In this paper, we put forth a combined method for calculating all real zeros of a polynomial equation through the Adomian decomposition method equipped with a number of theorems developed from matrix algebra. These auxiliary theorems are associated with the eigenvalues of matrices and enable convergence of the Adomian decomposition method toward different real roots of the target polynomial equation. To further improve the computational speed of our technique, a nonlinear convergence accelerator known as the Shanks transform has optionally been employed. For the sake of illustration, a number of numerical examples are given.
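The Shanks transform used as the convergence accelerator has a compact closed form; a sketch applying it (twice) to the slowly converging alternating series for ln 2, which is a standard illustration rather than an example from the paper:

```python
import math

def shanks(a):
    """Shanks transformation S(a_n) = (a_{n+1} a_{n-1} - a_n^2) / (a_{n+1} + a_{n-1} - 2 a_n)."""
    return [(a[i+1]*a[i-1] - a[i]**2) / (a[i+1] + a[i-1] - 2*a[i])
            for i in range(1, len(a) - 1)]

# partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
partial = [sum((-1)**(k+1) / k for k in range(1, m+1)) for m in range(1, 12)]
twice = shanks(shanks(partial))

print(abs(partial[-1] - math.log(2)))   # plain partial sum: error ~ 4e-2
print(abs(twice[-1] - math.log(2)))     # after two Shanks passes: far smaller
```

Each pass shortens the sequence by two entries but accelerates the convergence dramatically, which is why it pairs well with slowly converging Adomian series.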

  2. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

reduced model space, thereby allowing the recalculation of system matrices at every time step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage over the original POD method for variable Dirichlet boundaries. We have developed another extension for POD that addresses these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
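The core POD step referenced here (cf. Siade et al., 2010) is a truncated SVD of a snapshot matrix followed by Galerkin projection of the system operator; a minimal sketch on a synthetic linear system, with all dimensions and matrices illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 200, 40, 5                    # state dimension, snapshots, reduced dimension

# synthetic snapshot matrix of exact rank r (decaying mode amplitudes)
X = (rng.standard_normal((n, r)) * np.array([10, 5, 2, 1, 0.5])) @ rng.standard_normal((r, m))

# POD basis: leading left singular vectors of the snapshot matrix
Phi = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Galerkin projection of a linear model dx/dt = A x onto the POD subspace
A = -np.eye(n) + 0.01*rng.standard_normal((n, n))
A_r = Phi.T @ A @ Phi                   # r x r reduced operator

# snapshots lie in the POD subspace, so projection loses (almost) nothing
x0 = X[:, 0]
err = np.linalg.norm(x0 - Phi @ (Phi.T @ x0)) / np.linalg.norm(x0)
```

The DEIM and left-hand-side extensions discussed in the abstract modify how `A_r` is (re)assembled for non-linear operators; the projection step itself stays the same.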

  3. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
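The underlying low-rank-plus-sparse model (without the paper's tree-structured and Laplacian regularizers) can be sketched by alternating proximal steps: singular value thresholding for the low-rank background and soft thresholding for the sparse salient part. Data and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
L0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))   # low-rank "background"
S0 = np.zeros((n, n))                                            # sparse "salient" part
mask = rng.random((n, n)) < 0.05
S0[mask] = 8 * np.sign(rng.standard_normal(mask.sum()))
M = L0 + S0

tau, lam = 10.0, 2.0
soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)

def svt(X, t):            # singular value thresholding = prox of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

L, S, objs = np.zeros_like(M), np.zeros_like(M), []
for _ in range(50):
    L = svt(M - S, tau)   # exact minimization over L with S fixed
    S = soft(M - L, lam)  # exact minimization over S with L fixed
    objs.append(0.5*np.linalg.norm(M - L - S)**2
                + tau*np.linalg.svd(L, compute_uv=False).sum()
                + lam*np.abs(S).sum())
```

Each sweep solves one block exactly, so the objective is monotonically non-increasing; the paper's structured regularizers replace the plain l1 prox with structured shrinkage operators.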

  4. Performance of a combined cooling heating and power system with mid-and-low temperature solar thermal energy and methanol decomposition integration

    International Nuclear Information System (INIS)

    Xu, Da; Liu, Qibin; Lei, Jing; Jin, Hongguang

    2015-01-01

Highlights: • A new middle-and-low temperature solar thermochemical CCHP system is proposed. • The thermodynamic performances of the new system are numerically evaluated. • The superiorities of the new system are demonstrated. - Abstract: In this paper, a new distributed energy system that integrates the mid-and-low temperature solar thermochemical process and methanol decomposition is proposed. Through the solar energy receiver/reactor, the energy collected by a parabolic trough concentrator at 200–300 °C is used to drive the decomposition reaction of methanol into synthesis gas, and thus the solar thermal energy is converted to chemical energy. The chemical energy of the synthesis gas released in the combustion chamber of a micro gas turbine is used to drive the combined cooling, heating and power system. Energy analysis and exergy analysis of the system are implemented to evaluate the feasibility of the proposed system. Taking into account the changes of the solar irradiation intensity, the off-design performance of the micro turbine and the variations of the load, the design and off-design thermodynamic performances of the system and the characteristics of the chemical energy storage are numerically studied. Numerical results indicate that the primary energy ratio of the system is 76.40%, and the net solar-to-electricity conversion rate reaches 22.56%, which is higher than that of existing large-scale solar thermal power plants. Owing to the introduction of solar thermochemical energy storage in the proposed system, the power generation efficiency is insensitive to variations of the solar radiation, and thus an efficient and stable utilization approach for solar thermal energy is achieved under all working conditions.

  5. Exogenous nutrients and carbon resource change the responses of soil organic matter decomposition and nitrogen immobilization to nitrogen deposition

    Science.gov (United States)

    He, Ping; Wan, Song-Ze; Fang, Xiang-Min; Wang, Fang-Chao; Chen, Fu-Sheng

    2016-01-01

It is unclear whether exogenous nutrients and carbon (C) additions alter substrate immobilization of deposited nitrogen (N) during decomposition. In this study, we used laboratory microcosm experiments and 15N isotope tracer techniques with five different treatments, including N addition, N+non-N nutrients addition, N+C addition, N+non-N nutrients+C addition and a control, to investigate the coupling effects of non-N nutrients, C addition and N deposition on forest floor decomposition in subtropical China. The results indicated that N deposition inhibited soil organic matter and litter decomposition by 66% and 38%, respectively. Soil-immobilized 15N following N addition was the lowest among treatments. Litter-immobilized 15N following N addition was significantly higher and lower than that of the combined treatments during the early and late decomposition stages, respectively. Both soil and litter extractable mineral N were lower in the combined treatments than in the N addition treatment. Since soil N immobilization and litter N release were respectively enhanced and inhibited with elevated non-N nutrient and C resources, it can be speculated that the N leaching due to N deposition decreases with increasing nutrient and C resources. This study should advance our understanding of how forests respond to elevated N deposition. PMID:27020048

  6. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.

  7. Probability problems in seismic risk analysis and load combinations for nuclear power plants

    International Nuclear Information System (INIS)

    George, L.L.

    1983-01-01

This workshop describes some probability problems in power plant reliability and maintenance analysis. The problems are seismic risk analysis, loss-of-load probability, load combinations, and load sharing. The seismic risk problem is to compute power plant reliability given an earthquake and the resulting risk. A component survives if its peak random response to the earthquake does not exceed its strength. Power plant survival is a complicated Boolean function of component failures and survivals. The responses and strengths of components are dependent random processes, and the peak responses are maxima of random processes. The resulting risk is the expected cost of power plant failure.

  8. Influence of different fertilizer supplements on decomposition of cereal stubble remains in chernozem soil

    Science.gov (United States)

    Nikolaev, I. V.; Klein, O. I.; Kulikova, N. A.; Stepanova, E. V.; Koroleva, O. V.

    2009-04-01

Introduction Recently, many farmers have converted to low-disturbance tillage, as disking or plowing fields can result in water and wind erosion of soil, so crop residue and plant crowns and roots are left to hold the soil. However, low-disturbance tillage can be a challenge to manage, since the key to crop production still requires good seed-to-soil contact. Therefore, in situ decomposition of stubble in agricultural soils is a pressing issue in modern agriculture. The aim of the present study was to compare different organic and inorganic fertilizer supplements for the decomposition of cereal stubble remains in chernozem soil. Materials and methods Field trials were conducted in Krasnodar region, Russia. To promote stubble decomposition, a biopreparation consisting of the culture liquid obtained during cultivation of the white-rot fungus Coriolus hirsutus 075 (Wulf Ex. Fr.) Quel. was used at a dosage of 150 ml/ha. The other tested supplements included ammonium nitrate (34 kg/ha), the commercially available humate Lignohumate™ (0.2 kg/ha) and a combination of Lignohumate and the biopreparation. Test plots were treated once after wheat harvesting. A non-treated ploughed plot was used as a blank. Soil samples were collected 2 and 14 weeks after soil treatment. To monitor the soil's potential for decomposing stubble remains, enzymatic activity in soil was determined. To perform soil analysis, stubble remains were carefully separated from soils, followed by soil extraction with 0.14 M phosphate buffer pH 7.1 and analysis of the extracts for laccase and peroxidase activities [1,2]. Estimation of stubble decomposition in soil was performed by determination of cellulose content [3]. Results and discussion The obtained results demonstrated that after 14 weeks of treatment an increase of soil enzymatic activity due to soil supplementation was observed. Introduction of ammonium nitrate resulted in 108% of peroxidase activity as compared to the blank.
That value for Lignohumate variant was estimated

  9. A decomposition method for network-constrained unit commitment with AC power flow constraints

    International Nuclear Information System (INIS)

    Bai, Yang; Zhong, Haiwang; Xia, Qing; Kang, Chongqing; Xie, Le

    2015-01-01

To meet the increasingly high requirements of smart grid operations, considering AC power flow constraints in the NCUC (network-constrained unit commitment) is of great significance in terms of both security and economy. This paper proposes a decomposition method to solve NCUC with AC power flow constraints. With conic approximations of the AC power flow equations, the master problem is formulated as a MISOCP (mixed integer second-order cone programming) model. The key advantage of this model is that the active power and reactive power are co-optimised, and the transmission losses are considered. With the AC optimal power flow model, the AC feasibility of the UC result of the master problem is checked in subproblems. If infeasibility is detected, feedback constraints are generated based on the sensitivity of bus voltages to a change in the unit reactive power generation. They are then introduced into the master problem in the next iteration until all AC violations are eliminated. A 6-bus system, a modified IEEE 30-bus system and the IEEE 118-bus system are used to validate the performance of the proposed method, which provides a satisfactory solution with approximately 44-fold greater computational efficiency. - Highlights: • A decomposition method is proposed to solve the NCUC with AC power flow constraints. • The master problem considers active power, reactive power and transmission losses. • OPF-based subproblems check the AC feasibility using parallel computing techniques. • An effective feedback constraint interacts between the master problem and subproblem. • Computational efficiency is significantly improved with satisfactory accuracy.

  10. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  11. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  12. Time space domain decomposition methods for reactive transport - Application to CO2 geological storage

    International Nuclear Information System (INIS)

    Haeberlein, F.

    2011-01-01

Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods, which have proved to be high-performing algorithms in many fields of application, is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two-species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method to the presented multi-species reactive transport system. (author)
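The geometric idea behind Schwarz-type methods can be shown on the simplest possible case: a classical alternating Schwarz iteration for a 1D Poisson problem with two overlapping subdomains. This is the basic Dirichlet-transmission variant, not the optimised waveform-relaxation methods studied in the thesis:

```python
import numpy as np

# -u'' = 1 on (0,1), u(0) = u(1) = 0; exact solution u = x(1-x)/2
n = 101
x = np.linspace(0, 1, n)
h = x[1] - x[0]
u = np.zeros(n)

def solve_dirichlet(i0, i1, left, right):
    """Solve -u'' = 1 on grid points i0..i1 with given Dirichlet boundary values."""
    m = i1 - i0 - 1                       # number of interior unknowns
    A = (np.diag(2*np.ones(m)) - np.diag(np.ones(m-1), 1)
         - np.diag(np.ones(m-1), -1)) / h**2
    b = np.ones(m)
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

mid_l, mid_r = 40, 60                     # overlap: grid indices 40..60
for _ in range(30):                       # alternating Schwarz sweeps
    u[1:mid_r] = solve_dirichlet(0, mid_r, 0.0, u[mid_r])        # left subdomain
    u[mid_l+1:n-1] = solve_dirichlet(mid_l, n-1, u[mid_l], 0.0)  # right subdomain

exact = x*(1 - x)/2
print(np.max(np.abs(u - exact)))          # converges to the global solution
```

Each subdomain solve uses the latest trace of its neighbour as a Dirichlet transmission condition; the overlap width controls the geometric convergence rate, which is what the optimised transmission conditions in the thesis improve upon.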

  13. Spectral decomposition of single-tone-driven quantum phase modulation

    International Nuclear Information System (INIS)

    Capmany, Jose; Fernandez-Pousa, Carlos R

    2011-01-01

Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F1, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F1 is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.

  14. Spectral decomposition of single-tone-driven quantum phase modulation

    Energy Technology Data Exchange (ETDEWEB)

    Capmany, Jose [ITEAM Research Institute, Univ. Politecnica de Valencia, 46022 Valencia (Spain); Fernandez-Pousa, Carlos R, E-mail: c.pousa@umh.es [Signal Theory and Communications, Department of Physics and Computer Science, Univ. Miguel Hernandez, 03202 Elche (Spain)

    2011-02-14

Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F1, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F1 is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.
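The multiples-of-ℏΩ energy ladder has a simple classical counterpart via the Jacobi-Anger expansion: the weight of the n-th sideband of a single-tone phase modulator is Jn(m)², with m the modulation index. A quick numerical check (the modulation index is chosen arbitrarily):

```python
import numpy as np
from scipy.special import jv

m = 2.7                      # modulation index (peak phase deviation)
n = np.arange(-60, 61)       # sideband orders

# Jacobi-Anger: exp(i m sin(Omega t)) = sum_n J_n(m) exp(i n Omega t)
weights = jv(n, m)**2        # weight of an energy change of n * hbar * Omega
print(weights.sum())         # ≈ 1: the scattering is unitary
```

The weights sum to one (unitarity) and are symmetric in n, so the mean energy shift vanishes; larger modulation indices spread the photon over more sidebands.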

  15. Dominant pole placement with fractional order PID controllers: D-decomposition approach.

    Science.gov (United States)

    Mandić, Petar D; Šekara, Tomislav B; Lazarević, Mihailo P; Bošković, Marko

    2017-03-01

Dominant pole placement is a useful technique designed to deal with the problem of controlling high-order or time-delay systems with a low-order controller such as the PID controller. This paper solves this problem using the D-decomposition method. Its straightforward analytic procedure makes the method extremely powerful and easy to apply. The technique is applicable to a wide range of transfer functions: with or without time-delay, rational and non-rational ones, and those describing distributed parameter systems. In order to control as many different processes as possible, a fractional order PID controller is introduced as a generalization of the classical PID controller. As a consequence, it provides additional parameters for better adjusting system performance. The design method presented in this paper tunes the parameters of the PID and fractional PID controller in order to obtain a good load disturbance response, with constraints on the maximum sensitivity and the sensitivity to measurement noise. Good set-point response is also one of the design goals of this technique. Numerous examples taken from the process industry are given, and the D-decomposition approach is compared with other PID optimization methods to show its effectiveness. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
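For a PI controller the D-decomposition boundary has a closed form: setting s = jω in the characteristic equation 1 + C(s)G(s) = 0 and splitting into real and imaginary parts yields Kp(ω) and Ki(ω). A sketch for an illustrative first-order-plus-dead-time process (not one of the paper's examples, and without the fractional-order generalization):

```python
import numpy as np

# process: first-order plus dead time, G(s) = exp(-L s) / (T s + 1)
Lag, T = 1.0, 5.0

# on the stability boundary, C(j w) = Kp - j Ki/w must equal -1/G(j w)
omega = np.linspace(1e-3, 2.0, 500)
Ginv = (1j*omega*T + 1) * np.exp(1j*omega*Lag)   # 1/G(j w)
Kp = np.real(-Ginv)                              # real part gives Kp(w)
Ki = omega * np.imag(Ginv)                       # from -Ki/w = Im(-1/G)

# (Kp(w), Ki(w)) traces the curve separating stable and unstable
# regions of the controller-parameter plane
print(Kp[:3], Ki[:3])
```

For a fractional PI^λ controller the same split applies with (jω)^λ in place of jω, which is how the extra fractional parameter reshapes the stability region.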

  16. Industrial Application of Topology Optimization for Combined Conductive and Convective Heat Transfer Problems

    DEFF Research Database (Denmark)

    Zhou, Mingdong; Alexandersen, Joe; Sigmund, Ole

    2016-01-01

This paper presents an industrial application of topology optimization for combined conductive and convective heat transfer problems. The solution is based on a synergy of computer-aided design and engineering software tools from Dassault Systemes. The considered physical problem of steady-state heat transfer under convection is simulated using SIMULIA-Abaqus. A corresponding topology optimization feature is provided by SIMULIA-Tosca. By following a standard workflow of design optimization, the proposed solution is able to accommodate practical design scenarios and results in efficient

  17. Decomposition Technique for Remaining Useful Life Prediction

    Science.gov (United States)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of the different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and future damage accumulation. Remaining life is computed by subtracting the instant when the prediction is made from the instant when the extrapolated damage reaches the failure threshold.
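The two-map decomposition can be sketched with plain regression: one map from a sensed feature to damage, one from operating condition to damage rate, then on-line extrapolation to the failure threshold. All data, model forms and thresholds below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# --- off-line: learn the two maps from historical run-to-failure data ---
# map 1: feature -> damage (damage grows with a sensed feature, plus noise)
feat_hist = np.linspace(0, 1, 50)
dmg_hist = 0.8*feat_hist**2 + 0.01*rng.standard_normal(50)
f2d = np.polyfit(feat_hist, dmg_hist, 2)            # feature-to-damage map

# map 2: operating condition -> damage rate per cycle (~ quadratic in load)
load_hist = np.array([0.5, 1.0, 1.5, 2.0])
rate_hist = np.array([0.001, 0.004, 0.009, 0.016])
c2r = np.polyfit(load_hist, rate_hist, 2)           # condition-to-rate map

# --- on-line: estimate current damage, then extrapolate to the threshold ---
feature_now, future_load, threshold = 0.6, 1.2, 0.8
damage_now = np.polyval(f2d, feature_now)
rate = np.polyval(c2r, future_load)
rul_cycles = (threshold - damage_now) / rate
print(damage_now, rul_cycles)
```

Keeping the two maps separate is what lets the uncertainty in state estimation (map 1) be tracked independently from the uncertainty in future usage (map 2).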

  18. Approximate motion integrals and the quantum chaos problem

    International Nuclear Information System (INIS)

    Bunakov, V.E.; Ivanov, I.B.

    2001-01-01

The problem of the occurrence of, and search for, motion integrals in stationary quantum mechanics and its relation to quantum chaos is discussed. The decomposition of quantum numbers is studied and a criterion of chaos is derived. The convergence method is applied to search for the motion integrals, and approximate integrals are derived for the Hénon-Heiles problem. The problem of the compatibility of chaos and integrability is discussed [ru]

  19. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

The aim of this work is to understand the mechanisms involved in the decomposition of glasses by water and the consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass: R7T7. The chemical composition of this glass being very complicated, it is difficult to know the influence of the different elements on the decomposition kinetics and on the resulting morphology, because several atoms behave in the same way. Glasses with simplified compositions (only 5 elements) have therefore been synthesized. The morphological and structural characteristics of these glasses are given. They have then been decomposed by water. The leaching curves do not reflect the decomposition kinetics but the solubility of the different elements at every moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) heavy-element solubilization. Two types of decomposition layer have also been revealed, according to the glass heavy-element content. (O.M.)

  20. New simultaneous thermogravimetry and modulated molecular beam mass spectrometry apparatus for quantitative thermal decomposition studies

    International Nuclear Information System (INIS)

    Behrens, R. Jr.

    1987-01-01

A new type of instrument has been designed and constructed to measure quantitatively the gas-phase species evolving during thermal decomposition. These measurements can be used for understanding the kinetics of thermal decomposition, determining the heats of formation and vaporization of high-temperature materials, and analyzing sample contaminants. The new design allows measurements to be made on the same time scale as the rates of the reactions being studied, provides a universal detection technique to study a wide range of compounds, gives quantitative measurements of decomposition products, and minimizes interference from the instrument on the measurements. The instrument design is based on a unique combination of thermogravimetric analysis (TGA), differential thermal analysis (DTA), and modulated-beam mass spectrometry (MBMS), which are brought together into a symbiotic relationship through the use of differentially pumped vacuum systems, modulated molecular beam techniques, and computer control and data-acquisition systems. A data analysis technique that calculates partial pressures in the reaction cell from the simultaneous microbalance force measurements and the modulated mass spectrometry measurements has been developed. This eliminates the need to know the ionization cross section, the ion dissociation channels, the quadrupole transmission, and the ion detector sensitivity for each thermal decomposition product prior to quantifying the mass spectral data. The operation of the instrument and the data analysis technique are illustrated with the thermal decomposition of contaminants from a precipitated palladium powder.

  1. Utilizing Problem Structure in Optimization of Radiation Therapy

    International Nuclear Information System (INIS)

    Carlsson, Fredrik

    2008-04-01

    In this thesis, optimization approaches for intensity-modulated radiation therapy are developed and evaluated with focus on numerical efficiency and treatment delivery aspects. The first two papers deal with strategies for solving fluence map optimization problems efficiently while avoiding solutions with jagged fluence profiles. The last two papers concern optimization of step-and-shoot parameters with emphasis on generating treatment plans that can be delivered efficiently and accurately. In the first paper, the problem dimension of a fluence map optimization problem is reduced through a spectral decomposition of the Hessian of the objective function. The weights of the eigenvectors corresponding to the p largest eigenvalues are introduced as optimization variables, and the impact on the solution of varying p is studied. Including only a few eigenvector weights results in faster initial decrease of the objective value, but with an inferior solution, compared to optimization of the bixel weights. An approach combining eigenvector weights and bixel weights produces improved solutions, but at the expense of the pre-computational time for the spectral decomposition. So-called iterative regularization is performed on fluence map optimization problems in the second paper. The idea is to find regular solutions by utilizing an optimization method that is able to find near-optimal solutions with non-jagged fluence profiles in few iterations. The suitability of a quasi-Newton sequential quadratic programming method is demonstrated by comparing the treatment quality of deliverable step-and-shoot plans, generated through leaf sequencing with a fixed number of segments, for different number of bixel-weight iterations. A conclusion is that over-optimization of the fluence map optimization problem prior to leaf sequencing should be avoided. An approach for dynamically generating multileaf collimator segments using a column generation approach combined with optimization of
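The spectral-decomposition idea in the first paper (optimizing the weights of the p dominant Hessian eigenvectors instead of all bixel weights) can be sketched on a convex quadratic stand-in for the fluence map objective; all matrices here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 10

# convex quadratic stand-in: f(w) = 0.5 w^T H w - b^T w, H symmetric PD
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n*np.eye(n)
b = rng.standard_normal(n)

# spectral decomposition: keep eigenvectors of the p largest eigenvalues
evals, evecs = np.linalg.eigh(H)
V = evecs[:, -p:]                       # n x p basis of the dominant subspace

# optimize over the p eigenvector weights y (w = V y): the reduced problem
# 0.5 y^T (V^T H V) y - (V^T b)^T y is tiny and solved exactly
y = np.linalg.solve(V.T @ H @ V, V.T @ b)
w_reduced = V @ y

w_full = np.linalg.solve(H, b)          # full bixel-weight optimum
f = lambda w: 0.5*w @ H @ w - b @ w
print(f(w_reduced) >= f(w_full))        # reduced solution is suboptimal but cheap
```

The restricted optimum can never beat the full one, which mirrors the paper's finding: few eigenvector weights give fast initial progress but an inferior final objective, motivating the combined eigenvector-plus-bixel approach.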

  2. Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems

    International Nuclear Information System (INIS)

    Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.

    2001-01-01

    The neutron transport equation is simulated using a parallel Monte Carlo method for a deep penetration neutron transport problem. The Monte Carlo simulation is parallelized using three different techniques: direct parallelization, domain decomposition, and domain decomposition with load balancing, all implemented with PVM (Parallel Virtual Machine) software on a LAN (Local Area Network). Results of the parallel simulation are given for various model problems, and the performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed.
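
    Of the three techniques, direct parallelization is the simplest: each worker runs an independent batch of histories with its own random stream, and the batch means are averaged. A toy sketch for a purely absorbing slab (so the transmission probability is exp(-d), with d in mean free paths); the batches run sequentially here rather than over PVM, and all parameters are arbitrary:

```python
import math
import numpy as np

THICKNESS = 3.0   # slab thickness in mean free paths

def batch_transmission(seed, histories):
    # One "worker": sample exponential free-flight distances; a neutron
    # whose flight exceeds the slab thickness escapes (no scattering modeled).
    rng = np.random.default_rng(seed)
    flights = rng.exponential(1.0, histories)
    return float(np.mean(flights > THICKNESS))

# Direct parallelization: independent random streams, one per worker,
# combined by averaging the equal-sized batch means.
estimates = [batch_transmission(seed, 100_000) for seed in range(4)]
p_hat = float(np.mean(estimates))
p_true = math.exp(-THICKNESS)      # analytic transmission probability
```

    Because the batches are statistically independent, the combined estimator behaves exactly like one long serial run — which is why direct parallelization scales so well for deep-penetration tallies, until variance reduction couples the streams.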

  3. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    Science.gov (United States)

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.
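
    For contrast, the conventional CSD-based CMD that the abstract improves upon is an explicit eigenvalue problem: sample the cross-spectral density on a grid and diagonalize it; the eigenvectors are the coherent modes and the eigenvalues their weights. A sketch for a Gaussian Schell-model source (grid and source parameters are arbitrary choices):

```python
import numpy as np

# Gaussian Schell-model cross-spectral density W(x1, x2)
sigma, xi = 1.0, 0.5                  # beam size and coherence length (arbitrary)
x = np.linspace(-4, 4, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")
W = np.exp(-(X1**2 + X2**2) / (4 * sigma**2)) \
    * np.exp(-(X1 - X2) ** 2 / (2 * xi**2))

dx = x[1] - x[0]
evals, evecs = np.linalg.eigh(W * dx)  # Hermitian eigenproblem = conventional CMD
weights = evals[::-1]                  # mode weights, descending
modes = evecs[:, ::-1]                 # columns are discretized mode functions
frac = weights / weights.sum()         # normalized weight of each coherent mode
```

    The cumulative sum of `frac` shows how many modes are needed to represent the field; it is this diagonalization step, costly for non-separable CSDs, that the proposed Wigner-function method avoids.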

  4. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO){sub 6}) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO){sub 6} complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO){sub 6} and W(CO){sub 6} are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO){sub 6} is suggested that is sensitive enough to benchmark our theoretical understanding of bond stability in carbonyl compounds of the heaviest elements.

  5. Relationship of host recurrence in fungi to rates of tropical leaf decomposition

    Science.gov (United States)

    Mirna E. Santana; Jean D. Lodge; Patricia Lebow

    2004-01-01

    Here we explore the significance of fungal diversity for ecosystem processes by testing whether microfungal 'preferences' (i.e., host recurrence) for different tropical leaf species increase the rate of decomposition. We used pairwise combinations of gamma-irradiated litter of five tree species with cultures of two dominant microfungi derived from each plant in a microcosm...

  6. Diversity has stronger top-down than bottom-up effects on decomposition.

    Science.gov (United States)

    Srivastava, Diane S; Cardinale, Bradley J; Downing, Amy L; Duffy, J Emmett; Jouseau, Claire; Sankaran, Mahesh; Wright, Justin P

    2009-04-01

    The flow of energy and nutrients between trophic levels is affected by both the trophic structure of food webs and the diversity of species within trophic levels. However, the combined effects of trophic structure and diversity on trophic transfer remain largely unknown. Here we ask whether changes in consumer diversity have the same effect as changes in resource diversity on rates of resource consumption. We address this question by focusing on consumer-resource dynamics for the ecologically important process of decomposition. This study compares the top-down effect of consumer (detritivore) diversity on the consumption of dead organic matter (decomposition) with the bottom-up effect of resource (detrital) diversity, based on a compilation of 90 observations reported in 28 studies. We did not detect effects of either detrital or consumer diversity on measures of detrital standing stock, and effects on consumer standing stock were equivocal. However, our meta-analysis indicates that reductions in detritivore diversity result in significant reductions in the rate of decomposition. Detrital diversity has both positive and negative effects on decomposition, with no overall trend. This difference between top-down and bottom-up effects of diversity is robust to different effect size metrics and could not be explained by differences in experimental systems or designs between detritivore and detrital manipulations. Our finding that resource diversity has no net effect on consumption in "brown" (detritus-consumer) food webs contrasts with previous findings from "green" (plant-herbivore) food webs and suggests that effects of plant diversity on consumption may fundamentally change after plant death.

  7. Relaxations to Sparse Optimization Problems and Applications

    Science.gov (United States)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we

  8. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    Science.gov (United States)

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one recently described in the literature for form E (Halasz and Dinnebier, 2010, J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base results comparable to the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å³. The previously unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P2₁, which, for comparison purposes, we treated in the nonstandard setting P112₁ with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å³. The structure presents two complete moieties in the asymmetric unit (Z = 4, Z' = 2). The different phases obtained in the two decompositions are readily explained considering the diverse genesis of the processes. Copyright © 2010 Wiley-Liss, Inc.

  9. Microbiological decomposition of bagasse after radiation pasteurization

    International Nuclear Information System (INIS)

    Ito, Hitoshi; Ishigaki, Isao

    1987-01-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulolytic fungi, such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride, were used to compare the decomposition of crude fibers. In alkali-untreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. By contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, comparable to the Pleurotus species or C. cinereus. Other mushroom species such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment at 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms. (author)

  11. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment.
That is, the system is able to react
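
    The control flow described above — an activation condition, ordered gated decompositions, and recursive execution of subgoals — can be captured in a few lines. A hypothetical minimal sketch (all class and function names are invented; a real system would add parameter passing, parallel subgoals, and servo-level leaves):

```python
class Decomposition:
    def __init__(self, gate, subgoals):
        self.gate = gate              # gating condition: state -> bool
        self.subgoals = subgoals      # subgoals run in order if the gate holds

class Goal:
    def __init__(self, name, activation=lambda s: True,
                 decompositions=(), action=None):
        self.name = name
        self.activation = activation  # global condition for all decompositions
        self.decompositions = decompositions
        self.action = action          # leaf goals act on the state directly

    def execute(self, state):
        if not self.activation(state):
            return False              # activation condition failed
        if self.action is not None:
            return self.action(state)
        for d in self.decompositions:  # first decomposition whose gate is true
            if d.gate(state):
                return all(g.execute(state) for g in d.subgoals)
        return False

# Tiny example: orient an instrument, reactively choosing slew vs. hold.
def slew(s):
    s["pointing"] = s["target"]
    return True

orient = Goal("orient", decompositions=[
    Decomposition(lambda s: s["pointing"] != s["target"],
                  [Goal("slew", action=slew)]),
    Decomposition(lambda s: True,
                  [Goal("hold", action=lambda s: True)]),
])

state = {"pointing": 10.0, "target": 42.0}
ok = orient.execute(state)            # slews, leaving pointing == target
```

    Re-executing the same goal on the updated state takes the hold branch instead — the reactivity the abstract describes comes entirely from re-evaluating the gates against the current state.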

  12. An Integrated Approach to the Ground Crew Rostering Problem with Work Patterns

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Hansen, Anders Dohn; Range, Troels Martin

    This paper addresses the Ground Crew Rostering Problem with Work Patterns, an important manpower planning problem arising in the ground operations of airline companies. We present a cutting-stock-based integer programming formulation of the problem and describe a powerful decomposition approach...

  13. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Nonsupervised pattern recognition is addressed, and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) with additional information, supplied by the pattern class membership values, apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature on fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the suggested method are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)
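
    The membership values emphasized in this entry are what fuzzy clustering algorithms expose directly. The sketch below is the textbook fuzzy c-means iteration — shown only as a compact relative of the affinity-decomposition method described above, not Backer's algorithm itself — on synthetic data:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """X: (n, d) data. Returns (c, d) centers and (c, n) memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))          # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=0)                     # hard assignments, if wanted
```

    The matrix U carries exactly the extra information the abstract highlights: points near a cluster center have memberships near 1, while borderline points split their membership between clusters.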

  14. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    Wind power plays an important role in promoting the development of the low-carbon economy and the energy transformation of the world. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing stable wind power integration and guaranteeing the safe operation of the power system. In this paper, an improved wind speed prediction model (EMD-RBF-LS-SVM) is proposed, combining Empirical Mode Decomposition (EMD), the Radial Basis Function neural network (RBF) and the Least Squares Support Vector Machine (LS-SVM). The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken the random fluctuation to a certain extent and significantly improve the short-term accuracy of wind speed prediction. In short, this research will significantly reduce the impact of wind power instability on the power grid, help ensure the grid's supply and demand balance, reduce operating costs in grid-connected systems, and enhance the market competitiveness of wind power.
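
    The hybrid scheme follows a decompose-predict-recombine pattern: split the speed series into components, fit a separate predictor to each (RBF and LS-SVM in the paper), and sum the component forecasts. The sketch below keeps that shape but substitutes a crude moving-average split for a real EMD and a damped-persistence predictor for the learners — stand-ins chosen to stay self-contained, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
wind = 8 + 2 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.8, t.size)

def moving_average(x, k):
    # edge-padded running mean, same length as x
    pad = np.concatenate([np.full(k // 2, x[0]), x, np.full(k - 1 - k // 2, x[-1])])
    return np.convolve(pad, np.ones(k) / k, mode="valid")

# Stand-in for EMD: a slow component plus a fast residual. Like EMD's
# IMFs plus residue, the components must sum back to the series exactly.
slow = moving_average(wind, 25)
fast = wind - slow

def one_step_forecast(x):
    # damped persistence fitted by least squares: x[t+1] ~ a * x[t]
    a = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return a * x[-1]

# Component-wise prediction, then recombination
forecast = one_step_forecast(slow) + one_step_forecast(fast)
```

    The benefit claimed in the abstract comes from this separation: each component is smoother or more stationary than the raw series, so each sub-model has an easier prediction task.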

  15. Domain decomposition and multilevel integration for fermions

    International Nuclear Information System (INIS)

    Ce, Marco; Giusti, Leonardo; Schaefer, Stefan

    2016-01-01

    The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximate domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.

  16. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  17. The Core Problem within a Linear Approximation Problem $AX \approx B$ with Multiple Right-Hand Sides

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, Martin; Strakoš, Z.

    2013-01-01

    Roč. 34, č. 3 (2013), s. 917-931 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Grant - others:GA ČR(CZ) GA201/09/0917; GA MŠk(CZ) EE2.3.09.0155; GA MŠk(CZ) EE2.3.30.0065 Program:GA Institutional support: RVO:67985807 Keywords : total least squares problem * multiple right-hand sides * core problem * linear approximation problem * error-in-variables modeling * orthogonal regression * singular value decomposition Subject RIV: BA - General Mathematics Impact factor: 1.806, year: 2013

  18. Sensitivity of decomposition rates of soil organic matter with respect to simultaneous changes in temperature and moisture

    Science.gov (United States)

    Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.

    2015-03-01

    The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.

  19. Detection of Crossing White Matter Fibers with High-Order Tensors and Rank-k Decompositions

    KAUST Repository

    Jiao, Fangxiang; Gur, Yaniv; Johnson, Chris R.; Joshi, Sarang

    2011-01-01

    Fundamental to high angular resolution diffusion imaging (HARDI) is the estimation of a positive-semidefinite orientation distribution function (ODF) and the extraction of the diffusion properties (e.g., fiber directions). In this work we show that these two goals can be achieved efficiently by using homogeneous polynomials to represent the ODF in the spherical deconvolution approach, as was proposed in the Cartesian Tensor-ODF (CT-ODF) formulation. Based on this formulation we first suggest an estimation method for positive-semidefinite ODFs by solving a linear programming problem that does not require special parameterization of the ODF. We also propose a rank-k tensor decomposition, known as CP decomposition, to extract the fiber information from the estimated ODF. We show that this decomposition is superior to fiber direction estimation via ODF maxima detection as it enables one to reach the full fiber separation resolution of the estimation technique. We assess the accuracy of this new framework by applying it to synthetic and experimentally obtained HARDI data. © 2011 Springer-Verlag.
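
    The rank-k CP decomposition writes a 3-way tensor as a sum of k rank-one terms, T[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r], and is typically fitted by alternating least squares. A generic ALS sketch on a synthetic exact-rank tensor (this is the plain unconstrained CP fit, not the paper's constrained ODF variant):

```python
import numpy as np

def khatri_rao(U, V):
    # column-wise Kronecker product: row (j*K + k), column r is U[j, r] * V[k, r]
    return np.einsum("jr,kr->jkr", U, V).reshape(-1, U.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(iters):                     # alternating least squares
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (6, 5, 4))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)     # exact rank-2 tensor
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
```

    In the HARDI setting the recovered rank-one factors play the role of individual fiber compartments, which is why CP can separate crossing fibers that ODF maxima detection merges.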

  20. Kinetics of thermal decomposition of aluminium hydride: I-non-isothermal decomposition under vacuum and in inert atmosphere (argon)

    International Nuclear Information System (INIS)

    Ismail, I.M.K.; Hawkins, T.

    2005-01-01

    Recently, interest in aluminium hydride (alane) as a rocket propulsion ingredient has been renewed due to improvements in its manufacturing process and an increase in thermal stability. When alane is added to solid propellant formulations, rocket performance is enhanced and the specific impulse increases. Preliminary work was performed at AFRL on the characterization and evaluation of two alane samples. Decomposition kinetics were determined from gravimetric TGA data and volumetric vacuum thermal stability (VTS) results. Chemical analysis showed the samples had 88.30 % (by weight) aluminium and 9.96 % hydrogen. The average density, as measured by helium pycnometry, was 1.486 g/cc. Scanning electron microscopy showed that the particles were mostly composed of sharp-edged crystallographic polyhedra such as simple cubes, cubic octahedra and hexagonal prisms. Thermogravimetric analysis was used to investigate the decomposition kinetics of alane in an argon atmosphere and to shed light on the mechanism of alane decomposition. Two kinetic models were successfully developed and used to propose a mechanism for the complete decomposition of alane and to predict its shelf life during storage. Alane decomposes in two steps. The slower (rate-determining) step is solely controlled by solid-state nucleation of aluminium crystals; the faster step is due to growth of the crystals. Thus, during decomposition, hydrogen gas is liberated and the initial polyhedral AlH₃ crystals yield a final mix of amorphous aluminium and aluminium crystals. After establishing the kinetic model, prediction calculations indicated that alane can be stored in an inert atmosphere at temperatures below 10 deg. C for long periods of time (e.g., 15 years) without significant decomposition. After 15 years of storage, the kinetic model predicts ∼0.1% decomposition, but storage at higher temperatures (e.g. 30 deg. C) is not recommended.
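
    Nucleation-and-growth kinetics of the kind described here are commonly modeled with an Avrami law, α(t) = 1 − exp(−(kt)^n), with an Arrhenius rate constant k(T) = A·exp(−Ea/RT). The constants below are invented purely to show the shelf-life extrapolation arithmetic; they are not the paper's fitted parameters:

```python
import math

R = 8.314        # gas constant, J/(mol K)
EA = 120e3       # activation energy, J/mol        (illustrative value only)
A_PRE = 4.0e8    # Arrhenius pre-exponential, 1/s  (illustrative value only)
N_AVRAMI = 2.0   # Avrami exponent                 (illustrative value only)

def rate_constant(T_kelvin):
    return A_PRE * math.exp(-EA / (R * T_kelvin))

def decomposed_fraction(t_seconds, T_kelvin):
    # Avrami (nucleation-and-growth) conversion law
    return 1.0 - math.exp(-(rate_constant(T_kelvin) * t_seconds) ** N_AVRAMI)

fifteen_years = 15 * 365.25 * 24 * 3600.0
alpha_10C = decomposed_fraction(fifteen_years, 283.15)  # storage at 10 deg. C
alpha_30C = decomposed_fraction(fifteen_years, 303.15)  # storage at 30 deg. C
```

    The Arrhenius exponential is what makes the 20-degree difference between the two storage temperatures translate into orders of magnitude in predicted decomposition, matching the abstract's storage recommendation qualitatively.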

  1. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Seshadhri, Comandur [The Ohio State Univ., Columbus, OH (United States); Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sariyuce, Ahmet Erdem [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States)

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
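
    The nucleus decomposition generalizes k-cores, and the classic k-core peeling algorithm shows the underlying idea in miniature: repeatedly remove a minimum-degree vertex, recording the largest minimum degree seen so far. A plain k-core sketch (not the paper's (r, s)-nucleus algorithm):

```python
from collections import defaultdict

def core_numbers(edges):
    # Peeling: repeatedly remove a minimum-degree vertex; a vertex's core
    # number is the largest minimum degree seen up to its removal.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=deg.get)
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle with one pendant vertex
core = core_numbers(edges)                 # triangle vertices sit in the 2-core
```

    Nuclei replace vertex degree with counts of small cliques inside larger ones, but the same peeling-by-containment structure yields the hierarchy the abstract describes.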

  2. Base catalyzed decomposition of toxic and hazardous chemicals

    International Nuclear Information System (INIS)

    Rogers, C.J.; Kornel, A.; Sparks, H.L.

    1991-01-01

    There are vast amounts of toxic and hazardous chemicals which have pervaded our environment during the past fifty years, leaving us with serious, crucial problems of remediation and disposal. The accumulation of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs, "dioxins") and pesticides in soil, sediments and living systems is a serious problem that is receiving considerable attention owing to the cancer-causing nature of these synthetic compounds. In 1989 and 1990, US EPA scientists developed two novel chemical processes to effect the dehalogenation of chlorinated solvents, PCBs, PCDDs, PCDFs, PCP and other pollutants in soil, sludge, sediment and liquids. This improved technology employs hydrogen as a nucleophile to replace halogens on halogenated compounds. Hydrogen as a nucleophile is not subject to the steric hindrance that affects other nucleophiles, so complete dehalogenation of organohalogens can be achieved. This report discusses the base catalyzed decomposition of toxic and hazardous chemicals.

  3. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  5. Closed form solution to a second order boundary value problem and its application in fluid mechanics

    International Nuclear Information System (INIS)

    Eldabe, N.T.; Elghazy, E.M.; Ebaid, A.

    2007-01-01

    The Adomian decomposition method is used by many researchers to investigate several scientific models. In this Letter, the modified Adomian decomposition method is applied to construct a closed form solution for a second order boundary value problem with a singularity.
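
    For a linear equation the Adomian recursion needs no Adomian polynomials and reduces to repeated integration of the previous term. Taking the simplest possible illustration u' = u, u(0) = 1 (not the Letter's singular second-order BVP): u₀ = 1 and u_{n+1} = ∫₀ˣ u_n dt, so the partial sums rebuild the Taylor series of eˣ. Polynomials are kept as exact coefficient lists:

```python
from fractions import Fraction

def integrate_from_zero(coeffs):
    # definite integral from 0 to x of a polynomial [c0, c1, ...] (c_k is x^k)
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(coeffs)]

def add_poly(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# Adomian recursion for u' = u, u(0) = 1
term = [Fraction(1)]                  # u_0 comes from the initial condition
u = [Fraction(1)]
for _ in range(8):
    term = integrate_from_zero(term)  # u_{n+1} = integral of u_n
    u = add_poly(u, term)             # partial sum of the decomposition series
# u now holds the coefficients 1, 1, 1/2!, ..., 1/8! of the truncated e^x series
```

    For nonlinear or singular problems, as in the Letter, the integral operator and the treatment of the nonlinearity change, but the same build-the-solution-term-by-term structure persists.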

  6. Radiation decomposition of alcohols and chloro phenols in micellar systems; Descomposicion por irradiacion de alcoholes y clorofenoles en sistemas micelares

    Energy Technology Data Exchange (ETDEWEB)

    Moreno A, J

    1999-12-31

    The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3 and 5 kGy. These compounds were used as typical pollutants in waste water, and the effects of water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, were studied. The results show that anionic surfactants such as sodium dodecyl sulfate (SDS) improve the radiation decomposition yield of ortho-chlorophenol, while cationic surfactants such as cetyl trimethylammonium chloride (CTAC) improve the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to those studied. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols; however, the radiation decomposition yield increased when surfactant concentrations exceeded the CMC. Decomposition was more marked for aromatic alcohols than for linear alcohols. For a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals and other reactive species formed in water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol such as ortho-chlorophenol contained an additional chlorine atom, the decomposition of this compound was almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. 
The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, like in advanced oxidation processes, or in combined treatment such as

  8. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    Science.gov (United States)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed, in Nguyen van Yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess its efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
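The comparison baseline named here, truncated singular value decomposition, can be sketched on a hypothetical ill-conditioned operator (a stand-in for the generalized Abel transform; the matrix, noise level, and truncation threshold below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Hypothetical ill-conditioned forward operator (stand-in for the Abel transform)
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = 10.0 ** -np.arange(50.0)             # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = V[:, :5].sum(axis=1)            # signal living in the leading modes
y = A @ x_true + 1e-8 * rng.normal(size=50)   # noisy measurements

# Truncated SVD: keep only singular values safely above the noise level
Us, ss, Vt = np.linalg.svd(A)
k = int(np.sum(ss > 1e-6))               # truncation rank
x_tsvd = Vt[:k].T @ ((Us[:, :k].T @ y) / ss[:k])

rel_err = np.linalg.norm(x_tsvd - x_true) / np.linalg.norm(x_true)
print(k, rel_err)
```

Truncation suppresses the noise amplification caused by the small singular values, at the price of losing the fine (local) features that the wavelet-vaguelette approach is designed to keep.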

  9. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  10. Functional decomposition with an efficient input support selection for sub-functions based on information relationship measures

    NARCIS (Netherlands)

    Rawski, M.; Jozwiak, L.; Luba, T.

    2001-01-01

    The functional decomposition of binary and multi-valued discrete functions and relations has been gaining more and more recognition. It has important applications in many fields of modern digital system engineering, such as combinational and sequential logic synthesis for VLSI systems, pattern

  11. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    Science.gov (United States)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of those pixels to be included within. Therefore, projection onto the boundary becomes an important option. But, projection in higher than 3 dimensional space is not possible with standard vector algebra: the cross-product is not defined. Methods: We describe a technique which employs Clifford Algebra to perform projection in an arbitrary number of dimensions. Clifford Algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparison of the accuracy of different threshold combinations versus ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford Algebra projection to mitigate noise.
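The projection step described here can also be sketched with ordinary least squares, which generalizes to any number of energy dimensions (the attenuation vectors below are invented, and this uses a least-squares projection rather than the authors' Clifford-algebra formulation):

```python
import numpy as np

# Hypothetical basis-material vectors across 4 energy bins (rows = materials)
M = np.array([
    [30.0, 25.0, 18.0, 12.0],   # e.g. calcium
    [80.0, 60.0, 35.0, 20.0],   # e.g. iodine
    [90.0, 75.0, 55.0, 40.0],   # e.g. gadolinium
])

def project_onto_span(p, basis):
    """Orthogonal projection of p onto the span of the basis rows (any dimension)."""
    coeffs, *_ = np.linalg.lstsq(basis.T, p, rcond=None)
    return basis.T @ coeffs

p_noisy = np.array([55.0, 43.0, 27.0, 90.0])   # pixel pushed off the material subspace by noise
p_proj = project_onto_span(p_noisy, M)

# the residual is orthogonal to every basis-material vector
print(M @ (p_noisy - p_proj))
```

The projected pixel can then be decomposed into material fractions without discarding it, which is the practical point of the paper's higher-dimensional projection.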

  12. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), above-ground microbes (0.2 mm mesh) and below-ground microbes (2 cm depth, 0.2 mm mesh). Four replicates of each set of bags were installed at each site, and bags were collected 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
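The single-pool negative exponential model used here fits k from mass-loss data by a log-linear regression; a minimal sketch (the collection times follow the abstract, the mass fractions are invented):

```python
import numpy as np

# Hypothetical litter-bag data: fraction of initial mass remaining at each
# collection time (months); times follow the abstract, masses are illustrative
t = np.array([1.0, 2.0, 4.0, 7.0, 12.0])
frac = np.array([0.93, 0.87, 0.76, 0.62, 0.45])

# Single-pool model: m(t)/m0 = exp(-k t)  =>  ln(m/m0) is linear in t with slope -k
slope, intercept = np.polyfit(t, np.log(frac), 1)
k = -slope   # decomposition rate, per month
print(k)
```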

  13. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. 
Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted to study this fundamental soil process and the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  14. Numerical experiments using CHIEF to treat the nonuniqueness in solving acoustic axisymmetric exterior problems via boundary integral equations

    Directory of Open Access Journals (Sweden)

    Adel A.K. Mohsen

    2010-07-01

    The problem of nonuniqueness (NU) of the solution of exterior acoustic problems via boundary integral equations is discussed in this article. The efficient implementation of the CHIEF (Combined Helmholtz Integral Equations Formulation) method for axisymmetric problems is studied. Interior axial fields are used to indicate the solution error and to select proper CHIEF points. The procedure makes full use of the LU decomposition as well as the forward solution derived in the solution. Implementations of the procedure for hard spheres are presented. Accurate results are obtained up to a normalised radius of ka = 20.983 using only one CHIEF point. The radiation from a uniformly vibrating sphere is also considered. Accurate results for ka up to 16.927 are obtained using two CHIEF points.
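The remark about making full use of the LU decomposition can be illustrated generically: factor the system matrix once, then reuse the factors for each additional right-hand side (a hypothetical small dense system, not a BEM matrix; requires SciPy):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical dense system: factor once, reuse for added constraints
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)   # well-conditioned stand-in matrix
lu, piv = lu_factor(A)                          # O(n^3) factorization, done once

b_forward = rng.normal(size=6)                  # original forward problem
b_extra = rng.normal(size=6)                    # new right-hand side, e.g. extra interior-point data

x1 = lu_solve((lu, piv), b_forward)             # each reuse is only O(n^2)
x2 = lu_solve((lu, piv), b_extra)
print(np.allclose(A @ x1, b_forward), np.allclose(A @ x2, b_extra))
```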

  15. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been carried out at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. For decomposition temperatures of 400-700°C, the reaction rate constant increased with increasing temperature. The optimum NaOH/monazite ratio was 1.5 and the optimum time was 3 hours. The relation between the NaOH/monazite ratio (x) and the conversion (y) follows the polynomial equation y = 0.1579x² - 0.2855x + 0.8301. The decomposition reaction of monazite with NaOH is a second-order reaction; the relationship between temperature (T) and the reaction rate constant (k) is ln k = -1006.8/T + 6.106, i.e. k = 448.541·e^(-1006.8/T), with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
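The reported fits can be evaluated directly. A quick numerical check (note that E = R × 1006.8 ≈ 8.37 kJ/mol, consistent with the quoted activation energy; 700°C = 973.15 K):

```python
import math

# Fits reported in the abstract (x = NaOH/monazite ratio, T in kelvin)
def conversion(x):
    return 0.1579 * x**2 - 0.2855 * x + 0.8301

def rate_constant(T):
    # ln k = -1006.8/T + 6.106, i.e. k = 448.541 * exp(-1006.8/T)
    return math.exp(-1006.8 / T + 6.106)

print(conversion(1.5))        # conversion at the reported optimum ratio
print(rate_constant(973.15))  # rate constant at 700 degrees C
```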

  16. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  17. Decomposition mechanism of trichloroethylene based on by-product distribution in the hybrid barrier discharge plasma process

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang-Bo [Industry Applications Research Laboratory, Korea Electrotechnology Research Institute, Changwon, Kyeongnam (Korea, Republic of); Oda, Tetsuji [Department of Electrical Engineering, The University of Tokyo, Tokyo 113-8656 (Japan)

    2007-05-15

    The hybrid barrier discharge plasma process combined with ozone decomposition catalysts was studied experimentally for decomposing dilute trichloroethylene (TCE). Based on a fundamental experiment on catalytic activities for ozone decomposition, MnO2 was selected for the main experiments because of its higher catalytic ability than other metal oxides. The lower the initial TCE concentration in the working gas, the larger the ozone concentration generated by the barrier discharge plasma treatment. Nearly complete decomposition of dichloro-acetylchloride (DCAC) into Cl2 and COx was observed for initial TCE concentrations of less than 250 ppm. C=C π-bond cleavage in TCE gave the carbon single bond of DCAC through an oxidation reaction during the barrier discharge plasma treatment. The DCAC was easily broken down in the subsequent catalytic reaction. When the oxygen concentration in the working gas was varied, oxygen radicals in the plasma space reacted strongly with precursors of DCAC compared with those of trichloro-acetaldehyde. A chlorine radical chain reaction is considered a plausible decomposition mechanism in the barrier discharge plasma treatment. The potential energy of oxygen radicals at the catalyst surface is considered an important factor in driving the chemical reactions.

  18. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the given location decision. The performance of the CSA was tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method.
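The two-layer idea can be sketched on a toy instance. Below, the outer loop is a simulated annealing over which facilities are open; for brevity the inner layer is replaced by an exact greedy allocation (the paper uses an inner simulated annealing there), and all costs are invented:

```python
import math
import random

random.seed(0)
# Hypothetical instance: open_cost[i] to open facility i,
# serve[i][j] = cost of serving customer j from facility i
open_cost = [12.0, 10.0, 14.0, 9.0]
serve = [[2, 9, 7, 4], [8, 3, 6, 5], [5, 6, 2, 8], [7, 4, 9, 3]]

def total_cost(open_set):
    if not open_set:
        return float("inf")
    # inner layer simplified: each customer picks its cheapest open facility
    alloc = sum(min(serve[i][j] for i in open_set) for j in range(4))
    return alloc + sum(open_cost[i] for i in open_set)

# outer simulated annealing over the set of open facilities
state = {0, 1, 2, 3}
best, best_cost = set(state), total_cost(state)
T = 10.0
while T > 0.01:
    cand = set(state)
    cand.symmetric_difference_update({random.randrange(4)})  # flip one facility
    d = total_cost(cand) - total_cost(state)
    if d < 0 or random.random() < math.exp(-d / T):
        state = cand
        if total_cost(state) < best_cost:
            best, best_cost = set(state), total_cost(state)
    T *= 0.95   # geometric cooling

print(sorted(best), best_cost)
```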

  19. Domain decomposition and finite volume schemes on non-matching grids; Decomposition de domaine et schemas volumes finis sur maillages non-conformes

    Energy Technology Data Exchange (ETDEWEB)

    Saas, L.

    2004-05-01

    This thesis deals with sedimentary basin modeling, whose goal is the prediction, through geological time, of the location and quantity of hydrocarbons present in the ground. Due to the natural and evolutionary decomposition of the sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate the flows of water and hydrocarbons in the ground. Conservation laws are used to model the flows and form coupled partial differential equations which must be discretized by a finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and prove that the associated global discretized problem is well posed. Then we give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (constant scheme and linear scheme). Then we present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example Robin conditions). Finally we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)

  20. Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm

    Science.gov (United States)

    Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam

    2017-04-01

    The paper presents a path-planning algorithm that enables multiple quadrotors to move toward a goal quickly while avoiding obstacles. Path planning poses several problems, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper combines a fuzzy logic algorithm with a cell decomposition algorithm. Fuzzy logic is an artificial intelligence technique applicable to robot path planning that can detect static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a map of robot paths. Using the two algorithms, the robot can reach the goal position and avoid obstacles, but finding the shortest path takes considerable time. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm that assigns weight values to the map for each quadrotor under decentralized control, so that each quadrotor can move to the goal position quickly along the shortest path. The simulations conducted show that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.
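The "potential-field weights plus shortest path" idea can be sketched on a hypothetical occupancy grid: cells near obstacles are made expensive to enter, and a shortest-path search (Dijkstra here; the paper's fuzzy logic layer is omitted) finds the cheapest route to the goal:

```python
import heapq
import numpy as np

# Hypothetical 10x10 occupancy grid: 0 = free cell, 1 = obstacle (a wall)
grid = np.zeros((10, 10), dtype=int)
grid[3:7, 5] = 1
start, goal = (0, 0), (9, 9)

# Potential-field weights: cells near obstacles cost more to enter
yy, xx = np.mgrid[0:10, 0:10]
weight = np.ones((10, 10))
for y, x in zip(*np.nonzero(grid)):
    weight += 5.0 / (1.0 + (yy - y) ** 2 + (xx - x) ** 2)

# Dijkstra over the weighted cell graph (4-connected)
dist, prev = {start: 0.0}, {}
pq = [(0.0, start)]
while pq:
    d, (y, x) = heapq.heappop(pq)
    if (y, x) == goal:
        break
    if d > dist[(y, x)]:
        continue
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < 10 and 0 <= nx < 10 and grid[ny, nx] == 0:
            nd = d + weight[ny, nx]
            if nd < dist.get((ny, nx), float("inf")):
                dist[(ny, nx)] = nd
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (nd, (ny, nx)))

path, node = [goal], goal
while node != start:
    node = prev[node]
    path.append(node)
path.reverse()
print(path[0], path[-1], len(path))
```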

  1. Students' Thinking Process in Solving Combination Problems Considered from Assimilation and Accommodation Framework

    Science.gov (United States)

    Jalan, Sukoriyanto; Nusantara, Toto; Subanji, Subanji; Chandra, Tjang Daniel

    2016-01-01

    This study aims to explain the thinking process of students in solving combination problems, considered from assimilation and accommodation frameworks. This research used a case study approach, classifying students into three ability categories, namely high, medium, and low. From each ability category, one student was…

  2. Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media

    KAUST Repository

    Galvis, Juan; Efendiev, Yalchin

    2010-01-01

    In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
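The role of the near-zero eigenvalues as counters of connected high-conductivity regions can be sketched with a small weighted graph Laplacian (an illustrative analogue; the coupling weights and gap threshold below are assumptions, not the paper's eigenvalue problem):

```python
import numpy as np

# Hypothetical weighted graph: two high-conductivity clusters joined by a weak link.
# The number of (near-)zero Laplacian eigenvalues reveals the cluster count,
# mirroring the spectral-gap criterion for connected high-conductivity regions.
n = 8
W = np.zeros((n, n))
W[:4, :4] = 1e4            # cluster 1: strongly coupled nodes 0-3
W[4:, 4:] = 1e4            # cluster 2: strongly coupled nodes 4-7
W[3, 4] = W[4, 3] = 1.0    # weak coupling between the clusters
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

evals = np.sort(np.linalg.eigvalsh(L))
small = int(np.sum(evals < 1e2))   # eigenvalues below the large spectral gap
print(small, evals[:3])
```

The two eigenvalues below the gap correspond to the two clusters; a coarse space spanning the associated eigenvectors is exactly the kind of space the preconditioner is built from.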

  3. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
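For r = 2 factors, the generalized Fisher decomposition is short enough to verify numerically. A sketch with invented two-sector data, showing that the factor effects multiply exactly to the aggregate change (the completeness property the equivalence rests on):

```python
import math

# Hypothetical two-sector example: V = sum_i (factor1_i * factor2_i)
x0, x1 = [100.0, 50.0], [120.0, 60.0]   # factor 1 per sector, period 0 -> 1
y0, y1 = [2.0, 5.0], [1.8, 5.5]         # factor 2 per sector

def agg(x, y):
    return sum(a * b for a, b in zip(x, y))

V0, V1 = agg(x0, y0), agg(x1, y1)

# Generalized Fisher (two-factor) effects: geometric mean of the
# Laspeyres-type and Paasche-type single-factor changes
Dx = math.sqrt((agg(x1, y0) / agg(x0, y0)) * (agg(x1, y1) / agg(x0, y1)))
Dy = math.sqrt((agg(x0, y1) / agg(x0, y0)) * (agg(x1, y1) / agg(x1, y0)))

print(Dx * Dy, V1 / V0)   # complete decomposition: no residual
```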

  4. Shock-tube study of the decomposition of tetramethylsilane using gas chromatography and high-repetition-rate time-of-flight mass spectrometry.

    Science.gov (United States)

    Sela, P; Peukert, S; Herzler, J; Fikri, M; Schulz, C

    2018-04-25

    The decomposition of tetramethylsilane (TMS) was studied in shock-tube experiments in a temperature range of 1270-1580 K and at pressures ranging from 1.5 to 2.3 bar behind reflected shock waves, combining gas chromatography/mass spectrometry (GC/MS) and high-repetition-rate time-of-flight mass spectrometry (HRR-TOF-MS). The main observed products were methane (CH4), ethylene (C2H4), ethane (C2H6), and acetylene (C2H2). In addition, the formation of a solid deposit was observed, which was identified to consist of silicon- and carbon-containing nanoparticles. A kinetics sub-mechanism with 13 silicon species and 20 silicon-containing reactions was developed. It was combined with the USC_MechII mechanism for hydrocarbons and was able to simulate the experimental observations. The main decomposition channel of TMS is the Si-C bond scission forming methyl (CH3) and trimethylsilyl (Si(CH3)3) radicals. The rate constant for TMS decomposition is represented by the Arrhenius expression k_total[TMS → products] = 5.9 × 10^12 exp(-267 kJ mol^-1/RT) s^-1.
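The quoted Arrhenius expression can be evaluated over the experimental temperature window; a quick sketch (temperatures from the abstract, everything else standard):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def k_tms(T):
    """First-order TMS decomposition rate constant: 5.9e12 * exp(-267 kJ/mol / RT), in 1/s."""
    return 5.9e12 * math.exp(-267_000.0 / (R * T))

for T in (1270.0, 1580.0):
    k = k_tms(T)
    print(T, k, math.log(2) / k)   # rate constant and corresponding half-life
```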

  5. Differential contribution of soil biota groups to plant litter decomposition as mediated by soil use

    Science.gov (United States)

    Falco, Liliana B.; Sandler, Rosana V.; Coviella, Carlos E.

    2015-01-01

    Plant decomposition is dependent on the activity of the soil biota and its interactions with climate, soil properties, and plant residue inputs. This work assessed the roles of different groups of the soil biota in litter decomposition and the way they are modulated by soil use. Litterbags of different mesh sizes for the selective exclusion of soil fauna by size (macro-, meso-, and microfauna) were filled with standardized dried leaves and placed on the same soil under different use intensities: naturalized grasslands, recent agriculture, and intensive agriculture fields. During five months, litterbags of each mesh size were collected once a month per system with five replicates. The remaining mass was measured and decomposition rates calculated. Differences were found between the biota groups, and they were dependent on soil use. Within systems, the results show that in the naturalized grasslands the macrofauna had the highest contribution to decomposition; in the recent agricultural system it was the combined activity of the macro- and mesofauna; and under intensive agricultural use it was the mesofauna activity. These results underscore the relative importance and activity of the different groups of the edaphic biota and the effects of different soil uses on soil biota activity. PMID:25780777

  7. Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition

    International Nuclear Information System (INIS)

    Clerc, S.

    1998-01-01

    In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to computing low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the nonlinear steady-state problem. (author)
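A Schwarz-type domain decomposition algorithm of the kind whose convergence is proved here can be illustrated on a toy problem: an alternating overlapping-Schwarz iteration for the 1D Poisson equation (the subdomain split and sweep count below are assumptions, unrelated to the thesis's fluid solver):

```python
import numpy as np

# 1D Poisson: -u'' = 1 on (0,1), u(0)=u(1)=0, finite differences on 41 nodes
n = 41
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)

def solve_subdomain(u, lo, hi):
    """Dirichlet solve of -u'' = f on interior nodes lo+1..hi-1, boundary values taken from u."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(30):                 # alternating Schwarz sweeps
    solve_subdomain(u, 0, 25)       # subdomain 1: nodes 0..25
    solve_subdomain(u, 15, n - 1)   # subdomain 2: nodes 15..40 (overlap 15..25)

x = np.linspace(0.0, 1.0, n)
u_exact = 0.5 * x * (1.0 - x)       # exact solution of -u'' = 1
print(np.max(np.abs(u - u_exact)))
```

Each sweep contracts the error by a fixed factor that depends on the overlap width, which is the geometric convergence the Schwarz analysis establishes.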

  8. Can we infer post mortem interval on the basis of decomposition rate? A case from a Portuguese cemetery.

    Science.gov (United States)

    Ferreira, M Teresa; Cunha, Eugénia

    2013-03-10

    Post mortem interval estimation is crucial in forensic sciences for both positive identification and reconstruction of perimortem events. However, reliable dating of skeletonized remains poses a scientific challenge, since decomposition of human remains involves a set of complex and highly variable processes. Many of the difficulties in determining the post mortem interval and/or the permanence of a body in a specific environment relate to the lack of systematic observations and research on human body decomposition in different environments. In March 2006, in order to solve a problem of misidentification, a team of the South Branch of the Portuguese National Institute of Legal Medicine carried out the exhumation of 25 identified individuals buried for almost five years in the same cemetery plot. Even though all individuals shared similar post mortem intervals, they presented different stages of decomposition. In order to analyze the post mortem factors associated with the different stages of decomposition displayed by the 25 exhumed individuals, the stages of decomposition were scored. Information regarding the age at death and sex of the individuals was gathered and recorded, as well as data on the cause of death and on grave and coffin characteristics. Although the observed distinct decay stages may be explained by the burial conditions, namely by the taphonomic micro-environments, individual endogenous factors also play an important role in differential decomposition, as witnessed by the present case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    International Nuclear Information System (INIS)

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-01-01

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using a physical breast phantom and an ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for the ex vivo breast specimen were also performed by comparison with the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for specimen thickness estimation errors of up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework for PCXD has been successfully applied to the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm.

  10. ARTIFICIAL NEURAL NETWORK AND WAVELET DECOMPOSITION IN THE FORECAST OF GLOBAL HORIZONTAL SOLAR RADIATION

    Directory of Open Access Journals (Sweden)

    Luiz Albino Teixeira Júnior

    2015-04-01

    Full Text Available This paper proposes a method (denoted WD-ANN) that combines Artificial Neural Networks (ANN) and the Wavelet Decomposition (WD) to generate short-term global horizontal solar radiation forecasts, which are essential information for evaluating the electrical power generated from the conversion of solar energy into electrical energy. The WD-ANN method consists of two basic steps: first, a level-p wavelet decomposition of the time series of interest is performed, generating p + 1 orthonormal wavelet components; second, the p + 1 wavelet components (generated in step 1) are inserted simultaneously into an ANN in order to generate the short-term forecast. The results showed that the proposed WD-ANN method substantially improved performance over the traditional ANN method.
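The decomposition step of the WD-ANN scheme can be sketched in a few lines. The following is a minimal, illustrative level-1 decomposition assuming the Haar wavelet (the abstract does not fix the wavelet family); a level-p decomposition would apply the same split recursively to the approximation part.

```python
# Minimal sketch of the WD step of WD-ANN, assuming a Haar wavelet
# (illustrative only; the paper does not specify the wavelet family).

def haar_decompose(x):
    """One level of the Haar DWT: returns (approximation, detail) components."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def haar_reconstruct(a, d):
    """Inverse transform: the two components reproduce the original series."""
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5]
    return x

series = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # toy radiation series
approx, detail = haar_decompose(series)
recon = haar_reconstruct(approx, detail)
```

In WD-ANN the resulting components (here `approx` and `detail`) would then be fed jointly into the network in place of the raw series.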

  11. Domain decomposition method for dynamic faulting under slip-dependent friction

    International Nuclear Information System (INIS)

    Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie

    2004-01-01

    The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we have obtained an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. The finite element form of the variational inequality is solved by a Schwarz method, assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in the other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)
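The flavour of such a Schwarz iteration is easiest to see on a much simpler model problem. The sketch below (an illustrative toy, not the authors' fault solver) applies alternating Schwarz with two overlapping subdomains to the 1D Poisson problem -u'' = 1 with homogeneous Dirichlet conditions, for which the finite-difference solution coincides with the exact u(x) = x(1-x)/2 at the nodes.

```python
def solve_tridiag(sub, main, sup, rhs):
    """Thomas algorithm for a tridiagonal system; sub[0] and sup[-1] unused."""
    n = len(main)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], rhs[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, h, f = 40, 1.0 / 40, 1.0
u = [0.0] * (n + 1)            # grid values; u[0] = u[n] = 0 are fixed
k1, k2 = 15, 25                # overlapping subdomains: nodes [0,k2] and [k1,n]

def subdomain_solve(lo, hi, left, right):
    """Solve -u'' = f on interior nodes lo..hi with Dirichlet data left/right."""
    m = hi - lo + 1
    sub = [-1.0 / h ** 2] * m
    main = [2.0 / h ** 2] * m
    sup = [-1.0 / h ** 2] * m
    rhs = [f] * m
    rhs[0] += left / h ** 2
    rhs[-1] += right / h ** 2
    return solve_tridiag(sub, main, sup, rhs)

for _ in range(30):            # alternating (multiplicative) Schwarz sweeps
    u[1:k2] = subdomain_solve(1, k2 - 1, u[0], u[k2])
    u[k1 + 1:n] = subdomain_solve(k1 + 1, n - 1, u[k1], u[n])

exact = [i * h * (1 - i * h) / 2 for i in range(n + 1)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
```

With this overlap the sweeps contract the error geometrically, so thirty iterations drive `err` to round-off level.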

  12. Wavelet Decomposition Method for $L_2$/TV-Image Deblurring

    KAUST Repository

    Fornasier, M.

    2012-07-17

    In this paper, we show additional properties of the limit of a sequence produced by the subspace correction algorithm proposed by Fornasier and Schönlieb [SIAM J. Numer. Anal., 47 (2009), pp. 3397-3428] for L 2/TV-minimization problems. An important but missing property of such a limiting sequence in that paper is the convergence to a minimizer of the original minimization problem, which was obtained in [M. Fornasier, A. Langer, and C.-B. Schönlieb, Numer. Math., 116 (2010), pp. 645-685] with an additional condition of overlapping subdomains. We can now determine when the limit is indeed a minimizer of the original problem. Inspired by the work of Vonesch and Unser [IEEE Trans. Image Process., 18 (2009), pp. 509-523], we adapt and specify this algorithm to the case of an orthogonal wavelet space decomposition for deblurring problems and provide an equivalence condition to the convergence of such a limiting sequence to a minimizer. We also provide a counterexample of a limiting sequence by the algorithm that does not converge to a minimizer, which shows the necessity of our analysis of the minimizing algorithm. © 2012 Society for Industrial and Applied Mathematics.

  13. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals that form a stable phase with lithium in binary alloys on the formation of intermetallic species in ternary amalgams, and their effect on thermal decomposition in contact with water, is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Cd(Hg) binary amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of the ternary amalgam in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and these results are compared with the decomposition of the binary amalgam Li(Hg). The decomposition rate is constant during one hour for both the binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of the ternary systems. A reaction mechanism in which an intermetallic species participates in the activated complex is proposed and a kinetic law is suggested. (author)

  14. Oxidative degradation of low and intermediate level Radioactive organic wastes 2. Acid decomposition on spent Ion-Exchange resins

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, N K; Eskander, S B [Radioisotope dept., atomic energy authority, (Egypt)

    1995-10-01

    The present work provides a simplified, effective and economic method for the chemical decomposition of radioactively contaminated solid organic waste, especially spent ion-exchange resins. The goal is to achieve volume reduction and to avoid the technical problems encountered in processes used for similar purposes (incineration, pyrolysis). Factors affecting the efficiency and kinetics of the oxidation of the ion-exchange resins in acid medium using hydrogen peroxide as oxidant, namely the duration of treatment and the acid-to-resin ratio, were studied systematically on a laboratory scale. Moreover, the percent composition of the off-gas evolved during the decomposition process was analysed. 3 figs., 5 tabs.

  15. Oxidative degradation of low and intermediate level Radioactive organic wastes 2. Acid decomposition on spent Ion-Exchange resins

    International Nuclear Information System (INIS)

    Ghattas, N.K.; Eskander, S.B.

    1995-01-01

    The present work provides a simplified, effective and economic method for the chemical decomposition of radioactively contaminated solid organic waste, especially spent ion-exchange resins. The goal is to achieve volume reduction and to avoid the technical problems encountered in processes used for similar purposes (incineration, pyrolysis). Factors affecting the efficiency and kinetics of the oxidation of the ion-exchange resins in acid medium using hydrogen peroxide as oxidant, namely the duration of treatment and the acid-to-resin ratio, were studied systematically on a laboratory scale. Moreover, the percent composition of the off-gas evolved during the decomposition process was analysed. 3 figs., 5 tabs

  16. Crop residue decomposition in Minnesota biochar amended plots

    OpenAIRE

    S. L. Weyers; K. A. Spokas

    2014-01-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with ...

  17. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm 2 , the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition yields two parts: one organic and volatile, the other inorganic, which remains and forms an ever-thickening screen against light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect.

  18. 1.7. Acid decomposition of kaolin clays of Ziddi Deposit. 1.7.1. The hydrochloric acid decomposition of kaolin clays and siallites

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Mirzoev, D.Kh.; Boboev, Kh.E.

    2016-01-01

    This chapter of the book is devoted to the hydrochloric acid decomposition of kaolin clays and siallites. The chemical composition of the kaolin clays and siallites was determined. The influence of temperature, process duration and acid concentration on the hydrochloric acid decomposition of kaolin clays and siallites was studied. The optimal conditions for the hydrochloric acid decomposition of kaolin clays and siallites were determined.

  19. Domain decomposition with local refinement for flow simulation around a nuclear waste disposal site: direct computation versus simulation using code coupling with OCamlP3L

    Energy Technology Data Exchange (ETDEWEB)

    Clement, F.; Vodicka, A.; Weis, P. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Martin, V. [Institut National de Recherches Agronomiques (INRA), 92 - Chetenay Malabry (France); Di Cosmo, R. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Paris-7 Univ., 75 (France)

    2003-07-01

    We consider the application of a non-overlapping domain decomposition method with non-matching grids based on Robin interface conditions to the problem of flow surrounding an underground nuclear waste disposal site. We show with a simple example how one can refine the mesh locally around the storage site with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3L environment, which is devoted to easing parallel programming. In this way we test the code coupling and at the same time exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)

  20. Domain decomposition with local refinement for flow simulation around a nuclear waste disposal site: direct computation versus simulation using code coupling with OCamlP3L

    International Nuclear Information System (INIS)

    Clement, F.; Vodicka, A.; Weis, P.; Martin, V.; Di Cosmo, R.

    2003-01-01

    We consider the application of a non-overlapping domain decomposition method with non-matching grids based on Robin interface conditions to the problem of flow surrounding an underground nuclear waste disposal site. We show with a simple example how one can refine the mesh locally around the storage site with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3L environment, which is devoted to easing parallel programming. In this way we test the code coupling and at the same time exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)

  1. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze the bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
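The interpolation at the heart of the method above can be illustrated with a toy example. The sketch below performs 1D multiquadric radial basis function interpolation with a dense Gaussian-elimination solve (the paper fits 2D extrema surfaces; the node locations, values, and shape parameter c = 1.0 here are arbitrary assumptions).

```python
import math

# Toy 1D multiquadric RBF interpolation sketch (illustrative assumptions:
# nodes, values, and shape parameter are invented for the example).

def multiquadric(r, c=1.0):
    return math.sqrt(r * r + c * c)

def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= fac * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

nodes = [0.0, 1.0, 2.0, 3.0, 4.0]            # e.g. locations of local maxima
values = [math.sin(x) for x in nodes]        # intensities at those points
A = [[multiquadric(xi - xj) for xj in nodes] for xi in nodes]
weights = gauss_solve(A, values)

def interpolate(x):
    """Evaluate the fitted RBF surface s(x) = sum_j w_j * phi(|x - x_j|)."""
    return sum(w * multiquadric(x - xj) for w, xj in zip(weights, nodes))
```

By construction the surface passes exactly through the extrema nodes, which is the property the envelope step of empirical mode decomposition relies on.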

  2. Empowerment of Students Critical Thinking Skills Through Implementation of Think Talk Write Combined Problem Based Learning

    OpenAIRE

    Yanuarta, Lidya; Gofur, Abdul; Indriwati, Sri Endah

    2016-01-01

    Critical thinking is a complex reflection process that helps individuals become more analytical in their thinking. Empowering critical thinking in students is needed so that they can resolve the problems that exist in their lives and are able to apply alternative solutions to problems in different situations. Therefore, Think Talk Write (TTW) combined with Problem Based Learning (PBL) was needed to empower critical thinking skills so that students were able to face the challenges of...

  3. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    Energy Technology Data Exchange (ETDEWEB)

    Zentner, I. [IMSIA, UMR EDF-ENSTA-CNRS-CEA 9219, Université Paris-Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau Cedex (France); Ferré, G., E-mail: gregoire.ferre@ponts.org [CERMICS – Ecole des Ponts ParisTech, 6 et 8 avenue Blaise Pascal, Cité Descartes, Champs sur Marne, 77455 Marne la Vallée Cedex 2 (France); Poirion, F. [Department of Structural Dynamics and Aeroelasticity, ONERA, BP 72, 29 avenue de la Division Leclerc, 92322 Chatillon Cedex (France); Benoit, M. [Institut de Recherche sur les Phénomènes Hors Equilibre (IRPHE), UMR 7342 (CNRS, Aix-Marseille Université, Ecole Centrale Marseille), 49 rue Frédéric Joliot-Curie, BP 146, 13384 Marseille Cedex 13 (France)

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aimed at representing spatio-temporal stochastic fields. The proposed double expansion makes it possible to build the model even in the case of large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
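In the spirit of the biorthogonal expansions described above (the exact form used by the authors may differ), a spatio-temporal field is separated into orthonormal spatial modes, orthonormal temporal modes, and scalar amplitudes:

```latex
% Generic biorthogonal (POD-like) expansion of a spatio-temporal field u(x,t);
% \varphi_k and \psi_k are orthonormal spatial and temporal modes.
u(x,t) \approx \sum_{k=1}^{K} \alpha_k \, \varphi_k(x) \, \psi_k(t),
\qquad
\int_D \varphi_k(x)\,\varphi_l(x)\,dx = \delta_{kl},
\qquad
\int_0^T \psi_k(t)\,\psi_l(t)\,dt = \delta_{kl}.
```

Across samples of the database the amplitudes \(\alpha_k\) become random variables; a second expansion of the same type separates this random part, which is what the Gaussian kernel estimator then simulates.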

  4. Multi hollow needle to plate plasmachemical reactor for pollutant decomposition

    International Nuclear Information System (INIS)

    Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.

    2001-01-01

    A modification of the classical multipin-to-plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor a mixture of air and pollutant flows through the needles, contrary to the classical reactor, where the mixture flows around the pins or through the channel plus through the hollow needles. We compare the toluene decomposition efficiency of (a) a reactor with the main stream of the mixture through the channel around the needles and a small flow rate through the needles and (b) the modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency for toluene increased more than sixfold in the modified reactor. The new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons in the gasoline distillation range. An average VOC decomposition efficiency of about 25% was reached. However, significant differences in the decomposition of various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl-tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the quality of VOC decomposition. (author)

  5. A handbook of decomposition methods in analytical chemistry

    International Nuclear Information System (INIS)

    Bok, R.

    1984-01-01

    Decomposition methods for metals, alloys, fluxes, slags, calcine, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials are described from the viewpoint of sample preparation for analysis. The methods are systematized according to the decomposition principle: thermal, with the use of electricity or irradiation, and dissolution with or without chemical reaction. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references

  6. A combined analytic-numeric approach for some boundary-value problems

    Directory of Open Access Journals (Sweden)

    Mustafa Turkyilmazoglu

    2016-02-01

    Full Text Available A combined analytic-numeric approach is undertaken in the present work for the solution of boundary-value problems on finite or semi-infinite domains. The equations treated arise specifically from the boundary layer analysis of some two- and three-dimensional flows in fluid mechanics. The purpose is to find quick but accurate enough solutions. Taylor expansions are computed at one of the boundaries and are then matched to the asymptotic or exact conditions at the other boundary. The technique is applied to the well-known Blasius and Kármán flows. The solutions obtained in terms of series compare favorably with existing ones in the literature.
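A numerical cross-check of the Blasius case is easy to set up. The sketch below uses classical RK4 shooting with bisection rather than the paper's series-matching technique, but recovers the same matching condition f''(0) ≈ 0.332 for f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 (the truncation length L = 10 and the bracket [0.1, 1.0] are assumptions of this sketch).

```python
# Shooting solution of the Blasius boundary-value problem; a stand-in
# numerical check, not the paper's Taylor-matching method.

def rhs(f, g, h):
    """Blasius system as first-order ODEs: f' = g, g' = h, h' = -0.5*f*h."""
    return g, h, -0.5 * f * h

def shoot(a, L=10.0, dt=0.01):
    """RK4-integrate from 0 to L with f''(0) = a; return f'(L)."""
    f, g, h = 0.0, 0.0, a
    for _ in range(int(L / dt)):
        k1 = rhs(f, g, h)
        k2 = rhs(f + dt/2*k1[0], g + dt/2*k1[1], h + dt/2*k1[2])
        k3 = rhs(f + dt/2*k2[0], g + dt/2*k2[1], h + dt/2*k2[2])
        k4 = rhs(f + dt*k3[0], g + dt*k3[1], h + dt*k3[2])
        f += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        g += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        h += dt/6 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return g

lo, hi = 0.1, 1.0                  # bracket for the unknown f''(0)
for _ in range(60):                # bisection on the residual f'(L) - 1
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
blasius_const = 0.5 * (lo + hi)    # the classical value is about 0.332
```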

  7. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    Science.gov (United States)

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two-step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including the isoconversional method, combined kinetic analysis, and the master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
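The idea of reproducing an overall curve as the weighted sum of per-step kinetic equations can be sketched as follows. All parameter values below are hypothetical placeholders, not the fitted ADN parameters, and both steps are reduced to simple first-order reactions f(α) = 1 − α at a constant temperature.

```python
import math

R = 8.314  # gas constant, J/(mol K)
# Hypothetical two-step model: weights c sum to 1; A in 1/s, E in J/mol.
steps = [
    {"c": 0.6, "A": 1e12, "E": 120e3},   # stands in for the exothermic step
    {"c": 0.4, "A": 1e10, "E": 110e3},   # stands in for the endothermic step
]

def overall_conversion(T=500.0, dt=0.01, t_end=50.0):
    """Euler-integrate d(alpha_i)/dt = k_i(T)*(1 - alpha_i) for each step
    and return the overall conversion history sum_i c_i * alpha_i."""
    k = [s["A"] * math.exp(-s["E"] / (R * T)) for s in steps]
    alphas = [0.0 for _ in steps]
    history = []
    for _ in range(int(t_end / dt)):
        alphas = [a + dt * ki * (1.0 - a) for a, ki in zip(alphas, k)]
        history.append(sum(s["c"] * a for s, a in zip(steps, alphas)))
    return history

curve = overall_conversion()
```

Kinetic deconvolution analysis then runs a forward model of this kind inside an optimizer, adjusting the contributions c_i, the Arrhenius parameters, and the f_i(α) models until the summed curve matches the measured TG/DSC data.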

  8. Industrial and simulation analysis of the nitrogen trichloride decomposition process in electrolytic chlorine production

    International Nuclear Information System (INIS)

    Tavares Neto, J.I.H.; Brito, K.D.; Vasconcelos, L.G.S.; Alves, J.J.N.; Fossy, M.F.; Brito, R.P.

    2007-01-01

    This work presents a dynamic simulation of the thermal decomposition of nitrogen trichloride (NCl 3 ) during electrolytic chlorine (Cl 2 ) production, using an industrial plant as a case study. NCl 3 is an extremely unstable and explosive compound, and the decomposition process has the following main problems: variability of the reactor temperature and loss of solvent. The results of this work will be used to establish a more efficient and safer control strategy and to analyze the loss of solvent during the dynamic period. The implemented model will also be used to study the use of a new solvent, considering that the currently used solvent will be prohibited from commercial use in 2010. The process was simulated using the commercial simulator Aspen TM and the simulations were validated with plant data. From the results of the simulation it can be concluded that the rate of decomposition depends strongly on the temperature of the reactor, which in turn depends strongly on the liquid Cl 2 (reflux) and gaseous Cl 2 flow rates fed to the system. The results also showed that the loss of solvent changes strongly during the dynamic period

  9. Microwave-assisted versus conventional decomposition procedures applied to a ceramic potsherd standard reference material by inductively coupled plasma atomic emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Papadopoulou, D.N.; Zachariadis, G.A.; Anthemidis, A.N.; Tsirliganis, N.C.; Stratis, J.A

    2004-03-03

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) is a powerful, sensitive analytical technique with numerous applications in chemical characterization including that of ancient pottery, mainly due to its multi-element character, and the relatively short time required for the analysis. A critical step in characterization studies of ancient pottery is the selection of a suitable decomposition procedure for the ceramic matrix. The current work presents the results of a comparative study of six decomposition procedures applied on a standard ceramic potsherd reference material, SARM 69. The investigated decomposition procedures included three microwave-assisted decomposition procedures, one wet decomposition (WD) procedure by conventional heating, one combined microwave-assisted and conventional heating WD procedure, and one fusion procedure. Chemical analysis was carried out by ICP-AES. Five major (Si, Al, Fe, Ca, Mg), three minor (Mn, Ba, Ti) and two trace (Cu, Co) elements were determined and compared with their certified values. Quantitation was performed at two different spectral lines for each element and multi-element matrix-matched calibration standards were used. The recovery values for the six decomposition procedures ranged between 75 and 110% with a few notable exceptions. Data were processed statistically in order to evaluate the investigated decomposition procedures in terms of recovery, accuracy and precision, and eventually select the most appropriate one for ancient pottery analysis.

  10. Non-invasive quantitative pulmonary V/Q imaging using Fourier decomposition MRI at 1.5T

    Energy Technology Data Exchange (ETDEWEB)

    Kjoerstad, Aasmund; Corteville, Dominique M.R.; Zoellner, Frank G.; Schad, Lothar R. [Heidelberg Univ., Medical Faculty Mannheim (Germany). Computer Assisted Clinical Medicine; Henzler, Thomas [Heidelberg Univ., Medical Faculty Mannheim (Germany). Inst. of Clinical Radiology and Nuclear Medicine; Schmid-Bindert, Gerald [Heidelberg Univ., Medical Faculty Mannheim (Germany). Interdisciplinary Thoracic Oncology

    2015-07-01

    Techniques for quantitative pulmonary perfusion and ventilation imaging using the Fourier Decomposition method were recently demonstrated. We combine these two techniques and show that ventilation-perfusion (V/Q) imaging is possible using only a single MR acquisition of less than thirty seconds. The Fourier Decomposition method is used in combination with two quantification techniques, which extract baselines from within the images themselves and thus allow quantification. For the perfusion, a region assumed to consist of 100% blood is utilized, while for the ventilation the zero-frequency component is used. V/Q imaging is then done by dividing the quantified ventilation map by the quantified perfusion map. The techniques were applied to ten healthy volunteers and fifteen patients diagnosed with lung cancer. A mean V/Q ratio of 1.15±0.22 was found for the healthy volunteers and a mean V/Q ratio of 1.93±0.83 for the non-afflicted lung in the patients. The mean V/Q ratio in the afflicted (tumor-bearing) lung was found to be 1.61±1.06. Functional defects were clearly visible in many of the patient images, but 5 of 15 patient images had to be excluded due to artifacts or low SNR, indicating a lack of robustness. Conclusion: Non-invasive, quantitative V/Q imaging is possible using Fourier Decomposition MRI. The method requires only a single acquisition of less than 30 seconds, but robustness in patients remains an issue.
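For a single voxel, the Fourier Decomposition step amounts to reading amplitudes off the spectrum of the registered time series. The sketch below uses synthetic numbers (the sampling rate, frequencies and amplitudes are invented for illustration; real data are noisy and the frequencies must be found from the spectrum): ventilation-weighted signal appears at the respiratory frequency, perfusion-weighted signal at the cardiac frequency, and the zero-frequency (mean) component provides the baseline used for quantification.

```python
import cmath
import math

fs, n = 3.0, 90                   # 3 images/s for 30 s (synthetic acquisition)
f_resp, f_card = 0.3, 1.0         # Hz; chosen to fall on exact DFT bins
signal = [100.0 + 5.0 * math.sin(2 * math.pi * f_resp * k / fs)
                + 2.0 * math.sin(2 * math.pi * f_card * k / fs)
          for k in range(n)]      # one voxel's registered time series

def amplitude_at_bin(x, m):
    """Amplitude of the sinusoid at DFT bin m (for 0 < m < len(x)/2)."""
    X = sum(xk * cmath.exp(-2j * math.pi * m * k / len(x))
            for k, xk in enumerate(x))
    return 2.0 * abs(X) / len(x)

mean_level = sum(signal) / n                         # zero-frequency component
vent = amplitude_at_bin(signal, round(f_resp * n / fs))   # respiratory bin
perf = amplitude_at_bin(signal, round(f_card * n / fs))   # cardiac bin
```

Per-voxel ventilation and perfusion values quantified this way are assembled into maps, and the V/Q map is their voxel-wise ratio.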

  11. Non-invasive quantitative pulmonary V/Q imaging using Fourier decomposition MRI at 1.5T.

    Science.gov (United States)

    Kjørstad, Åsmund; Corteville, Dominique M R; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R

    2015-12-01

    Techniques for quantitative pulmonary perfusion and ventilation imaging using the Fourier Decomposition method were recently demonstrated. We combine these two techniques and show that ventilation-perfusion (V/Q) imaging is possible using only a single MR acquisition of less than thirty seconds. The Fourier Decomposition method is used in combination with two quantification techniques, which extract baselines from within the images themselves and thus allow quantification. For the perfusion, a region assumed to consist of 100% blood is utilized, while for the ventilation the zero-frequency component is used. V/Q imaging is then done by dividing the quantified ventilation map by the quantified perfusion map. The techniques were applied to ten healthy volunteers and fifteen patients diagnosed with lung cancer. A mean V/Q ratio of 1.15 ± 0.22 was found for the healthy volunteers and a mean V/Q ratio of 1.93 ± 0.83 for the non-afflicted lung in the patients. The mean V/Q ratio in the afflicted (tumor-bearing) lung was found to be 1.61 ± 1.06. Functional defects were clearly visible in many of the patient images, but 5 of 15 patient images had to be excluded due to artifacts or low SNR, indicating a lack of robustness. Non-invasive, quantitative V/Q imaging is possible using Fourier Decomposition MRI. The method requires only a single acquisition of less than 30 seconds, but robustness in patients remains an issue. Copyright © 2015. Published by Elsevier GmbH.

  12. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  13. TH-A-18C-07: Noise Suppression in Material Decomposition for Dual-Energy CT

    International Nuclear Information System (INIS)

    Dong, X; Petrongolo, M; Wang, T; Zhu, L

    2014-01-01

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered back-projection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown the proposed method improves image uniformity and reduces noise level without resolution loss. In the future
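    The penalized estimation step described above can be caricatured in one dimension: minimize (x − y)ᵀW(x − y) + β‖Dx‖², where W stands in for the inverse variance of the noisy decomposed image and D is a finite-difference smoothness operator. All dimensions, weights, and parameter values below are illustrative stand-ins, not the authors' DECT implementation.

```python
import numpy as np

def pwls_denoise(y, w, beta):
    """Penalized weighted least squares on a 1-D signal.

    Minimizes (x - y)^T W (x - y) + beta * ||D x||^2, with W = diag(w) and
    D the first-difference operator; the quadratic has the closed-form
    solution of the normal equations (W + beta D^T D) x = W y.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n first-difference matrix
    A = np.diag(w) + beta * D.T @ D
    return np.linalg.solve(A, w * y)

rng = np.random.default_rng(0)
truth = np.repeat([1.0, 3.0], 25)        # piecewise-constant "material image"
noisy = truth + rng.normal(0, 0.4, truth.size)
denoised = pwls_denoise(noisy, w=np.full(noisy.size, 1.0), beta=5.0)
```

    In the paper's setting w would come from the estimated variance-covariance matrix of the decomposed images rather than being uniform.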

  14. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD)-like decomposition of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues for structured matrices such as the Hamiltonian matrix JAA^T.

  15. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    Science.gov (United States)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requiring nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual

  16. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  17. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  18. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The

  19. Decomposition of the Gender Wage Gap Using Matching: An Application for Switzerland

    OpenAIRE

    Dragana Djurdjevic; Sergiy Radyakin

    2007-01-01

    In this paper, we investigate the gender wage differentials for Switzerland. Using micro data from the Swiss Labour Force Survey, we apply a matching method to decompose the wage gap in Switzerland. Compared to the traditional Oaxaca-Blinder decomposition, this nonparametric technique does not require any estimation of wage equations and accounts for wage differences that can be due to differences in the support. Our estimation results show that the problem of gender differences in the suppor...

  20. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. We have introduced new techniques, based on symmetry considerations, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here we utilize these techniques to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli polynomials and Euler polynomials along with the appropriate dual states

  1. Dimensional analysis and qualitative methods in problem solving: II

    International Nuclear Information System (INIS)

    Pescetti, D

    2009-01-01

    We show that the underlying mathematical structure of dimensional analysis (DA), in the qualitative methods in problem-solving context, is the algebra of affine spaces. In particular, we show that the qualitative problem-solving procedure based on the parallel decomposition of a problem into simple special cases yields the new mathematical concepts of special points and special representations of affine spaces. A qualitative problem-solving algorithm piloted by the mathematics of DA is illustrated by a set of examples.

  2. Decomposition of forest products buried in landfills

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.

    2013-01-01

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  3. Decomposition of forest products buried in landfills

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)

    2013-11-15

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  4. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which

  5. Hybrid and parallel domain-decomposition methods development to enable Monte Carlo for reactor analyses

    International Nuclear Information System (INIS)

    Wagner, J.C.; Mosher, S.W.; Evans, T.M.; Peplow, D.E.; Turner, J.A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method

  6. NOx Direct Decomposition: Potentially Enhanced Thermodynamics and Kinetics on Chemically Modified Ferroelectric Surfaces

    Science.gov (United States)

    Kakekhani, Arvin; Ismail-Beigi, Sohrab

    2014-03-01

    NOx are regulated pollutants produced during automotive combustion. As part of an effort to design catalysts for NOx decomposition that operate in oxygen-rich environments and permit greater fuel efficiency, we study the chemistry of NOx on (001) ferroelectric surfaces. Changing the polarization at such surfaces modifies their electronic properties and leads to switchable surface chemistry. Using first-principles theory, our previous work has shown that the addition of a catalytic RuO2 monolayer on the ferroelectric PbTiO3 surface makes direct decomposition of NO thermodynamically favorable for one polarization. Furthermore, the usual problem of blockage of catalytic sites by strong oxygen binding is overcome by flipping the polarization, which helps desorb the oxygen. We describe a thermodynamic cycle for direct NO decomposition followed by desorption of N2 and O2. We provide energy barriers and transition states for key steps of the cycle, as well as describing their dependence on polarization direction. We end by pointing out how a switchable order parameter of the substrate, in this case the ferroelectric polarization, allows us to break away from some standard compromises of catalyst design (e.g., the Sabatier principle). This enlarges the set of potentially catalytic metals. Primary support from Toyota Motor Engineering and Manufacturing, North America, Inc.

  7. Singular solution of the Feller diffusion equation via a spectral decomposition

    Science.gov (United States)

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures that the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  8. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Bo [William R. Wiley Environmental; Kowalski, Karol [William R. Wiley Environmental

    2017-08-11

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For atomic basis set sizes N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation is O(N_b^{2.5~3}), versus the O(N_b^{3~4}) of single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. Accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
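    The compound two-step scheme (pivoted Cholesky followed by a truncated SVD of the Cholesky factor) can be sketched on a generic symmetric positive semi-definite matrix standing in for the reshaped integral tensor. The loop structure, function names, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-10):
    """Greedy pivoted Cholesky: M ~ L @ L.T, stopping once the residual
    diagonal drops below tol (so rank-deficient M yields a thin L)."""
    n = M.shape[0]
    d = np.diag(M).astype(float)         # residual diagonal
    L = np.zeros((n, 0))
    while d.max() > tol and L.shape[1] < n:
        p = int(np.argmax(d))            # pivot on the largest residual
        col = (M[:, p] - L @ L[p]) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d = d - col ** 2
    return L

def compound_decomposition(M, chol_tol=1e-10, svd_tol=1e-8):
    """Step two: compress the Cholesky factor with a truncated SVD,
    returning B such that M ~ B @ B.T with rank = #(singular values kept)."""
    L = pivoted_cholesky(M, chol_tol)
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    k = int(np.sum(s > svd_tol))         # drop negligible singular values
    return U[:, :k] * s[:k]

# Low-rank SPD test matrix (exact rank 5)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
M = X @ X.T
B = compound_decomposition(M)
```

    The SVD step is what allows the final rank to be smaller than the number of Cholesky vectors, which is where the storage savings described in the abstract come from.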

  9. Single and Combined Effects of Pesticide Seed Dressings and Herbicides on Earthworms, Soil Microorganisms, and Litter Decomposition.

    Science.gov (United States)

    Van Hoesel, Willem; Tiefenbacher, Alexandra; König, Nina; Dorn, Verena M; Hagenguth, Julia F; Prah, Urša; Widhalm, Theresia; Wiklicky, Viktoria; Koller, Robert; Bonkowski, Michael; Lagerlöf, Jan; Ratzenböck, Andreas; Zaller, Johann G

    2017-01-01

    Seed dressing, i.e., the treatment of crop seeds with insecticides and/or fungicides aiming to protect seeds from pests and diseases, is widely used in conventional agriculture. During the growing season, those crop fields often receive additional broadband herbicide applications. However, despite this broad utilization, very little is known about potential side effects or interactions between these different pesticide classes on soil organisms. In a greenhouse pot experiment, we studied single and interactive effects of seed dressing of winter wheat (Triticum aestivum L. var. Capo) with neonicotinoid insecticides and/or strobilurin and triazolinthione fungicides and an additional one-time application of a glyphosate-based herbicide on the activity of earthworms, soil microorganisms, litter decomposition, and crop growth. To further address food-web interactions, earthworms were introduced to half of the experimental units as an additional experimental factor. Seed dressings significantly reduced the surface activity of earthworms, with no difference whether insecticides or fungicides were used. Moreover, seed dressing effects on earthworm activity were intensified by herbicides (significant herbicide × seed dressing interaction). Neither seed dressings nor herbicide application affected litter decomposition, soil basal respiration, microbial biomass, or specific respiration. Seed dressing also did not affect wheat growth. We conclude that interactive effects of different pesticide classes on soil biota and processes should receive more attention in ecotoxicological research.

  10. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  11. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  12. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    Science.gov (United States)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  13. Formation of volatile decomposition products by self-radiolysis of tritiated thymidine

    International Nuclear Information System (INIS)

    Shiba, Kazuhiro; Mori, Hirofumi

    1997-01-01

    In order to estimate the internal exposure dose in experiments using tritiated thymidine, the rate of volatile ³H-decomposition of several tritiated thymidine samples was measured. The decomposition rate of (methyl-³H)thymidine in water was over 80% in less than one year after the initial analysis. (Methyl-³H)thymidine decomposed into volatile and non-volatile ³H-decomposition products. The ratio of volatile ³H-decomposition products increased with the increasing rate of decomposition of (methyl-³H)thymidine. The volatile ³H-decomposition products consisted of two components, of which the main component was tritiated water. The internal exposure dose caused by the inhalation of such volatile ³H-decomposition products of (methyl-³H)thymidine was assumed to be several μSv. (author)

  14. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    SUMMARY: Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  15. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  16. Numerical solution of large nonlinear boundary value problems by quadratic minimization techniques

    International Nuclear Information System (INIS)

    Glowinski, R.; Le Tallec, P.

    1984-01-01

    The objective of this paper is to describe the numerical treatment of large, highly nonlinear two- or three-dimensional boundary value problems by quadratic minimization techniques. In all the situations where these techniques were applied, the methodology remains the same and is organized as follows: 1) derive a variational formulation of the original boundary value problem and approximate it by Galerkin methods; 2) transform this variational formulation into a quadratic minimization problem (least squares methods) or into a sequence of quadratic minimization problems (augmented Lagrangian decomposition); 3) solve each quadratic minimization problem by a conjugate gradient method with preconditioning, the preconditioning matrix being sparse, positive definite, and fixed once and for all in the iterative process. This paper illustrates the methodology on two examples: least squares solution methods and their application to the solution of the unsteady Navier-Stokes equations for incompressible viscous fluids; and augmented Lagrangian decomposition techniques and their application to the solution of equilibrium problems in finite elasticity.
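
    The core numerical kernel in step 3 above, a preconditioned conjugate gradient iteration with a preconditioner fixed throughout the process, can be sketched as follows. This is a generic illustration with a Jacobi (diagonal) preconditioner on a small symmetric positive definite system, not the authors' implementation:

    ```python
    import numpy as np

    def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=200):
        """Solve A x = b by conjugate gradients with a fixed preconditioner.

        M_inv is a callable applying the (fixed) inverse preconditioner to a
        residual vector; it is never updated during the iteration.
        """
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Hypothetical example: a small SPD system with Jacobi preconditioning,
    # the diagonal being extracted once and kept fixed.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    diag = np.diag(A)
    x = preconditioned_cg(A, b, lambda r: r / diag)
    ```

    In the paper's setting the matrix would come from a Galerkin discretization and the sparse preconditioner would be factored once before the iterations begin; the structure of the loop is unchanged.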

  17. Thermal decomposition of lanthanide and actinide tetrafluorides

    International Nuclear Information System (INIS)

    Gibson, J.K.; Haire, R.G.

    1988-01-01

    The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides TbF4, CmF4, and AmF4 have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to sublime congruently as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs

  18. Thermal decomposition of UO3·2H2O

    International Nuclear Information System (INIS)

    Flament, T.A.

    1998-01-01

    The first part of the report summarizes the literature data on the uranium trioxide-water system. In the second part, the experimental aspects are presented. An experimental program has been set up to determine the steps and species involved in the decomposition of uranium oxide dihydrate. Particular attention has been paid to determining both the loss of free water (moisture in the fuel) and the loss of chemically bound water (decomposition of hydrates). The influence of water pressure on decomposition has been taken into account.

  19. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Owing to the increasingly significant energy crisis, the exploitation and utilization of new clean energy is gaining growing attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries are reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries. On this basis, the combined model is developed based on the optimal virtual prediction scheme, whose weight matrix is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. In addition, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
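
    The sample entropy measure mentioned above, used to group decomposed modes into subseries, can be sketched as follows. This is a standard SampEn implementation for illustration, not the paper's exact configuration; the embedding dimension m=2 and tolerance r = 0.2·std are common defaults assumed here:

    ```python
    import numpy as np

    def sample_entropy(series, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series, r = r_factor * std.

        Lower values indicate a more regular (more predictable) series;
        higher values indicate greater irregularity.
        """
        x = np.asarray(series, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(length):
            # Embed the series into overlapping vectors of the given length,
            # then count template pairs within Chebyshev distance r
            # (self-matches excluded).
            emb = np.array([x[i:i + length] for i in range(n - length + 1)])
            count = 0
            for i in range(len(emb)):
                d = np.max(np.abs(emb - emb[i]), axis=1)
                count += np.sum(d <= r) - 1
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    # A regular signal should score lower than white noise.
    sig = np.sin(np.linspace(0, 20 * np.pi, 500))
    noise = np.random.default_rng(0).normal(size=500)
    ```

    In a scheme like the one described, modes with similar entropy values would be summed into one subseries before a base forecasting model is fitted to each.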

  20. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
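
    One of the existing decompositions evaluated above, the Fibonacci (Zeckendorf) representation, can be illustrated with a short sketch; the 16-plane scheme proposed in the paper itself is not reproduced here:

    ```python
    def fib_bitplanes(value, nbits=12):
        """Decompose a pixel intensity into Fibonacci 'bit-planes'.

        Greedy (largest weight first) decomposition over the Fibonacci
        weights 1, 2, 3, 5, 8, ... yields the Zeckendorf representation,
        which never has two adjacent 1 coefficients.
        """
        fibs = [1, 2]
        while len(fibs) < nbits:
            fibs.append(fibs[-1] + fibs[-2])
        bits = [0] * nbits
        for i in range(nbits - 1, -1, -1):
            if fibs[i] <= value:
                bits[i] = 1
                value -= fibs[i]
        return bits

    def from_bitplanes(bits):
        """Reassemble a pixel intensity from its Fibonacci bit-planes."""
        fibs = [1, 2]
        while len(fibs) < len(bits):
            fibs.append(fibs[-1] + fibs[-2])
        return sum(b * f for b, f in zip(bits, fibs))

    # The maximum 8-bit intensity decomposes without overflow.
    bits = fib_bitplanes(255)
    ```

    Embedding then works by flipping a coefficient in a chosen plane, with lower planes perturbing the pixel value less; the paper's contribution is a different 16-plane decomposition whose plane weights sum to at most 255.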

  1. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    Science.gov (United States)

    Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; Martinez, Raul E.; Walker, Michael J.; Kreisberg, Nathan M.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-04-01

    Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA

  2. Wood decomposition as influenced by invertebrates.

    Science.gov (United States)

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  3. The static quark potential from the gauge independent Abelian decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Cundy, Nigel, E-mail: ndcundy@gmail.com [Lattice Gauge Theory Research Center, FPRD, and CTP, Department of Physics & Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of); Cho, Y.M. [Administration Building 310-4, Konkuk University, Seoul 143-701 (Korea, Republic of); Department of Physics & Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of); Lee, Weonjong; Leem, Jaehoon [Lattice Gauge Theory Research Center, FPRD, and CTP, Department of Physics & Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of)

    2015-06-15

    We investigate the relationship between colour confinement and the gauge independent Cho–Duan–Ge Abelian decomposition. The decomposition is defined in terms of a colour field n; the principal novelty of our study is that we have used a unique definition of this field in terms of the eigenvectors of the Wilson Loop. This allows us to establish an equivalence between the path-ordered integral of the non-Abelian gauge fields and an integral over an Abelian restricted gauge field which is tractable both theoretically and numerically in lattice QCD. We circumvent path ordering without requiring an additional path integral. By using Stokes' theorem, we can compute the Wilson Loop in terms of a surface integral over a restricted field strength, and show that the restricted field strength may be dominated by certain structures, which occur when one of the quantities parametrising the colour field n winds itself around a non-analyticity in the colour field. If they exist, these structures will lead to an area law scaling for the Wilson Loop and provide a mechanism for quark confinement. Unlike most studies of confinement using the Abelian decomposition, we do not rely on a dual-Meissner effect to create the inter-quark potential. We search for these structures in quenched lattice QCD. We perform the Abelian decomposition, and compare the electric and magnetic fields with the patterns expected theoretically. We find that the restricted field strength is dominated by objects which may be peaks of a single lattice spacing in size or extended string-like lines of electromagnetic flux. The objects are not isolated monopoles, as they generate electric fields in addition to magnetic fields, and the fields are not spherically symmetric, but may be either caused by a monopole/anti-monopole condensate, some other types of topological objects, or a combination of these. Removing these peaks removes the area law scaling of the string tension, suggesting that they are

  4. The static quark potential from the gauge independent Abelian decomposition

    Directory of Open Access Journals (Sweden)

    Nigel Cundy

    2015-06-01

    Full Text Available We investigate the relationship between colour confinement and the gauge independent Cho–Duan–Ge Abelian decomposition. The decomposition is defined in terms of a colour field n; the principal novelty of our study is that we have used a unique definition of this field in terms of the eigenvectors of the Wilson Loop. This allows us to establish an equivalence between the path-ordered integral of the non-Abelian gauge fields and an integral over an Abelian restricted gauge field which is tractable both theoretically and numerically in lattice QCD. We circumvent path ordering without requiring an additional path integral. By using Stokes' theorem, we can compute the Wilson Loop in terms of a surface integral over a restricted field strength, and show that the restricted field strength may be dominated by certain structures, which occur when one of the quantities parametrising the colour field n winds itself around a non-analyticity in the colour field. If they exist, these structures will lead to an area law scaling for the Wilson Loop and provide a mechanism for quark confinement. Unlike most studies of confinement using the Abelian decomposition, we do not rely on a dual-Meissner effect to create the inter-quark potential. We search for these structures in quenched lattice QCD. We perform the Abelian decomposition, and compare the electric and magnetic fields with the patterns expected theoretically. We find that the restricted field strength is dominated by objects which may be peaks of a single lattice spacing in size or extended string-like lines of electromagnetic flux. The objects are not isolated monopoles, as they generate electric fields in addition to magnetic fields, and the fields are not spherically symmetric, but may be either caused by a monopole/anti-monopole condensate, some other types of topological objects, or a combination of these. Removing these peaks removes the area law scaling of the string tension, suggesting that

  5. The static quark potential from the gauge independent Abelian decomposition

    Science.gov (United States)

    Cundy, Nigel; Cho, Y. M.; Lee, Weonjong; Leem, Jaehoon

    2015-06-01

    We investigate the relationship between colour confinement and the gauge independent Cho-Duan-Ge Abelian decomposition. The decomposition is defined in terms of a colour field n; the principal novelty of our study is that we have used a unique definition of this field in terms of the eigenvectors of the Wilson Loop. This allows us to establish an equivalence between the path-ordered integral of the non-Abelian gauge fields and an integral over an Abelian restricted gauge field which is tractable both theoretically and numerically in lattice QCD. We circumvent path ordering without requiring an additional path integral. By using Stokes' theorem, we can compute the Wilson Loop in terms of a surface integral over a restricted field strength, and show that the restricted field strength may be dominated by certain structures, which occur when one of the quantities parametrising the colour field n winds itself around a non-analyticity in the colour field. If they exist, these structures will lead to an area law scaling for the Wilson Loop and provide a mechanism for quark confinement. Unlike most studies of confinement using the Abelian decomposition, we do not rely on a dual-Meissner effect to create the inter-quark potential. We search for these structures in quenched lattice QCD. We perform the Abelian decomposition, and compare the electric and magnetic fields with the patterns expected theoretically. We find that the restricted field strength is dominated by objects which may be peaks of a single lattice spacing in size or extended string-like lines of electromagnetic flux. The objects are not isolated monopoles, as they generate electric fields in addition to magnetic fields, and the fields are not spherically symmetric, but may be either caused by a monopole/anti-monopole condensate, some other types of topological objects, or a combination of these. 
Removing these peaks removes the area law scaling of the string tension, suggesting that they are responsible for

  6. Radiolytic decomposition of 4-bromodiphenyl ether

    International Nuclear Information System (INIS)

    Tang Liang; Xu Gang; Wu Wenjing; Shi Wenyan; Liu Ning; Bai Yulei; Wu Minghong

    2010-01-01

    Polybrominated diphenyl ethers (PBDEs), which are widespread in the environment, are mainly removed by photochemical and anaerobic microbial degradation. In this paper, the decomposition of 4-bromodiphenyl ether (BDE-3), a PBDE homologue, is investigated by electron beam irradiation of its ethanol/water solution (reduction system) and acetonitrile/water solution (oxidation system). The radiolytic products were determined by GC coupled with an electron capture detector, and the reaction rate constant of the solvated electron (e−sol) in the reduction system was measured as 2.7 × 10^10 L·mol−1·s−1 by pulse radiolysis. The results show that the BDE-3 concentration strongly affects the decomposition ratio in alkaline solution, and that the reduction system has a higher BDE-3 decomposition rate than the oxidation system. This indicates that BDE-3 was reduced by effectively capturing e−sol during radiolysis. (authors)

  7. Structural change of the physical economy. Decomposition analysis of physical and hybrid-unit input-output tables

    International Nuclear Information System (INIS)

    Hoekstra, R.

    2003-01-01

    Economic processes generate a variety of material flows, which cause resource problems through the depletion of natural resources and environmental issues due to the emission of pollutants. This thesis presents an analytical method to study the relationship between the monetary economy and the 'physical economy'. In particular, this method can assess the impact of structural change in the economy on physical throughput. The starting point for the approach is the development of an elaborate version of the physical input-output table (PIOT), which acts as an economic-environmental accounting framework for the physical economy. In the empirical application, hybrid-unit input-output (I/O) tables, which combine physical and monetary information, are constructed for iron and steel, and plastic products for the Netherlands for the years 1990 and 1997. The impact of structural change on material flows is analyzed using Structural Decomposition Analysis (SDA), which specifies effects such as sectoral shifts, technological change, and alterations in consumer spending and international trade patterns. The study thoroughly reviews the application of SDA to environmental issues, compares the method with other decomposition methods, and develops new mathematical specifications. An SDA is performed using the hybrid-unit input-output tables for the Netherlands. The results are subsequently used in novel forecasting and backcasting scenario analyses for the period 1997-2030. The results show that dematerialization of iron and steel, and plastics, has generally not occurred in the recent past (1990-1997), and will not occur, under a wide variety of scenario assumptions, in the future (1997-2030)

  8. Structural change of the physical economy. Decomposition analysis of physical and hybrid-unit input-output tables

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, R.

    2003-10-01

    Economic processes generate a variety of material flows, which cause resource problems through the depletion of natural resources and environmental issues due to the emission of pollutants. This thesis presents an analytical method to study the relationship between the monetary economy and the 'physical economy'. In particular, this method can assess the impact of structural change in the economy on physical throughput. The starting point for the approach is the development of an elaborate version of the physical input-output table (PIOT), which acts as an economic-environmental accounting framework for the physical economy. In the empirical application, hybrid-unit input-output (I/O) tables, which combine physical and monetary information, are constructed for iron and steel, and plastic products for the Netherlands for the years 1990 and 1997. The impact of structural change on material flows is analyzed using Structural Decomposition Analysis (SDA), which specifies effects such as sectoral shifts, technological change, and alterations in consumer spending and international trade patterns. The study thoroughly reviews the application of SDA to environmental issues, compares the method with other decomposition methods, and develops new mathematical specifications. An SDA is performed using the hybrid-unit input-output tables for the Netherlands. The results are subsequently used in novel forecasting and backcasting scenario analyses for the period 1997-2030. The results show that dematerialization of iron and steel, and plastics, has generally not occurred in the recent past (1990-1997), and will not occur, under a wide variety of scenario assumptions, in the future (1997-2030)
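
    The SDA described in the thesis decomposes changes in material throughput across full input-output tables; as a minimal illustration of the underlying idea only, a two-factor additive decomposition (activity effect vs. intensity effect, with the cross term split symmetrically between the two factors) can be sketched as follows. The numbers are hypothetical:

    ```python
    def two_factor_decomposition(a0, i0, a1, i1):
        """Additive decomposition of the change in M = activity * intensity.

        Splits M1 - M0 exactly into an activity effect and an intensity
        effect, evaluating each factor's change at the average of the other
        factor's base- and end-year values (polar average weighting).
        """
        activity_effect = (a1 - a0) * (i0 + i1) / 2.0
        intensity_effect = (i1 - i0) * (a0 + a1) / 2.0
        return activity_effect, intensity_effect

    # Hypothetical data: output grows while material intensity falls.
    a0, i0 = 100.0, 2.0   # base-year activity, material intensity
    a1, i1 = 130.0, 1.8   # end-year activity, material intensity
    act, inten = two_factor_decomposition(a0, i0, a1, i1)
    ```

    The two effects sum exactly to the observed change in material use, which is the defining property an additive decomposition must satisfy; SDA extends the same bookkeeping to many factors and whole I/O matrices.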

  9. Fe catalysts for methane decomposition to produce hydrogen and carbon nano materials

    KAUST Repository

    Zhou, Lu; Enakonda, Linga Reddy; Harb, Moussab; Saih, Youssef; Aguilar Tapia, Antonio; Ould-Chikh, Samy; Hazemann, Jean-louis; Li, Jun; Wei, Nini; Gary, Daniel; Del-Gallo, Pascal; Basset, Jean-Marie

    2017-01-01

    Conducting catalytic methane decomposition over Fe catalysts is a green and economical route to produce H2 without CO/CO2 contamination. Fused 65wt% and impregnated 20wt% Fe catalysts were synthesized with different additives to investigate their activity, with the Fe-Al2O3 combination emerging as the best catalyst. Al2O3 is speculated to expose more Fe0 for the selective deposition of carbon nanotubes (CNTs). A fused Fe (65wt%)-Al2O3 sample was further investigated by means of H2-TPR, in-situ XRD, HRTEM and XAS, leading to the conclusion that 750°C is the optimal temperature for H2 pre-reduction and reaction to obtain high activity. Based on a density functional theory (DFT) study, a reaction mechanism over Fe catalysts was proposed to explain the formation of graphite from the decomposition of unstable supersaturated iron carbides. A carbon deposition model was further proposed that explains the formation of different carbon nanomaterials.

  10. Fe catalysts for methane decomposition to produce hydrogen and carbon nano materials

    KAUST Repository

    Zhou, Lu

    2017-02-21

    Conducting catalytic methane decomposition over Fe catalysts is a green and economical route to produce H2 without CO/CO2 contamination. Fused 65wt% and impregnated 20wt% Fe catalysts were synthesized with different additives to investigate their activity, with the Fe-Al2O3 combination emerging as the best catalyst. Al2O3 is speculated to expose more Fe0 for the selective deposition of carbon nanotubes (CNTs). A fused Fe (65wt%)-Al2O3 sample was further investigated by means of H2-TPR, in-situ XRD, HRTEM and XAS, leading to the conclusion that 750°C is the optimal temperature for H2 pre-reduction and reaction to obtain high activity. Based on a density functional theory (DFT) study, a reaction mechanism over Fe catalysts was proposed to explain the formation of graphite from the decomposition of unstable supersaturated iron carbides. A carbon deposition model was further proposed that explains the formation of different carbon nanomaterials.

  11. Effects of terrestrial isopods (Crustacea: Oniscidea) on leaf litter decomposition processes

    Directory of Open Access Journals (Sweden)

    Khaleid F. Abd El-Wakeil

    2015-03-01

    Full Text Available Leaf litter decomposition is carried out by the combined action of microorganisms and decomposer invertebrates such as earthworms, diplopods and isopods. The present work aimed to evaluate the impact of terrestrial isopods on the leaf litter decomposition process. In a laboratory experiment, food sources were prepared from oak and magnolia leaf litter. Air-dried leaf litter was cut into 9 mm discs, sterilized in an autoclave, soaked in distilled water or in water percolated through soil, and left to decompose for 2, 4 and 6 weeks. Twelve groups of two isopod species, Porcellio scaber and Armadillidium vulgare, were prepared, each containing 9 isopods, and fed individually on the prepared food for 2 weeks. The prepared foods differed in carbon stable isotope ratio (δ13C), C%, N% and C/N ratio. At the end of the experiment, isopods were dissected and separated into gut, gut content and the rest of the body. The δ13C values of the prepared food, faecal pellets, remaining food, gut content, gut and the rest of the isopod body were compared. The feeding activities of the two isopod species differed significantly among groups. Consumption and egestion ratios were higher for magnolia leaves than for oak leaves, and P. scaber consumed and egested more litter than A. vulgare. The present results suggest that the impact of isopods on decomposition processes is species- and litter-specific.

  12. Comparison of decomposition rates between autopsied and non-autopsied human remains.

    Science.gov (United States)

    Bates, Lennon N; Wescott, Daniel J

    2016-04-01

    Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine whether there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days (ADD) required to reach each decomposition stage was then compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two may be leading to differential decomposition. For this portion of the study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that the difference in decomposition rate between autopsied and non-autopsied remains was not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than for non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas.
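
    The accumulated degree days (ADD) metric used above to normalize decomposition time against temperature can be sketched as follows; the base temperature of 0 °C is an assumption for illustration, and the daily values are hypothetical:

    ```python
    def accumulated_degree_days(daily_mean_temps_c, base_c=0.0):
        """Accumulated degree days: sum of daily mean temperatures above a base.

        Days at or below the base temperature contribute zero; they are not
        subtracted from the running total.
        """
        return sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)

    # Hypothetical daily mean temperatures (°C) over six days,
    # including one day below the base.
    temps = [25.0, 28.0, 30.0, 22.0, -2.0, 18.0]
    add = accumulated_degree_days(temps)
    ```

    Comparing the ADD required to reach a given decomposition stage, rather than elapsed days, is what allows remains exposed in different seasons to be compared on a common scale.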

  13. The platinum catalysed decomposition of hydrazine in acidic media

    International Nuclear Information System (INIS)

    Ananiev, A.V.; Tananaev, I.G.; Brossard, Ph.; Broudic, J.C.

    2000-01-01

    A kinetic study of hydrazine decomposition in solutions of HClO4, H2SO4 and HNO3 in the presence of a Pt/SiO2 catalyst has been undertaken. It was shown that the kinetics of catalytic hydrazine decomposition in HClO4 and H2SO4 are identical: the process is determined by the heterogeneous catalytic auto-decomposition of N2H4 on the catalyst's surface. The platinum-catalysed hydrazine decomposition in nitric acid solutions is a complex process, including heterogeneous catalytic auto-decomposition of N2H4, reaction of hydrazine with catalytically generated nitrous acid, and catalytic oxidation of hydrazine by nitric acid. The kinetic parameters of these reactions have been determined. The contribution of each reaction to the total process is determined by the liquid-phase composition and by the temperature. (authors)

  14. Generalized decompositions of dynamic systems and vector Lyapunov functions

    Science.gov (United States)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  15. In situ XAS of the solvothermal decomposition of dithiocarbamate complexes

    NARCIS (Netherlands)

    Islam, H.-U.; Roffey, A.; Hollingsworth, N.; Catlow, R.; Wolthers, M.; de Leeuw, N.H.; Bras, W.; Sankar, G.; Hogarth, G.

    2012-01-01

    An in situ XAS study of the solvothermal decomposition of iron and nickel dithiocarbamate complexes was performed in order to gain understanding of the decomposition mechanisms. This work has given insight into the steps involved in the decomposition, showing variation in reaction pathways between

  16. Nitrogen addition, not initial phylogenetic diversity, increases litter decomposition by fungal communities.

    Science.gov (United States)

    Amend, Anthony S; Matulich, Kristin L; Martiny, Jennifer B H

    2015-01-01

    Fungi play a critical role in the degradation of organic matter. Because different combinations of fungi result in different rates of decomposition, determining how climate change will affect microbial composition and function is fundamental to predicting future environments. Fungal response to global change is patterned by genetic relatedness, resulting in communities with comparatively low phylogenetic diversity (PD). This may have important implications for the functional capacity of disturbed communities if lineages sensitive to disturbance also contain unique traits important for litter decomposition. Here we tested the relationship between PD and decomposition rates. Leaf litter fungi were isolated from the field and deployed in microcosms as mock communities along a gradient of initial PD, while species richness was held constant. Replicate communities were subject to nitrogen fertilization comparable to anthropogenic deposition levels. Carbon mineralization rates were measured over the course of 66 days. We found that nitrogen fertilization increased cumulative respiration by 24.8%, and that differences in respiration between fertilized and ambient communities diminished over the course of the experiment. Initial PD failed to predict respiration rates or their change in response to nitrogen fertilization, and there was no correlation between community similarity and respiration rates. Last, we detected no phylogenetic signal in the contributions of individual isolates to respiration rates. Our results suggest that the degree to which PD predicts ecosystem function will depend on environmental context.

  17. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E.

    2016-01-01

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former
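
For reference, the SVD route mentioned above can be sketched in a few lines of NumPy (the QDWH iteration itself is considerably more involved):

```python
import numpy as np

def polar_decomposition(A):
    """Polar decomposition A = U @ P via the SVD, where U is orthogonal
    and P is symmetric positive semi-definite."""
    W, s, Vt = np.linalg.svd(A)
    U = W @ Vt                      # orthogonal polar factor
    P = Vt.T @ np.diag(s) @ Vt      # symmetric positive semi-definite factor
    return U, P

A = np.array([[4.0, 1.0], [2.0, 3.0]])
U, P = polar_decomposition(A)
assert np.allclose(U @ P, A)
assert np.allclose(U.T @ U, np.eye(2))
```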

  18. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n c+1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
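
The concavity measure that drives cuts like these can be illustrated with a toy "pocket depth" computed against the convex hull; this is a simplified stand-in for the paper's relative-concavity measure, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.spatial import ConvexHull

def concavity(points):
    """Concavity of a 2D point set: the largest distance from any input
    point to the boundary of the set's convex hull (0 for convex sets)."""
    hull = ConvexHull(points)
    # hull.equations rows are [a, b, c] with a*x + b*y + c <= 0 inside and
    # (a, b) a unit outward normal, so -(a*x + b*y + c) is the distance
    # from an interior point to that facet's supporting line.
    slack = -(points @ hull.equations[:, :2].T + hull.equations[:, 2])
    return slack.min(axis=1).max()

square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], float)
notched = np.vstack([square, [[2.0, 0.5]]])  # one point pulled inward
assert concavity(square) < 1e-9
assert abs(concavity(notched) - 0.5) < 1e-9
```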

  20. Kinetic Studies on Enzyme-Catalyzed Reactions: Oxidation of Glucose, Decomposition of Hydrogen Peroxide and Their Combination

    Science.gov (United States)

    Tao, Zhimin; Raffel, Ryan A.; Souid, Abdul-Kader; Goodisman, Jerry

    2009-01-01

    The kinetics of the glucose oxidase-catalyzed reaction of glucose with O2, which produces gluconic acid and hydrogen peroxide, and the catalase-assisted breakdown of hydrogen peroxide to generate oxygen, have been measured via the rate of O2 depletion or production. The O2 concentrations in air-saturated phosphate-buffered salt solutions were monitored by measuring the decay of phosphorescence from a Pd phosphor in solution; the decay rate was obtained by fitting the tail of the phosphorescence intensity profile to an exponential. For glucose oxidation in the presence of glucose oxidase, the rate constant determined for the rate-limiting step was k = (3.0 ± 0.7) × 10⁴ M⁻¹s⁻¹ at 37°C. For catalase-catalyzed H2O2 breakdown, the reaction order in [H2O2] was somewhat greater than unity at 37°C and well above unity at 25°C, suggesting different temperature dependences of the rate constants for various steps in the reaction. The two reactions were combined in a single experiment: addition of glucose oxidase to glucose-rich cell-free media caused a rapid drop in [O2], and subsequent addition of catalase caused [O2] to rise and then decrease to zero. The best fit of [O2] to a kinetic model is obtained with the rate constants for glucose oxidation and peroxide decomposition equal to 0.116 s⁻¹ and 0.090 s⁻¹, respectively. Cellular respiration in the presence of glucose was found to be three times as rapid as that in glucose-deprived cells. Added NaCN inhibited O2 consumption completely, confirming that oxidation occurred in the cellular mitochondrial respiratory chain. PMID:19348778
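
The combined two-reaction experiment can be sketched as a pair of coupled first-order ODEs using the fitted rate constants quoted above; the simple two-pool structure and the initial O2 concentration are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order rate constants fitted for the combined experiment (s^-1)
k_ox, k_cat = 0.116, 0.090

def rates(t, y):
    o2, h2o2 = y
    # glucose oxidation consumes O2 and makes H2O2;
    # catalase returns half an O2 per H2O2 broken down
    return [-k_ox * o2 + 0.5 * k_cat * h2o2,
            k_ox * o2 - k_cat * h2o2]

o2_0 = 200.0   # assumed initial [O2], micromolar
sol = solve_ivp(rates, (0.0, 120.0), [o2_0, 0.0])
o2_final = sol.y[0, -1]

# Only half an O2 is recovered per cycle, so [O2] decays toward zero
assert 0.0 < o2_final < 20.0
```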

  1. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  2. Decomposition of dioxin analogues and ablation study for carbon nanotube

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko

    2002-01-01

    Two application studies associated with the free electron laser are presented separately, entitled 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation involves both thermal destruction and multiple-photon dissociation; choosing a strongly absorbed laser wavelength is important for efficient decomposition. Thermal decomposition takes place under irradiation at low IR laser power. On the basis of a thermal decomposition model, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues in addition to the thermal decomposition driven by direct laser absorption. The laser ablation study was performed with the aim of carbon nanotube synthesis. The vapor produced by ablation is weakly ionized at powers of several hundred megawatts. In an enclosed gas, the plasma internal energy is retained some 8.5 times longer than in vacuum. Clusters were produced from this weakly ionized gas in the enclosed atmosphere; irradiation at low laser power yielded coarser particles, whereas high power yielded fine particles. (J.P.N.)

  3. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Jae-Hyung Yoo; Eung-Ho Kim

    1999-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, experimental work on the photochemical decomposition of oxalate was carried out to prove its feasibility as a step in the partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed first in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found to be 0.003 mol/h under conditions of 0.5 M HNO3 at room temperature when a mercury lamp was used as the light source. (author)

  4. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types, and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \\aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987), and the class of submodels of a large sequentially homogeneous \\aleph_0-stable model (which is new).

  5. Forest products decomposition in municipal solid waste landfills

    International Nuclear Information System (INIS)

    Barlaz, Morton A.

    2006-01-01

    Cellulose and hemicellulose are present in paper and wood products and are the dominant biodegradable polymers in municipal waste. While their conversion to methane in landfills is well documented, there is little information on the rate and extent of decomposition of individual waste components, particularly under field conditions. Such information is important for the landfill carbon balance as methane is a greenhouse gas that may be recovered and converted to a CO2-neutral source of energy, while non-degraded cellulose and hemicellulose are sequestered. This paper presents a critical review of research on the decomposition of cellulosic wastes in landfills and identifies additional work that is needed to quantify the ultimate extent of decomposition of individual waste components. Cellulose to lignin ratios as low as 0.01-0.02 have been measured for well decomposed refuse, with corresponding lignin concentrations of over 80% due to the depletion of cellulose and resulting enrichment of lignin. Only a few studies have even tried to address the decomposition of specific waste components at field-scale. Long-term controlled field experiments with supporting laboratory work will be required to measure the ultimate extent of decomposition of individual waste components.

  6. The trait contribution to wood decomposition rates of 15 Neotropical tree species.

    Science.gov (United States)

    van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C

    2010-12-01

    The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr⁻¹. We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R² = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R² = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R² = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives, could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merit further testing in other forest ecosystems.
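
Fractional mass-loss rates such as the 0.01-0.31 yr⁻¹ range quoted above are the k of a single-exponential decay model, mass(t) = mass(0)·exp(-kt); estimating k from mass-loss data (synthetic and noise-free here) is a one-line log-linear regression:

```python
import numpy as np

# Synthetic mass-remaining fractions generated with k = 0.31 yr^-1
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mass_frac = np.exp(-0.31 * years)

# Slope of log(mass) vs. time gives -k
k_est = -np.polyfit(years, np.log(mass_frac), 1)[0]
assert abs(k_est - 0.31) < 1e-6

# Half-life implied by the fitted rate
halflife = np.log(2) / k_est
assert abs(halflife - np.log(2) / 0.31) < 1e-6
```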

  7. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    Science.gov (United States)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble to reduce the residue noise, and hence incur a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMFs evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computational cost, and the IMFs evaluation index can select the meaningful IMFs automatically.
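
The complementary-pair idea at the heart of CEEMD can be demonstrated without implementing the EMD sifting itself: noise added as (+n, -n) pairs cancels exactly when the pair is averaged, which is why CEEMD leaves less residue noise than EEMD at the same ensemble size. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)

# CEEMD adds white noise in complementary (+n, -n) pairs so the added
# noise cancels when the two ensemble members are averaged.
noise = 0.2 * rng.standard_normal(t.size)
member_plus = signal + noise
member_minus = signal - noise
reconstruction = 0.5 * (member_plus + member_minus)

# Residue noise is exactly zero for a complementary pair, whereas plain
# EEMD only reduces it as 1/sqrt(ensemble size).
assert np.allclose(reconstruction, signal)
```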

  8. Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films

    International Nuclear Information System (INIS)

    Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.

    2013-01-01

    We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder decomposition; however, yttria and all intermediates but YF3 appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films

  9. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of the characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoised DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.
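
The mutual-information criterion used to keep or discard components can be sketched with a simple histogram estimator (a generic estimator, not necessarily the one used in the paper): components sharing much information with the original map are retained, noise-dominated ones dropped.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of mutual information (nats) between two
    equally sized 1D samples (flatten images before calling)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(1)
s = rng.standard_normal(10000)
noise = rng.standard_normal(10000)
# A copy of the signal shares far more information with it than noise does
assert mutual_information(s, s) > mutual_information(s, noise)
```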

  10. Joint Matrices Decompositions and Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.

    2014-01-01

    Vol. 31, No. 3 (2014), pp. 34-43. ISSN 1053-5888. R&D Projects: GA ČR GA102/09/1278. Institutional support: RVO:67985556. Keywords: joint matrices decomposition * tensor decomposition * blind source separation. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 5.852, year: 2014. http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf
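
A minimal instance of joint matrix diagonalization for blind source separation is the AMUSE-style generalized eigendecomposition of a zero-lag and a lagged covariance matrix; the sources and mixing matrix below are invented for illustration:

```python
import numpy as np
from scipy.linalg import eigh

n = 20000
t = np.arange(n)
# Two spectrally distinct sources and a fixed, invertible mixing matrix
S = np.vstack([np.sin(0.01 * t), np.sin(0.05 * t + 1.0)])
A = np.array([[1.0, 0.6], [0.4, 1.2]])
X = A @ S   # observed mixtures

def lagged_cov(X, lag):
    """Symmetrized covariance between X(t) and X(t + lag)."""
    if lag == 0:
        C = X @ X.T / X.shape[1]
    else:
        C = X[:, :-lag] @ X[:, lag:].T / (X.shape[1] - lag)
    return 0.5 * (C + C.T)

# Jointly diagonalize the zero-lag and lagged covariance matrices with
# one generalized eigendecomposition (exact for two matrices).
_, V = eigh(lagged_cov(X, 5), lagged_cov(X, 0))
Y = V.T @ X   # recovered sources, up to order and scale

corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
assert corr.max(axis=1).min() > 0.95   # each output matches one source
```

With more than two target matrices no exact joint diagonalizer exists in general, which is where the approximate joint-decomposition algorithms surveyed in the paper come in.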

  11. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    Science.gov (United States)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  12. A review of plutonium oxalate decomposition reactions and effects of decomposition temperature on the surface area of the plutonium dioxide product

    International Nuclear Information System (INIS)

    Orr, R.M.; Sims, H.E.; Taylor, R.J.

    2015-01-01

    Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or ‘finishing’ processes that are used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residues treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of intermediate phases is of fundamental interest whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO2 product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles. - Highlights: • Critical review of plutonium oxalate decomposition reactions. • New analysis of relationship between SSA and calcination temperature. • New SEM

  13. Chemistry of decomposition of freshwater wetland sedimentary organic material during ramped pyrolysis

    Science.gov (United States)

    Williams, E. K.; Rosenheim, B. E.

    2011-12-01

    Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e. the chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first-order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to the introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework developed based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (i.e. cellulose, lignin, plant fatty acids, etc.) often found in sedimentary
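
The Gaussian decomposition of a thermograph mentioned above amounts to fitting a sum of Gaussian components to the temperature vs. CO2-evolved curve; a sketch on synthetic data (the peak positions, widths, and amplitudes are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(T, a1, mu1, s1, a2, mu2, s2):
    """Two-component Gaussian model of a thermograph (CO2 evolved vs. T)."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((T - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

T = np.linspace(100.0, 800.0, 400)             # temperature ramp, deg C
true_params = (8.0, 330.0, 40.0, 5.0, 520.0, 60.0)
co2 = two_gaussians(T, *true_params)           # synthetic thermograph

p0 = (5.0, 300.0, 50.0, 5.0, 550.0, 50.0)      # rough initial guesses
popt, _ = curve_fit(two_gaussians, T, co2, p0=p0)

# The fitted peak temperatures recover the two underlying components
assert np.allclose(sorted(popt[1::3]), [330.0, 520.0], atol=0.5)
```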

  14. Litter Decomposition in a Semiarid Dune Grassland: Neutral Effect of Water Supply and Inhibitory Effect of Nitrogen Addition.

    Directory of Open Access Journals (Sweden)

    Yulin Li

    The decomposition of plant material in arid ecosystems is considered to be substantially controlled by water and N availability. The responses of litter decomposition to external N and water, however, remain controversial, and the interactive effects of supplementary N and water also have been largely unexamined. A 3.5-year field experiment with supplementary nitrogen and water was conducted to assess the effects of N and water addition on mass loss and nitrogen release in leaves and fine roots of three dominant plant species (i.e., Artemisia halondendron, Setaria viridis, and Phragmites australis) with contrasting substrate chemistry (e.g. N concentration, lignin content in this study) in a desertified dune grassland of Inner Mongolia, China. The treatments included N addition, water addition, a combination of N and water, and an untreated control. The decomposition rate in both leaves and roots was related to the initial litter N and lignin concentrations of the three species. However, litter quality did not explain the slower mass loss in roots than in leaves in the present study, and this thus warrants further research. Nitrogen addition, either alone or in combination with water, significantly inhibited dry mass loss and N release in the leaves and roots of the three species, whereas water input had little effect on the decomposition of leaf litter and fine roots, suggesting that there was no interactive effect of supplementary N and water on litter decomposition in this system. Furthermore, our results clearly indicate that the inhibitory effects of external N on dry mass loss and nitrogen release are relatively strong in high-lignin litter compared with low-lignin litter. These findings suggest that increasing precipitation hardly facilitates ecosystem carbon turnover but atmospheric N deposition can enhance carbon sequestration and nitrogen retention in desertified dune grasslands of northern China. Additionally, litter quality of plant species

  15. Litter Decomposition in a Semiarid Dune Grassland: Neutral Effect of Water Supply and Inhibitory Effect of Nitrogen Addition.

    Science.gov (United States)

    Li, Yulin; Ning, Zhiying; Cui, Duo; Mao, Wei; Bi, Jingdong; Zhao, Xueyong

    2016-01-01

    The decomposition of plant material in arid ecosystems is considered to be substantially controlled by water and N availability. The responses of litter decomposition to external N and water, however, remain controversial, and the interactive effects of supplementary N and water also have been largely unexamined. A 3.5-year field experiment with supplementary nitrogen and water was conducted to assess the effects of N and water addition on mass loss and nitrogen release in leaves and fine roots of three dominant plant species (i.e., Artemisia halondendron, Setaria viridis, and Phragmites australis) with contrasting substrate chemistry (e.g. N concentration, lignin content in this study) in a desertified dune grassland of Inner Mongolia, China. The treatments included N addition, water addition, a combination of N and water, and an untreated control. The decomposition rate in both leaves and roots was related to the initial litter N and lignin concentrations of the three species. However, litter quality did not explain the slower mass loss in roots than in leaves in the present study, and this thus warrants further research. Nitrogen addition, either alone or in combination with water, significantly inhibited dry mass loss and N release in the leaves and roots of the three species, whereas water input had little effect on the decomposition of leaf litter and fine roots, suggesting that there was no interactive effect of supplementary N and water on litter decomposition in this system. Furthermore, our results clearly indicate that the inhibitory effects of external N on dry mass loss and nitrogen release are relatively strong in high-lignin litter compared with low-lignin litter. These findings suggest that increasing precipitation hardly facilitates ecosystem carbon turnover but atmospheric N deposition can enhance carbon sequestration and nitrogen retention in desertified dune grasslands of northern China. Additionally, litter quality of plant species should be considered

  16. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    Science.gov (United States)

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil of human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.
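
The richness, diversity, and evenness metrics tracked in this study are standard functions of a taxon-abundance vector; a sketch of their computation (Shannon H' and Pielou's evenness):

```python
import numpy as np

def community_metrics(counts):
    """Taxon richness, Shannon diversity H', and Pielou's evenness
    (H' / ln richness) for a vector of taxon abundances."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    richness = len(counts)
    shannon = float(-(p * np.log(p)).sum())
    return richness, shannon, shannon / np.log(richness)

# An even community maximizes evenness; a skewed one scores lower
r1, h1, e1 = community_metrics([25, 25, 25, 25])
r2, h2, e2 = community_metrics([85, 5, 5, 5])
assert r1 == r2 == 4
assert abs(e1 - 1.0) < 1e-12
assert e2 < e1
```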

  17. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    Science.gov (United States)

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to designed sequences that could differ from the optimal sequence by more than 30% of their amino acids.
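    The branch-and-bound idea behind such exact methods can be illustrated on a toy decomposable energy (unary plus pairwise terms over a few positions). Everything below is hypothetical, not Talaris2014 terms or the paper's algorithm: the bound simply adds the best-case unary term of each unassigned position, which is admissible here because all pairwise terms are nonnegative:

```python
# Toy CPD-style problem: 3 positions, 2 rotamer choices each (hypothetical energies).
unary = [[1.0, 2.0], [0.5, 0.2], [0.3, 0.9]]
pairwise = {(0, 1): [[0.0, 1.5], [0.4, 0.0]],
            (1, 2): [[0.2, 0.0], [0.0, 0.7]]}

def solve(n_rot, unary, pairwise):
    n = len(unary)
    best = [float("inf"), None]  # incumbent energy and assignment

    def partial_energy(assign):
        k = len(assign)
        e = sum(unary[i][assign[i]] for i in range(k))
        for (i, j), tbl in pairwise.items():
            if i < k and j < k:
                e += tbl[assign[i]][assign[j]]
        return e

    def bound(assign):
        # Admissible lower bound: fixed-part energy plus the best unary term
        # of every free position (valid here since pairwise terms are >= 0).
        return partial_energy(assign) + sum(
            min(unary[i]) for i in range(len(assign), n))

    def dfs(assign):
        if len(assign) == n:
            e = partial_energy(assign)
            if e < best[0]:
                best[0], best[1] = e, tuple(assign)
            return
        if bound(assign) >= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        for r in range(n_rot):
            dfs(assign + [r])

    dfs([])
    return best[0], best[1]

print(solve(2, unary, pairwise))  # global minimum-energy assignment
```

    The methods in the record additionally exploit arc consistency and tree decomposition to tighten the bound and decompose the search, which is what makes 10^234-sized spaces tractable.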

  18. Object attributes combine additively in visual search

    OpenAIRE

    Pramod, R. T.; Arun, S. P.

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in in...
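    A sketch of what an additive combination rule looks like in practice: total perceived dissimilarity modeled as a weighted sum of per-attribute difference channels. The attribute names, difference scores, and weights below are hypothetical stand-ins, not the paper's fitted values:

```python
# Hypothetical per-attribute difference scores between two objects (0..1 scale)
attribute_diffs = {
    "local_contours": 0.8,
    "feature_relations": 0.3,
    "internal_details": 0.5,
    "global_properties": 0.2,
}

# Hypothetical channel weights, e.g. as fitted by linear regression
# against observed visual-search dissimilarities
weights = {
    "local_contours": 1.0,
    "feature_relations": 0.6,
    "internal_details": 0.4,
    "global_properties": 0.9,
}

# The additive rule: each attribute contributes independently to the total
dissimilarity = sum(weights[a] * d for a, d in attribute_diffs.items())
print(round(dissimilarity, 2))
```

    The appeal of such a model is that it is linear in its channels, so the contribution of each attribute can be estimated and tested independently.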

  19. Sensitivity analysis of six soil organic matter models applied to the decomposition of animal manures and crop residues

    Directory of Open Access Journals (Sweden)

    Daniele Cavalli

    2016-09-01

    Two features distinguishing soil organic matter simulation models are the type of kinetics used to calculate pool decomposition rates, and the algorithm used to handle the effects of nitrogen (N) shortage on carbon (C) decomposition. Compared to the widely used first-order kinetics, Monod kinetics represent organic matter decomposition more realistically, because they relate decomposition to both substrate and decomposer size. Most models impose a fixed C to N ratio for microbial biomass. When the N required by microbial biomass to decompose a given amount of substrate-C is larger than soil available N, carbon decomposition rates are limited proportionally to the N deficit (N inhibition hypothesis). Alternatively, C-overflow was proposed as a way of getting rid of excess C by allocating it to a storage pool of polysaccharides. We built six models to compare the combinations of three decomposition kinetics (first-order, Monod, and reverse Monod) and two ways to simulate the effect of N shortage on C decomposition (N inhibition and C-overflow). We conducted sensitivity analysis to identify the model parameters that most affected CO2 emissions and soil mineral N during a simulated 189-day laboratory incubation assuming constant water content and temperature. We evaluated the sensitivity of model outputs at different stages of organic matter decomposition in a soil amended with three inputs of increasing C to N ratio: liquid manure, solid manure, and low-N crop residue. Only a few model parameters and their interactions were responsible for consistent variations of CO2 and soil mineral N. These parameters were mostly related to microbial biomass and to the partitioning of applied C among input pools, as well as their decomposition constants. In addition, in models with Monod kinetics, CO2 was also sensitive to variation of the half-saturation constants. C-overflow enhanced pool decomposition compared to the N inhibition hypothesis when N shortage occurred.
Accumulated C in the
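    The contrast between the two kinetic laws can be sketched with a simple Euler integration over the record's 189-day incubation horizon. All rate constants below are hypothetical, and splitting decomposed C between biomass growth and respired CO2 with a fixed efficiency Y is a simplification of what such models track:

```python
def simulate(days, dt, kinetics, C0=1000.0, B0=20.0,
             k=0.05, mu_max=0.5, Ks=300.0, Y=0.4):
    """Euler integration of substrate-C decay (all parameters hypothetical).

    first-order: dC/dt = -k * C                      (substrate only)
    Monod:       dC/dt = -mu_max * B * C / (Ks + C)  (substrate AND decomposers)

    A fraction Y of decomposed C builds microbial biomass B;
    the remainder is respired as CO2.
    """
    C, B, co2 = C0, B0, 0.0
    for _ in range(int(days / dt)):
        if kinetics == "first-order":
            dC = -k * C
        else:  # Monod: rate scales with decomposer biomass B as well
            dC = -mu_max * B * C / (Ks + C)
        C += dC * dt
        B += -Y * dC * dt
        co2 += -(1.0 - Y) * dC * dt
    return C, B, co2

print(simulate(189, 0.1, "first-order"))
print(simulate(189, 0.1, "monod"))
```

    Under Monod kinetics the rate accelerates as biomass grows, which is one reason the record finds CO2 sensitive to the half-saturation constant Ks in those model variants.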

  20. Wavelet decomposition and neuro-fuzzy hybrid system applied to short-term wind power

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Jimenez, L.A.; Mendoza-Villena, M. [La Rioja Univ., Logrono (Spain). Dept. of Electrical Engineering; Ramirez-Rosado, I.J.; Abebe, B. [Zaragoza Univ., Zaragoza (Spain). Dept. of Electrical Engineering

    2010-03-09

    Wind energy has become increasingly popular as a renewable energy source. However, the integration of wind farms into electrical power systems presents several problems, including the chaotic fluctuation of wind flow, which results in highly variable power generation from a wind farm. An accurate forecast of wind power generation has important consequences for the economic operation of the integrated power system. This paper presented a new statistical short-term wind power forecasting model based on wavelet decomposition and neuro-fuzzy systems optimized with a genetic algorithm. The paper discussed wavelet decomposition; the proposed wind power forecasting model; and computer results. The original time series, the mean electric power generated in a wind farm, was decomposed into wavelet coefficients that were utilized as inputs for the forecasting model. The forecasting results obtained with the final models were compared to those obtained with traditional forecasting models, showing better performance for all forecasting horizons. 13 refs., 1 tab., 4 figs.
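    A wavelet decomposition of a power time series can be sketched with the Haar wavelet, the simplest case (the record does not specify the wavelet family used, so this is purely illustrative). Each level splits the series into a smoothed approximation and detail coefficients; those coefficients would then serve as inputs to a forecasting model:

```python
def haar_step(signal):
    """One level of the Haar DWT: pairwise averages and differences."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
    return approx, detail

def haar_decompose(signal, levels):
    """Multi-level DWT: detail coefficients per level, then final approximation."""
    coeffs, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

def haar_reconstruct(coeffs):
    """Invert haar_decompose exactly (the Haar transform is orthogonal)."""
    approx = list(coeffs[-1])
    for detail in reversed(coeffs[:-1]):
        out = []
        for a, d in zip(approx, detail):
            out.append((a + d) / 2 ** 0.5)
            out.append((a - d) / 2 ** 0.5)
        approx = out
    return approx

series = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]  # hypothetical mean-power values
coeffs = haar_decompose(series, levels=3)
print([len(c) for c in coeffs])  # → [4, 2, 1, 1]
```

    Because the transform is invertible, no information is lost: coarse levels capture the slow trend and fine levels the rapid fluctuations, letting the downstream model treat each scale separately.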