Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate treatment of the singularities requires the use of adapted meshes. After having shown the limits of classical meshes, we present the prospects offered by hybrid and flexible meshes. Next, we consider how the numerical schemes traditionally used in reservoir simulation can be generalized, and we identify two suitable approaches: mixed finite elements and U-finite volumes. Since the investigated phenomena are also characterized by different time scales, special treatment of the time discretization on different parts of the domain is required. We think that the combination of domain decomposition methods with operator-splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows greater flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)
International Nuclear Information System (INIS)
Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which
International Nuclear Information System (INIS)
Odry, Nans
2016-01-01
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Core-sized problems are so challenging for computers that dedicated core calculations have had no choice but to rely on simplifying assumptions (assembly-scale then core-scale steps). This PhD work aims at overcoming some of these approximations: thanks to major advances in computer architecture and capacity (HPC), one can nowadays solve 3D core-sized problems using both a highly refined mesh and the transport operator. This is an essential step towards performing, in the future, reference calculations with deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k_eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performance of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the
Spatial domain decomposition for neutron transport problems
International Nuclear Information System (INIS)
Yavuz, M.; Larsen, E.W.
1989-01-01
A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)
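The effect described above can be reproduced in a minimal 1D sketch: source iteration with diamond-difference sweeps performed independently on each subdomain, with interface angular fluxes lagged by one iteration. The grid, cross sections and the two-subdomain split below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

nx, L = 60, 6.0                      # cells, slab width (mean free paths)
dx = L / nx
sigma_t, sigma_s, q = 1.0, 0.5, 1.0  # total xsec, scattering xsec, source

def sweep(phi_cells, psi_in, direction):
    """Diamond-difference transport sweep for one direction (mu = +/-1)."""
    S = 0.5 * (sigma_s * phi_cells + q)   # per-direction isotropic source
    psi_c = np.empty_like(phi_cells)
    order = range(len(phi_cells)) if direction > 0 else range(len(phi_cells) - 1, -1, -1)
    edge = psi_in
    for i in order:
        out = (S[i] + edge * (1/dx - sigma_t/2)) / (1/dx + sigma_t/2)
        psi_c[i] = 0.5 * (edge + out)
        edge = out
    return psi_c, edge                    # cell fluxes and outgoing edge flux

def source_iteration(subdomains, iters=400):
    """SI with sweeps done independently per subdomain; interface
    angular fluxes are lagged by one iteration."""
    phi = np.zeros(nx)
    bc = [[0.0, 0.0] for _ in subdomains]     # incoming flux [mu=+1, mu=-1]
    for _ in range(iters):
        out_r, out_l = {}, {}
        for k, cells in enumerate(subdomains):
            psi_p, out_r[k] = sweep(phi[cells], bc[k][0], +1)
            psi_m, out_l[k] = sweep(phi[cells], bc[k][1], -1)
            phi[cells] = psi_p + psi_m
        for k in range(len(subdomains)):      # exchange interface fluxes
            bc[k][0] = out_r[k - 1] if k > 0 else 0.0                    # vacuum at x=0
            bc[k][1] = out_l[k + 1] if k < len(subdomains) - 1 else 0.0  # vacuum at x=L
    return phi

cells = np.arange(nx)
phi_full = source_iteration([cells])                           # single domain
phi_dd = source_iteration([cells[:nx // 2], cells[nx // 2:]])  # two subdomains
err = np.max(np.abs(phi_full - phi_dd))
```

With subdomains several mean free paths thick, the decomposed iteration converges to the same fixed point as the single-domain sweep, consistent with the paper's observation.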
Domain decomposition method for solving elliptic problems in unbounded domains
International Nuclear Information System (INIS)
Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.
1991-01-01
Computational aspects of the box domain decomposition (DD) method for solving boundary value problems in an unbounded domain are discussed. A new variant of the DD method for elliptic problems in unbounded domains is suggested. It is based on a partitioning of the unbounded domain adapted to the given asymptotic decay of the unknown function at infinity. A comparison of computational costs is given for the boundary integral method and the suggested DD algorithm. 29 refs.; 2 figs.; 2 tabs
Multilevel domain decomposition for electronic structure calculations
International Nuclear Information System (INIS)
Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.
2007-01-01
We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
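A hedged numerical sketch of the partition-of-unity idea for a model 1D heat equation: the operator is split as A = A1 + A2 with A_k = diag(chi_k) A and chi_1 + chi_2 = 1, so one time step solves two "regional" implicit problems instead of one global one. The mesh, weights and step sizes are assumptions for illustration; the paper's schemes and stability analysis are more general.

```python
import numpy as np

n = 40
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1 - dx, n)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / dx**2

# partition of unity: chi1 ramps from 1 to 0 across the overlap [0.4, 0.6]
chi1 = np.clip((0.6 - x) / 0.2, 0.0, 1.0)
chi2 = 1.0 - chi1
A1 = np.diag(chi1) @ A                  # "regional" operators, A = A1 + A2
A2 = np.diag(chi2) @ A
I = np.eye(n)

tau, steps = 2e-4, 100
u_split = np.sin(np.pi * x)             # initial condition
u_ref = u_split.copy()
M_ref = np.linalg.inv(I + tau * A)      # global implicit Euler step
M1 = np.linalg.inv(I + tau * A1)        # "local" solve weighted to subdomain 1
M2 = np.linalg.inv(I + tau * A2)        # "local" solve weighted to subdomain 2
for _ in range(steps):
    u_split = M2 @ (M1 @ u_split)       # first-order regionally additive step
    u_ref = M_ref @ u_ref

err = np.max(np.abs(u_split - u_ref))   # splitting error vs global implicit Euler
```

The split scheme tracks the global implicit Euler solution to first order in the time step, which is the flavour of the first-order schemes whose stability the paper analyzes.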
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are essential for the solution of challenging practical problems. The practicability and efficiency of the presented methods are illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
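As a flavour of the parareal algorithms surveyed in the volume, here is a minimal sketch for a scalar ODE u' = lam*u: a cheap coarse propagator G is corrected by fine propagations F that can run in parallel across time slices. The propagators and step counts are illustrative assumptions.

```python
lam, T, N = -2.0, 2.0, 10          # decay rate, horizon, number of time slices
dT = T / N

def G(u):                          # coarse propagator: one backward Euler step
    return u / (1.0 - lam * dT)

def F(u, m=100):                   # fine propagator: m forward Euler substeps
    dt = dT / m
    for _ in range(m):
        u += dt * lam * u
    return u

U_fine = [1.0]                     # serial fine solution (the target)
for _ in range(N):
    U_fine.append(F(U_fine[-1]))

U = [1.0] + [0.0] * N              # parareal iterate
for nn in range(N):                # initial coarse pass
    U[nn + 1] = G(U[nn])
for _ in range(5):                 # parareal corrections
    F_old = [F(U[nn]) for nn in range(N)]   # fine sweeps: parallel across slices
    G_old = [G(U[nn]) for nn in range(N)]
    for nn in range(N):                     # cheap sequential coarse correction
        U[nn + 1] = G(U[nn]) + F_old[nn] - G_old[nn]

err = max(abs(U[nn] - U_fine[nn]) for nn in range(N + 1))
```

After a handful of iterations the parareal iterate agrees with the serial fine solution, while the expensive fine sweeps are embarrassingly parallel in time.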
Domain decomposition methods for fluid dynamics
International Nuclear Information System (INIS)
Clerc, S.
1995-01-01
A domain decomposition method for steady-state, subsonic fluid dynamics calculations is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is placed on the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs
Domain decomposition multigrid for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-01-01
A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.
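A minimal sketch of such a two-level preconditioner: overlapping local solves plus a Galerkin coarse correction P (P^T A P)^{-1} P^T, applied inside preconditioned CG. A 1D Poisson model with smooth coefficients is assumed here; the paper's setting is unstructured meshes where coefficient discontinuities need not match the subdomain interfaces.

```python
import numpy as np

n = 63
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)
x = np.linspace(h, 1 - h, n)

# overlapping subdomain blocks of unknowns
blocks = [np.arange(max(0, s - 4), min(n, s + 20)) for s in range(0, n, 16)]

# coarse space: hat functions on 7 interior coarse nodes
nc = 7
xc = np.linspace(0, 1, nc + 2)[1:-1]
P = np.zeros((n, nc))
for j in range(nc):
    P[:, j] = np.clip(1 - np.abs(x - xc[j]) / (xc[1] - xc[0]), 0, 1)
Ac = P.T @ A @ P                               # Galerkin coarse-level operator

def M_inv(r):
    """Two-level additive Schwarz: coarse correction plus local solves."""
    z = P @ np.linalg.solve(Ac, P.T @ r)
    for idx in blocks:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

def pcg(M_inv, tol=1e-8, maxit=100):
    u = np.zeros(n); r = b.copy()
    z = M_inv(r); p = z.copy(); rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return u, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u, maxit

u, iters = pcg(M_inv)
err = np.max(np.abs(u - np.linalg.solve(A, b)))
```

The coarse Galerkin correction is what makes the iteration count insensitive to the number of subdomains, which is the point of two-level methods.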
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems involving large-scale data have recently gained attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated, and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Domain decomposition and multilevel integration for fermions
International Nuclear Information System (INIS)
Ce, Marco; Giusti, Leonardo; Schaefer, Stefan
2016-01-01
The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximated domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.
Domain decomposition methods and parallel computing
International Nuclear Information System (INIS)
Meurant, G.
1991-01-01
In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems encountered in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
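The combination described, conjugate gradients preconditioned by independent subdomain solves, can be sketched as a block-Jacobi toy on a small 2D Poisson problem. The sizes and the strip-wise decomposition are assumptions; on a real parallel machine each strip's solve would run on its own processor.

```python
import numpy as np

m = 16                                   # m x m interior grid
n = m * m
h = 1.0 / (m + 1)
T = np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)
A = (np.kron(np.eye(m), T) + np.kron(T, np.eye(m))) / h**2   # 5-point Laplacian
b = np.ones(n)

# non-overlapping decomposition into 4 strips of grid rows; each local
# problem is factored once and applied independently per CG iteration
strips = [np.arange(k * (m // 4) * m, (k + 1) * (m // 4) * m) for k in range(4)]
local_inv = [np.linalg.inv(A[np.ix_(s, s)]) for s in strips]

def cg(M_inv, tol=1e-8, maxit=500):
    u = np.zeros(n); r = b.copy()
    z = M_inv(r); p = z.copy(); rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return u, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u, maxit

def dd_precond(r):                        # independent subdomain solves
    z = np.zeros_like(r)
    for s, Ai in zip(strips, local_inv):
        z[s] = Ai @ r[s]
    return z

u_plain, it_plain = cg(lambda r: r)       # plain CG
u_dd, it_dd = cg(dd_precond)              # DD-preconditioned CG
```

The preconditioner application is embarrassingly parallel across subdomains, and it typically reduces the iteration count relative to plain CG.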
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane
2010-01-01
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation
Domain decomposition methods and deflated Krylov subspace iterations
Nabben, R.; Vuik, C.
2006-01-01
The balancing Neumann-Neumann (BNN) and the additive coarse grid correction (BPS) preconditioner are fast and successful preconditioners within domain decomposition methods for solving partial differential equations. For certain elliptic problems these preconditioners lead to condition numbers which
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
NASA Contractor Report 189709; ICASE Report No. 92-44 (DTIC AD-A256 575). Domain Decomposition: A Bridge between Nature and Parallel Computers. [Only fragments of the abstract survive extraction: a citation to "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal on Scientific and Statistical Computing, Vol. 13, 1992, and a note that a 103,201-unknown 2D elliptic problem was solved in 1990 with the tile algorithm (Ref. 38) on distributed-memory multiprocessors.]
Domain decomposition method for solving the neutron diffusion equation
International Nuclear Information System (INIS)
Coulomb, F.
1989-03-01
The aim of this work is to study methods for solving the neutron diffusion equation; we are interested in methods based on a classical finite element discretization and well suited for use on parallel computers. Domain decomposition methods seem to answer this preoccupation. This study deals with a decomposition of the domain. A theoretical study is carried out for Lagrange finite elements and some examples are given; in the case of mixed dual finite elements, the study is based on examples [fr
A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Mei-qun Jiang; Pei-liang Dai
2006-01-01
A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in R^N (N = 2, 3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented naturally on parallel machines with simple local communications.
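A 1D toy version of a Robin-type transmission iteration conveys the flavour of such nonoverlapping procedures. The Stokes setting and the paper's specific two-parameter condition are replaced here, as an assumption for illustration, by a Poisson model problem with a single Robin parameter: each half-domain is solved with a Robin condition at the interface, using data formed from the other side's previous solve.

```python
import numpy as np

n = 100                    # uniform grid on [0,1], interface at node n//2
h = 1.0 / n
M = n // 2
p = 1.0                    # Robin transmission parameter

def robin_solve(g):
    """-u'' = 1 on half the domain: u = 0 at the outer end, and the Robin
    condition du/dn + p*u = g (one-sided difference) at the interface."""
    A = np.zeros((M, M))
    rhs = np.ones(M)
    for i in range(M - 1):                       # interior second differences
        A[i, i] = 2 / h**2
        if i > 0:
            A[i, i - 1] = -1 / h**2
        A[i, i + 1] = -1 / h**2
    A[M - 1, M - 1] = 1 / h + p                  # Robin row at the interface
    A[M - 1, M - 2] = -1 / h
    rhs[M - 1] = g
    return np.linalg.solve(A, rhs)

gL = gR = 0.0
for _ in range(60):                              # parallel (Jacobi-type) updates
    uL = robin_solve(gL)                         # left half: nodes 1..M, interface last
    uR = robin_solve(gR)                         # right half, mirrored ordering
    gL = (uR[-2] - uR[-1]) / h + p * uR[-1]      # transmission data from the right
    gR = (uL[-2] - uL[-1]) / h + p * uL[-1]      # transmission data from the left

xL = np.arange(1, M + 1) * h
err = np.max(np.abs(uL - 0.5 * xL * (1 - xL)))   # exact solution x(1-x)/2
jump = abs(uL[-1] - uR[-1])                      # interface mismatch
```

At convergence the two interface traces coincide and the interface fluxes balance; the accuracy here is limited only by the first-order one-sided flux at the interface.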
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The application of the Monte Carlo method in reactor physics analysis is somewhat restricted due to the excessive memory demand in solving large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule', which means problems are divided into different sub-domains to be dealt with separately, and rules are established to make sure the whole results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
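The 'divide and rule' idea, sub-domains treated separately with rules that keep the whole result correct, can be illustrated with a toy 1D Monte Carlo transport sketch (not the RMC algorithms themselves). Each particle carries its own RNG stream and its remaining optical path across domain faces, so a particle's history is independent of which domain processes it, and the decomposed run reproduces the single-domain tallies exactly.

```python
import numpy as np

L_slab, sigma_t, p_abs = 4.0, 1.0, 0.5   # slab width, total xsec, P(absorption)

def fly(state, lo, hi):
    """Track one particle inside [lo, hi] until it is absorbed, leaks,
    or crosses a domain face (keeping its remaining path and RNG)."""
    x, mu, s, rng = state
    while True:
        if s is None:
            s = rng.exponential(1.0 / sigma_t)   # distance to next collision
        d_face = (hi - x) if mu > 0 else (x - lo)
        if s < d_face:                           # collision inside the domain
            x += mu * s
            s = None
            if rng.random() < p_abs:
                return 'absorb', None
            if rng.random() < 0.5:               # rescatter: maybe flip direction
                mu = -mu
        else:                                    # reach a domain face first
            x = hi if mu > 0 else lo
            s -= d_face
            if x == 0.0 or x == L_slab:
                return 'leak', None
            return 'cross', (x, mu, s, rng)

def owner(domains, x, mu):
    """The domain a particle at x moving in direction mu belongs to."""
    return next(k for k, (lo, hi) in enumerate(domains)
                if lo < x < hi or (x == lo and mu > 0) or (x == hi and mu < 0))

def run(domains, n_particles=500):
    tallies = {'absorb': 0, 'leak': 0}
    banks = [[] for _ in domains]
    for i in range(n_particles):
        rng = np.random.default_rng(1000 + i)    # per-particle RNG stream
        mu = rng.choice([-1.0, 1.0])
        banks[owner(domains, 2.0, mu)].append((2.0, mu, None, rng))
    while any(banks):                            # process banks, exchange crossers
        for k, (lo, hi) in enumerate(domains):
            bank, banks[k] = banks[k], []
            for state in bank:
                event, new = fly(state, lo, hi)
                if event == 'cross':
                    banks[owner(domains, new[0], new[1])].append(new)
                else:
                    tallies[event] += 1
    return tallies

t_full = run([(0.0, 4.0)])                       # single domain
t_dd = run([(0.0, 2.0), (2.0, 4.0)])             # two spatial domains
```

The agreement is exact, tally for tally, because the decomposition changes only where histories are computed, not the random-number draws that define them.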
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis, the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated by ...
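The inverse-filtering step can be sketched for a single-degree-of-freedom system with a known frequency response function: the response spectrum X(w) = H(w) F(w) is divided by H(w) to recover the load. The synthetic data and parameters below are assumptions, not from the paper.

```python
import numpy as np

m, c, k = 1.0, 0.8, 200.0                  # mass, damping, stiffness
fs, n = 256.0, 4096                        # sampling rate, number of samples
t = np.arange(n) / fs
rng = np.random.default_rng(0)
f_true = np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.standard_normal(n)   # "unknown" load

w = 2 * np.pi * np.fft.rfftfreq(n, 1.0 / fs)
H = 1.0 / (k - m * w**2 + 1j * c * w)      # receptance FRF (assumed known)

x_resp = np.fft.irfft(H * np.fft.rfft(f_true), n)    # simulated measured response
f_est = np.fft.irfft(np.fft.rfft(x_resp) / H, n)     # inverse filtering
err = np.max(np.abs(f_est - f_true))
```

With a noise-free response and a nowhere-vanishing FRF the load is recovered almost exactly; in practice, measurement noise and near-zeros of H make the inversion ill-conditioned and call for regularization.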
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan; Kolmbauer, Michael; Langer, Ulrich
2010-01-01
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
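The switch from time integration to a frequency-domain system can be illustrated on a small linear model M u' + K u = f cos(w t): the time-periodic steady state u(t) = uc cos(w t) + us sin(w t) follows from one block-linear solve for the amplitudes, instead of stepping until the transient dies out. The matrices below are assumed examples; the paper treats finite element systems and handles nonlinearities via Fourier coefficients.

```python
import numpy as np

Mm = np.diag([1.0, 2.0])                   # mass-like matrix (assumed example)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])   # stiffness-like matrix
f = np.array([1.0, 0.5])                   # cosine-excitation amplitude
w = 2.0                                    # excitation frequency

# amplitude system:  [ K     w*M ] [uc]   [f]
#                    [-w*M   K   ] [us] = [0]
Z = np.block([[K, w * Mm], [-w * Mm, K]])
uc, us = np.split(np.linalg.solve(Z, np.concatenate([f, np.zeros(2)])), 2)

# verification: implicit Euler time stepping until the transient has decayed
dt = 1e-3
step = np.linalg.inv(Mm + dt * K)
u = np.zeros(2)
n_steps = 60000
for i in range(1, n_steps + 1):
    u = step @ (Mm @ u + dt * f * np.cos(w * i * dt))
t_end = n_steps * dt
err = np.max(np.abs(u - (uc * np.cos(w * t_end) + us * np.sin(w * t_end))))
```

One sparse block solve replaces tens of thousands of time steps, which is exactly the competitive advantage the abstract describes.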
International Nuclear Information System (INIS)
Haeberlein, F.
2011-01-01
Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods that have proved to be high-performing algorithms in many fields of applications is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for the use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method on the presented multi-species reactive transport system. (author)
Remote-sensing image encryption in hybrid domains
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites, and they always contain some confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
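A hedged sketch of the spatial-domain stage of such a scheme: a logistic-map keystream drives both a permutation (standing in here, as a simplifying assumption, for the paper's PWLCM-based sorting of DWT low-pass coefficients, whose transform stage is omitted) and an XOR diffusion. Decryption regenerates the keystream from the key and reverses both stages.

```python
import numpy as np

def keystream(key, n):
    """Logistic-map keystream: a permutation (by sorting the chaotic
    sequence) and a byte stream for XOR diffusion."""
    x, r = key
    xs = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)                # logistic map iteration
        xs[i] = x
    return np.argsort(xs), (xs * 255).astype(np.uint8)

def encrypt(img, key):
    perm, stream = keystream(key, img.size)
    return (img.flatten()[perm] ^ stream).reshape(img.shape)

def decrypt(enc, key):
    perm, stream = keystream(key, enc.size)
    flat = enc.flatten() ^ stream          # undo the XOR diffusion
    out = np.empty_like(flat)
    out[perm] = flat                       # invert the chaotic permutation
    return out.reshape(enc.shape)

key = (0.3731, 3.99)                       # (x0, r): the secret key
img = np.random.default_rng(7).integers(0, 256, size=(16, 16), dtype=np.uint8)
enc = encrypt(img, key)
dec = decrypt(enc, key)
```

Sensitivity to the key comes from the chaotic map: a tiny change in x0 or r yields an entirely different permutation and byte stream.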
Domain decomposition methods for the neutron diffusion problem
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2010-01-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems, and has demonstrated its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)
Domain decomposition methods for core calculations using the MINOS solver
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2007-01-01
Cell-by-cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems
Directory of Open Access Journals (Sweden)
Pierre Jolivet
2014-01-01
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it is shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
Domain decomposition methods for solving an image problem
Energy Technology Data Exchange (ETDEWEB)
Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)
1994-12-31
The domain decomposition method is a technique for breaking up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations and are therefore particularly suited to solving such equations. In this paper, the authors apply the so-called covering preconditioner, which is based on information about the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image which has been degraded by a known convolution process and additive Gaussian noise.
Europlexus: a domain decomposition method in explicit dynamics
International Nuclear Information System (INIS)
Faucher, V.; Hariddh, Bung; Combescure, A.
2003-01-01
Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas, but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition to divide the global problem into several independent ones, to which a reduced-size interface problem is added to ensure the connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the benefits of our approach. (authors)
International Nuclear Information System (INIS)
Berthe, P.M.
2013-01-01
In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection-diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme in space and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi-domain scheme is equivalent to the mono-domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi-domain scheme is shown. The latter is solved by an OSWR algorithm, whose convergence is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author) [fr
A TFETI domain decomposition solver for elastoplastic problems
Czech Academy of Sciences Publication Activity Database
Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.
2014-01-01
Roč. 231, č. 1 (2014), s. 634-653 ISSN 0096-3003 Institutional support: RVO:68145535 Keywords : elastoplasticity * Total FETI domain decomposition method * Finite element method * Semismooth Newton method Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014 http://ac.els-cdn.com/S0096300314000253/1-s2.0-S0096300314000253-main.pdf?_tid=33a29cf4-996a-11e3-8c5a-00000aacb360&acdnat=1392816896_4584697dc26cf934dcf590c63f0dbab7
Non-linear scalable TFETI domain decomposition based contact algorithm
Czech Academy of Sciences Publication Activity Database
Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.
2010-01-01
Roč. 10, č. 1 (2010), s. 1-10 ISSN 1757-8981. [World Congress on Computational Mechanics/9./. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf
Neutron transport solver parallelization using a Domain Decomposition method
International Nuclear Information System (INIS)
Van Criekingen, S.; Nataf, F.; Have, P.
2008-01-01
A domain decomposition (DD) method is investigated for the parallel solution of the second-order even-parity form of the time-independent Boltzmann transport equation. The spatial discretization is performed using finite elements, and the angular discretization using spherical harmonic expansions (P_N method). The main idea developed here is due to P.L. Lions. It consists in having sub-domains exchange not only interface point flux values, but also interface flux 'derivative' values. (The word 'derivative' is used with quotes because, in the case considered here, it in fact consists of the Ω·∇ operator, with Ω the angular variable vector and ∇ the spatial gradient operator.) A parameter α is introduced as the proportionality coefficient between point flux and 'derivative' values. This parameter can be tuned - so far heuristically - to optimize the method. (authors)
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method; other simple technical tools include the "local-global" and "global-local" techniques, the former for constructing a subspace preconditioner based on a preconditioner on the whole space, and the latter for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods are presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
B-spline Collocation with Domain Decomposition Method
International Nuclear Information System (INIS)
Hidayat, M I P; Parman, S; Ariwahjoedi, B
2013-01-01
A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which simply reduces to a Bezier approximation of any degree p with C^0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method is more prominent in domains of large dimension, and the increased number of collocation points may also lead to ill-conditioning problems. In this study, an overlapping domain decomposition based on the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate its influence on the B-spline basis orders required for a given accuracy. It is shown that the combination method produces higher accuracy with a B-spline basis of much lower order than that needed in the original global method. Hence, the approximation stability of the B-spline collocation method is also increased.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt at addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments showing the successful application of the algorithm to the restoration of 1D signals and 2D images in interpolation/inpainting problems, and to a compressed sensing problem, recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM: (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
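The classical SAM the abstract contrasts against can be sketched on a 1D Poisson problem: two overlapping subdomains alternately solve local Dirichlet problems, each taking its artificial boundary value from the other's latest iterate. All sizes below are illustrative choices, not from the talk.

```python
import numpy as np

# Classical Schwarz alternating method for -u'' = 1, u(0) = u(1) = 0,
# with exact solution u(x) = x(1-x)/2.  Subdomains [0, 0.6] and
# [0.4, 1] overlap on [0.4, 0.6]; a larger overlap converges faster.
N = 200; h = 1.0 / N
ia, ib = 80, 120                      # artificial boundaries x=0.4, x=0.6
x = np.linspace(0.0, 1.0, N + 1)
exact = lambda t: t * (1.0 - t) / 2.0

def dirichlet_solve(n, ul, ur):
    """Solve -u'' = 1 on n+1 nodes with step h and endpoint values ul, ur."""
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1) - np.diag(np.ones(n - 2), -1))
    b = np.full(n - 1, h * h)
    b[0] += ul; b[-1] += ur
    u = np.empty(n + 1); u[0], u[-1] = ul, ur
    u[1:-1] = np.linalg.solve(A, b)
    return u

u = np.zeros(N + 1)
for it in range(30):
    # Omega_1 = [0, 0.6]: Dirichlet data at x=0.6 from the latest iterate
    u[:ib + 1] = dirichlet_solve(ib, 0.0, u[ib])
    # Omega_2 = [0.4, 1]: Dirichlet data at x=0.4 from the fresh Omega_1 solve
    u[ia:] = dirichlet_solve(N - ia, u[ia], 0.0)

err = np.max(np.abs(u - exact(x)))
```

For this 1D problem the interface error contracts by the factor (a/b)·((1-b)/(1-a)) = 4/9 per double sweep, so 30 sweeps reduce it to roundoff level; shrinking the overlap moves this factor toward 1, which is the sensitivity the abstract refers to.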
Simulation of two-phase flows by domain decomposition
International Nuclear Information System (INIS)
Dao, T.H.
2013-01-01
This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. Firstly, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to Newton's method for solving nonlinear systems, our schemes are conservative. Unfortunately, the resolution of nonlinear systems is very expensive, so it is essential to use an efficient algorithm to solve these systems. For large matrices, we often use iterative methods whose convergence depends on the spectrum. We have studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU preconditioner, our strategy has significantly reduced the GMRES iterations for local systems and the computation time. We also show some satisfactory results for low-Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We have proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using a GMRES iterative solver rather than Richardson iterations for the interface system also provides better performance compared to other methods. We can also decompose the computational domain into any number of sub-domains. Moreover, the Scaling strategy for the interface system has improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with classical distributed computing, we have shown that our method is more robust and efficient. (author) [fr
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
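The Lloyd iteration at the heart of the CVT step can be sketched on a sampled point cloud: assign every sample to its nearest generator, move each generator to the centroid of its cell, and repeat. This is a toy sketch (random points in the unit square, a handful of generators), not the paper's particle-dynamics implementation; it only illustrates the monotone decrease of the CVT energy that the Lloyd algorithm guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))          # samples of the domain (unit square)
gen = rng.random((8, 2))             # 8 generators ~ 8 partitioning subdomains

def cvt_energy(pts, gen):
    """Discrete CVT energy: sum of squared distances to nearest generator."""
    d2 = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

energies = [cvt_energy(pts, gen)]
for it in range(20):
    # Voronoi assignment: each point joins its nearest generator's cell
    d2 = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    # Centroid update: each generator moves to its cell's mean
    for k in range(len(gen)):
        cell = pts[owner == k]
        if len(cell):                # an empty cell keeps its generator
            gen[k] = cell.mean(axis=0)
    energies.append(cvt_energy(pts, gen))
```

Both half-steps (reassignment and centroid move) can only lower the energy, which is the monotone decrease property the paper relies on for compact subdomains.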
Energy Technology Data Exchange (ETDEWEB)
Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second-order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on the decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and of the mesh step size.
Domain decomposition parallel computing for transient two-phase flow of nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed Sparse Row (CSR) format is adopted, which is one of the standard representations of a sparse asymmetric matrix: only the non-zero values and their positions (row and column) are stored. By performing the verification for the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40,000 meshes for the coarse system, 320,000 meshes for the mid-size system, and 2,560,000 meshes for the fine system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: diagonal and incomplete LU. To further enhance parallel performance, hybrid OpenMP/MPI parallel computing for the pressure solver was examined; the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
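The CSR layout mentioned above can be sketched in a few lines: three arrays (values, column indices, row pointers) replace the dense matrix, and a matrix-vector product walks each row's slice of the value array. The matrix below is a toy example, not CUPID data.

```python
import numpy as np

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

# Build the three CSR arrays: non-zero values, their column indices,
# and row_ptr[i]..row_ptr[i+1] delimiting row i's entries in `values`.
values, col_idx, row_ptr = [], [], [0]
for row in A:
    for j, v in enumerate(row):
        if v != 0.0:
            values.append(v)
            col_idx.append(j)
    row_ptr.append(len(values))

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using only the CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

x = np.array([1.0, 2.0, 3.0])
```

For the toy matrix this stores 5 values instead of 9; for the 2,560,000-cell systems above, where almost all entries are zero, the saving is what makes the solver's memory footprint manageable.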
Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates
Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida
2015-02-01
This paper studied the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA) model. The MYR was pegged to the USD during the Asian financial crisis, fixing the exchange rate at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen in this paper are the post-July 2005 data, from August 2005 to July 2010. The comparative study using root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random walk benchmark model.
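The hybrid pipeline is decompose, forecast each component, recombine, then score with RMSE and MAE. The sketch below only illustrates that structure on synthetic data: a moving-average split stands in for EMD and a drift extrapolation stands in for ARIMA (both deliberate simplifications, not the paper's methods), with a last-value forecast as the random-walk benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)
# Synthetic exchange-rate-like series: level + slow trend + cycle + noise
series = 3.8 + 0.002 * t + 0.05 * np.sin(t / 6) + 0.01 * rng.standard_normal(120)
train, test = series[:110], series[110:]

def moving_average(x, w=11):
    """Centered moving average with edge padding (stand-in for EMD's
    smooth component)."""
    pad = np.r_[np.full(w // 2, x[0]), x, np.full(w // 2, x[-1])]
    return np.convolve(pad, np.ones(w) / w, mode="valid")

trend = moving_average(train)          # smooth component
resid = train - trend                  # oscillatory component

def drift_forecast(x, horizon):
    """Linear drift extrapolation (stand-in for a per-component ARIMA)."""
    slope = (x[-1] - x[0]) / (len(x) - 1)
    return x[-1] + slope * np.arange(1, horizon + 1)

h = len(test)
# Recombine: forecast each component separately, then sum
hybrid = drift_forecast(trend, h) + np.full(h, resid[-10:].mean())
naive = np.full(h, train[-1])          # random-walk benchmark

rmse = lambda y, f: np.sqrt(np.mean((y - f) ** 2))
mae = lambda y, f: np.mean(np.abs(y - f))
```

The paper's comparison then reduces to evaluating `rmse` and `mae` for the hybrid, the single model, and the benchmark on the held-out window.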
International Nuclear Information System (INIS)
Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted
2017-01-01
This paper introduces a new approach for the forecasting of solar radiation series at 1 h ahead. We investigated several techniques for the multiscale decomposition of clear sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model), choosing the forecasting method adaptively according to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting with the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
Parallel algorithms for nuclear reactor analysis via domain decomposition method
International Nuclear Information System (INIS)
Kim, Yong Hee
1995-02-01
In this thesis, the neutron diffusion equation of reactor physics is discretized by the finite difference method and solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams, multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides a convergence analysis and improvements to the convergence of the algorithm. The convergence of the parallel Schwarz algorithm with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case, and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions, and that the theoretically best choice is the mixed boundary conditions (MM conditions). It is also shown that there may be a significant discrepancy between the continuous-model and discrete-model analyses. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and that any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
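The relaxation result above has a simple scalar picture: if each unrelaxed sweep multiplies the interface error by a factor rho, then relaxing the pseudo-boundary data as g_new = (1-omega)·g_old + omega·g_candidate changes that factor to 1 - omega·(1 - rho). The sketch below uses illustrative numbers, not values from the thesis.

```python
def relaxed_factor(rho, omega):
    """Error amplification per sweep after relaxation: with unrelaxed
    factor rho, the relaxed iteration contracts by 1 - omega*(1 - rho)."""
    return 1.0 - omega * (1.0 - rho)

rho = -0.5                                     # negative (oscillatory) rate
plain = abs(relaxed_factor(rho, 1.0))          # omega = 1: no relaxation
under = abs(relaxed_factor(rho, 2.0 / 3.0))    # optimal omega = 1/(1 - rho)
over = abs(relaxed_factor(rho, 1.3))           # over-relaxation

# For negative rho, under-relaxation accelerates (here to exact
# convergence); over-relaxation decelerates, matching the analysis above.
```

For a positive rho the same formula gives |1 - omega·(1 - rho)| > rho for any omega other than 1, which is the thesis's second claim.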
Domain decomposition techniques for boundary elements application to fluid flow
Brebbia, C A; Skerget, L
2007-01-01
The sub-domain techniques in the BEM are nowadays finding their place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. We see their main application in conjunction with the classical BEM approach, which is based on a single domain: part of the domain is solved using the single-domain approach (the classical BEM), and part is solved using a domain approach (the BEM sub-domain technique). This has usually been done in the past by coupling the BEM with the FEM; however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single-domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and where oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.
International Nuclear Information System (INIS)
Guerin, P.
2007-12-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has demonstrated its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and of the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2D and 3D cores. (author)
Modal Identification from Ambient Responses using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
Modal Identification from Ambient Responses Using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, Lingmi; Andersen, Palle
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical...
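The core of frequency domain decomposition is an SVD of the output spectral density matrix at each frequency: near a resonance, the first singular value peaks and the first singular vector approximates the mode shape. The sketch below applies this to a synthetic two-channel "ambient" response with known mode shapes (used only to verify the estimate); all signal parameters are toy choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 256, 8192
t = np.arange(n) / fs
phi1 = np.array([1.0, 0.8])          # true mode shapes, used only to
phi2 = np.array([1.0, -0.6])         # check the identified shape below
q1 = np.sin(2 * np.pi * 12.0 * t + rng.random())   # modal responses
q2 = np.sin(2 * np.pi * 43.0 * t + rng.random())
y = (np.outer(phi1, q1) + np.outer(phi2, q2)
     + 0.01 * rng.standard_normal((2, n)))         # measured channels

# Welch-style averaged cross-spectral density matrix G(f)
nseg, nfft = 16, 512
G = np.zeros((nfft // 2 + 1, 2, 2), dtype=complex)
for s in range(nseg):
    seg = y[:, s * nfft:(s + 1) * nfft] * np.hanning(nfft)
    Y = np.fft.rfft(seg)
    G += np.einsum('if,jf->fij', Y, Y.conj())      # G[f] = Y(f) Y(f)^H
G /= nseg

# SVD of G at every frequency; the first singular value is the "modal"
# spectrum, and its peak locates the dominant resonance.
svals = np.linalg.svd(G, compute_uv=False)[:, 0]
peak = svals.argmax()
U = np.linalg.svd(G[peak])[0]
shape = np.real(U[:, 0] / U[0, 0])   # estimated mode shape, normalized
```

At the peak bin the spectral matrix is close to rank one, so the first singular vector recovers the shape [1.0, 0.8] of the dominant 12 Hz mode.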
International Nuclear Information System (INIS)
Tang Shaojie; Tang Xiangyang
2012-01-01
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection-domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection-domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft-thresholding objective function, the projection-domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection-domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection-domain multiscale PWLS method outperforms the projection-domain single-scale PWLS method and the image-domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method preserves image sharpness very well while avoiding "salt-and-pepper" noise and mosaic artifacts. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
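The load-balancing idea above can be sketched in a few lines: assign particles to the nearest Voronoi site, then displace each site along a heuristic descent direction computed from the load imbalance with its neighbours. The cost function, step size, and descent direction below are illustrative stand-ins, not the authors' actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
particles = rng.random((2000, 2))      # toy 2D particle positions
sites = rng.random((8, 2))             # one Voronoi site per processor/sub-domain

def loads(sites, particles):
    # nearest-site assignment defines the Voronoi cells
    d = np.linalg.norm(particles[:, None, :] - sites[None, :, :], axis=2)
    return np.bincount(d.argmin(axis=1), minlength=len(sites))

def balance_step(sites, particles, eta=0.05):
    n = loads(sites, particles)
    target = n.mean()
    new_sites = sites.copy()
    for i in range(len(sites)):
        # heuristic descent: move toward heavier neighbours (to capture
        # some of their particles) and away from lighter ones
        diff = (n - n[i]).astype(float)
        vec = sites - sites[i]
        dist = np.linalg.norm(vec, axis=1)
        mask = dist > 0
        grad = (diff[mask, None] * vec[mask] / dist[mask, None] ** 2).sum(axis=0)
        step = eta * grad / target
        new_sites[i] += np.clip(step, -0.1, 0.1)   # cap the displacement
    return new_sites

before = loads(sites, particles).std()
for _ in range(50):
    sites = balance_step(sites, particles)
after = loads(sites, particles).std()
print(before, after)   # spread of per-cell loads, before vs after balancing
```

In the paper's setting each Voronoi cell is owned by one MPI task and the cost function reflects measured per-step compute time rather than raw particle counts.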
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear ones. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
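The multiharmonic idea, truncating the Fourier series of a periodic solution and solving for the coefficients, can be illustrated on a scalar model problem. The ODE, the truncation order, and the collocation approach below are illustrative choices, not the paper's finite element setting.

```python
import numpy as np
from scipy.optimize import fsolve

# Harmonic-balance solve of the model problem u'(t) + u(t) + u(t)^3 = cos(t),
# u 2*pi-periodic: truncate the Fourier series and balance at collocation points.
N = 7                                   # harmonics kept in the truncation
M = 2 * N + 1                           # unknowns = collocation points
t = 2 * np.pi * np.arange(M) / M        # one period, uniformly sampled
k = np.arange(N + 1)

def eval_u(c, t):
    a, b = c[:N + 1], np.r_[0.0, c[N + 1:]]      # cosine / sine coefficients
    ct, st = np.cos(np.outer(t, k)), np.sin(np.outer(t, k))
    u = ct @ a + st @ b
    du = st @ (-k * a) + ct @ (k * b)            # term-by-term derivative
    return u, du

def residual(c):                         # balance u' + u + u^3 = cos(t)
    u, du = eval_u(c, t)
    return du + u + u**3 - np.cos(t)

c = fsolve(residual, np.zeros(M))
print(np.abs(residual(c)).max())         # harmonics balanced at collocation points
```

The paper's solvers target the much larger Jacobi systems that arise when each Fourier coefficient is itself a finite element function.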
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
International Nuclear Information System (INIS)
Girardi, E.; Ruggieri, J.M.
2003-01-01
The aim of this paper is to present the latest developments of a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, handled by two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented together with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinate Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation, and the resulting algorithm is provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
Parallel finite elements with domain decomposition and its pre-processing
International Nuclear Information System (INIS)
Yoshida, A.; Yagawa, G.; Hamada, S.
1993-01-01
This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)
Exterior domain problems and decomposition of tensor fields in weighted Sobolev spaces
Schwarz, Günter
1996-01-01
The Hodge decomposition is a useful tool for tensor analysis on compact manifolds with boundary. This paper aims at generalising the decomposition to exterior domains G ⊂ R^n. Let L^2_a Ω^k(G) be the space of weighted square-integrable differential forms with weight function (1 + |x|^2)^a, let d_a be the weighted perturbation of the exterior derivative and δ_a its adjoint. Then L^2_a Ω^k(G) splits into the orthogonal sum of the subspaces of the d_a-exact forms with vanishi...
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
Collaborative Social Innovation in the Hybrid Domain
Aoyama, Yuko; Parthasarathy, Balaji
2017-01-01
Part 6: Critical Perspectives on ICT and Open Innovation for Development; International audience; What are the institutional attributes that support the use of ICTs for social innovation? Based on the concept of the ‘hybrid domain’, we seek to better understand how various stakeholders with different priorities collaborate, combine economic and social objectives, and reconceptualize multi-stakeholder collaborative governance in the Global South. Using insights from behavioral economics and so...
Energy Technology Data Exchange (ETDEWEB)
Salque, B
1998-07-01
This work deals with the radiosity equation, which describes the transport of light energy through a diffuse medium; its resolution enables us to simulate the presence of light sources. The radiosity equation is an integral equation which admits a unique solution in realistic cases. The different solution methods are reviewed. The radiosity equation cannot be formulated as the integral form of a classical partial differential equation, but this work shows that the technique of domain decomposition can be successfully applied to it, provided the approach is framed by physical considerations. This method provides a system of independent equations, valid for each sub-domain, whose main parameter is the luminance. Some numerical examples give an idea of the convergence of the algorithm. The method is applied to the optimization of the shape of a light reflector.
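The discrete radiosity system B = E + ρ F B and a sub-domain-wise sweep of the kind described above can be sketched as follows; the form-factor matrix and reflectances are toy values, not data from the thesis.

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
F = rng.random((n, n))                 # toy form factors (not physical data)
np.fill_diagonal(F, 0.0)
F /= F.sum(axis=1, keepdims=True)      # closed enclosure: rows sum to 1
rho = np.full(n, 0.5)                  # diffuse reflectances < 1
E = np.zeros(n); E[0] = 1.0            # a single light-source patch

# split the patches into two sub-domains; each sweep solves its own
# equations using the other sub-domain's luminances from the last iterate
dom = [np.arange(0, 3), np.arange(3, 6)]
B = np.zeros(n)
for _ in range(200):
    B_old = B.copy()
    for idx in dom:
        B[idx] = E[idx] + rho[idx] * (F[idx] @ B_old)

direct = np.linalg.solve(np.eye(n) - rho[:, None] * F, E)
print(np.abs(B - direct).max())        # agrees with the global direct solve
```

Because the reflectances are below one, the sub-domain iteration is a contraction and converges to the same luminances as the global solve.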
International Nuclear Information System (INIS)
Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.
2009-01-01
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and the uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov-type algorithm (BiCGSTAB). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)
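The core ingredients, Robin interface conditions with a Schwarz (Jacobi-style) data exchange, can be illustrated on a 1D model problem. The equation, mesh, and Robin parameter below are illustrative; the thesis treats far more general basin models.

```python
import numpy as np

# Non-overlapping Schwarz with Robin interface conditions for -u'' = 1 on
# (0, 1), u(0) = u(1) = 0, split at x = 0.5.
N = 50; h = 0.5 / N; p = 3.0   # mesh and Robin parameter (illustrative)

def solve_half(g):
    """-u'' = 1 on (0, 0.5), u(0) = 0, with Robin data g at the interface:
    u'(0.5) + p*u(0.5) = g (one-sided difference for u')."""
    A = np.zeros((N + 1, N + 1)); b = np.zeros(N + 1)
    A[0, 0] = 1.0                          # Dirichlet at the outer end
    for i in range(1, N):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2
        b[i] = 1.0
    A[N, N] = 1.0 / h + p; A[N, N - 1] = -1.0 / h; b[N] = g
    return np.linalg.solve(A, b)

# the right half mirrors the left, so one solver serves both sub-domains;
# outgoing Robin data follows from the identity 2*p*u(interface) - incoming
g1 = g2 = 0.0
for _ in range(30):
    u1, u2 = solve_half(g1), solve_half(g2)
    g1, g2 = 2 * p * u2[N] - g2, 2 * p * u1[N] - g1
print(u1[N])   # approaches u(0.5) = 0.125 of the exact solution x*(1-x)/2
```

The Robin parameter p controls the contraction factor of the exchange; replacing the Jacobi-style sweep by a Krylov solve of the interface problem, as in the thesis, accelerates this further.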
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh in describing a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
Moussawi, Ali
2015-02-24
Summary: The post-treatment of 3D displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
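The sequential kernel being multitasked, an FFT-based fast Poisson solver, can be sketched as follows: a discrete sine transform diagonalizes the 5-point Laplacian on a square with homogeneous Dirichlet conditions. The mesh size and test solution are illustrative.

```python
import numpy as np
from scipy.fft import dstn, idstn

# FFT-based fast Poisson solver: -laplacian(u) = f on the unit square,
# u = 0 on the boundary, using the type-1 discrete sine transform.
n = 63; h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
f = (np.pi**2 + 4 * np.pi**2) * u_exact        # -laplacian of u_exact

# eigenvalues of the 1D Dirichlet second-difference operator
lam = 2.0 * (1.0 - np.cos(np.pi * np.arange(1, n + 1) / (n + 1))) / h**2

fhat = dstn(f, type=1)                         # sine transform of the rhs
uhat = fhat / (lam[:, None] + lam[None, :])    # divide by the eigenvalues
u = idstn(uhat, type=1)
print(np.abs(u - u_exact).max())               # O(h^2) discretization error
```

In the paper this kernel runs once per subdomain, so the subdomain solves (and the transforms within each) are natural units for macrotasking and microtasking respectively.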
International Nuclear Information System (INIS)
Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D.; Mostacci, Domiziano
2011-01-01
This paper deals with finding accurate solutions for photon transport problems in highly heterogeneous media quickly, efficiently, and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performances are evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations that are relevant for remote sensing applications.
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Czech Academy of Sciences Publication Activity Database
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2010-01-01
Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Directory of Open Access Journals (Sweden)
Cécile Germain‐Renaud
1999-01-01
Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: the Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
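The exact Schur complement reduction that the paper approximates can be shown on a small model: with the unknowns reordered as [subdomain, interface], block elimination leaves a dense interface system. A 1D Poisson matrix with a single interface node keeps the sketch minimal.

```python
import numpy as np

# Two-subdomain Schur complement reduction for a 1D Poisson matrix: order
# the unknowns as [subdomain 1, subdomain 2, interface] and eliminate the
# subdomain blocks exactly, leaving a (here 1x1) interface system.
n = 11                                  # odd: one interface node in the middle
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

mid = n // 2
sub = np.r_[0:mid, mid + 1:n]           # subdomain unknowns
itf = np.array([mid])                   # interface unknowns

A_ss, A_si = A[np.ix_(sub, sub)], A[np.ix_(sub, itf)]
A_is, A_ii = A[np.ix_(itf, sub)], A[np.ix_(itf, itf)]

# Schur complement S = A_ii - A_is A_ss^{-1} A_si couples the subdomains
S = A_ii - A_is @ np.linalg.solve(A_ss, A_si)
g = b[itf] - A_is @ np.linalg.solve(A_ss, b[sub])
u_itf = np.linalg.solve(S, g)                          # interface solve
u_sub = np.linalg.solve(A_ss, b[sub] - A_si @ u_itf)   # back-substitution

u = np.empty(n); u[itf] = u_itf; u[sub] = u_sub
print(np.abs(A @ u - b).max())          # matches the direct solution of A u = b
```

The paper's contribution is precisely to avoid forming S exactly at scale, replacing it with cheap approximations that still work as a preconditioner.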
International Nuclear Information System (INIS)
Azmy, Y.Y.
1997-01-01
The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128-node Paragon XP/S 5 computer, and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message-passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning-tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation process of the scalar flux from the angular flux and of the convergence test; (3) a distributed-memory version of the Bucket algorithm that pushes the spatial domain decomposition one step farther by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays. Otherwise, the third scheme becomes necessary, at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)]
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (Poisson equation) with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator into subdomain node groups and then block-eliminate the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
Construction of hybrid peptide synthetases by module and domain fusions.
Mootz, H D; Schwarzer, D; Marahiel, M A
2000-05-23
Nonribosomal peptide synthetases are modular enzymes that assemble peptides of diverse structures and important biological activities. Their modular organization provides a great potential for the rational design of novel compounds by recombination of the biosynthetic genes. Here we describe the extension of a dimodular system to trimodular ones based on whole-module fusion. The recombinant hybrid enzymes were purified to monitor product assembly in vitro. We started from the first two modules of tyrocidine synthetase, which catalyze the formation of the dipeptide dPhe-Pro, to construct such hybrid systems. Fusion of the second, proline-specific module with the ninth and tenth modules of the tyrocidine synthetases, specific for ornithine and leucine, respectively, resulted in dimodular hybrid enzymes exhibiting the combined substrate specificities. The thioesterase domain was fused to the terminal module. Upon incubation of these dimodular enzymes with the first tyrocidine module, TycA, incorporating dPhe, the predicted tripeptides dPhe-Pro-Orn and dPhe-Pro-Leu were obtained at rates of 0.15 min(-1) and 2.1 min(-1). The internal thioesterase domain was necessary and sufficient to release the products from the hybrid enzymes and thereby facilitate a catalytic turnover. Our approach of whole-module fusion is based on an improved definition of the fusion sites and overcomes the recently discovered editing function of the intrinsic condensation domains. The stepwise construction of hybrid peptide synthetases from catalytic subunits reinforces the inherent potential for the synthesis of novel, designed peptides.
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Directory of Open Access Journals (Sweden)
Changyun Liu
2017-01-01
A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented and the resource scheduling problem is decomposed into subtasks; the sensor resource scheduling problem thus becomes a matching problem between sensors and subtasks. Secondly, a resource match optimization model based on the sensor resources and tasks is established, which considers several factors such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively, based on improved update rules for particle velocity and position through a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm is better in terms of convergence speed, searching capability, solution accuracy, and efficiency.
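A plain binary PSO with the standard sigmoid velocity-to-probability mapping, the baseline that the paper's MBPSO modifies, can be sketched as follows. The fitness function (a simple benefit/load trade-off) and all parameters are illustrative; the doubt factor and modified sigmoid of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_particles, iters = 12, 20, 100
benefit = rng.random(n_bits)               # detection benefit per assignment
load = rng.random(n_bits)                  # resource load per assignment

def fitness(x):                            # maximize benefit under a load cap
    return benefit @ x - 10.0 * max(0.0, load @ x - 3.0)

X = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
V = np.zeros((n_particles, n_bits))
pbest = X.copy(); pfit = np.array([fitness(x) for x in X])
gbest = pbest[pfit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    prob = 1.0 / (1.0 + np.exp(-V))        # sigmoid: velocity -> P(bit = 1)
    X = (rng.random(X.shape) < prob).astype(float)
    fit = np.array([fitness(x) for x in X])
    improved = fit > pfit
    pbest[improved] = X[improved]; pfit[improved] = fit[improved]
    gbest = pbest[pfit.argmax()].copy()

print(fitness(gbest))   # best assignment found under the load constraint
```

The paper's modifications aim precisely at the known weakness of this baseline: the plain sigmoid rule tends to stagnate, which the doubt factor counteracts.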
An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods
International Nuclear Information System (INIS)
Zhang Hongbo; Wu Hongchun; Cao Liangzhi
2011-01-01
Highlights: We convert MOC into a linear system solved by GMRES as an acceleration method. We use a domain decomposition method to overcome the inefficiency on large matrices. Parallel technology is applied and a matched ray tracing system is developed. Results show good efficiency even in large-scale and strong-scattering problems. The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate the 2D generalized-geometry characteristics solver AutoMOC. In this technique, a linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction in geometry treatment, it is suitable for accelerating an arbitrary-geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES, and the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are considered adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains on demand conveniently. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering
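The acceleration idea, recasting the fixed-point sweep x = Mx + b as the linear system (I - M)x = b and handing it to GMRES, can be sketched with a generic contraction M standing in for one transport sweep (AutoMOC itself is not public here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 200
M = rng.random((n, n))
M *= 0.95 / np.abs(np.linalg.eigvals(M)).max()   # contraction: sweeps converge
b = rng.random(n)

# action of (I - M); in a real solver this would invoke one transport sweep
A = LinearOperator((n, n), matvec=lambda x: x - M @ x)
x, info = gmres(A, b, atol=1e-12)

# plain source iteration for comparison
y = np.zeros(n)
for _ in range(500):
    y = M @ y + b

print(info, np.abs(x - M @ x - b).max())   # info == 0 means GMRES converged
```

GMRES typically needs far fewer matrix-vector products (sweeps) than the fixed-point iteration when the contraction factor is close to one, which is the regime of strong scattering described in the abstract.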
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
International Nuclear Information System (INIS)
Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit
2017-01-01
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method, based on the popular Enhanced Frequency Domain Decomposition technique, for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found, and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputers are also discussed.
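A minimal sketch of the two-dimensional decomposition itself (function names and the 144 × 73 grid are our illustrative choices, not details of the IAP model): each axis is split into near-equal contiguous chunks, and a subdomain is one chunk from each axis.

```python
def partition_1d(n, p):
    """Split range(n) into p contiguous, near-equal (start, stop) chunks."""
    base, extra = divmod(n, p)
    bounds, start = [], 0
    for i in range(p):
        stop = start + base + (1 if i < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

def decompose_2d(nx, ny, px, py):
    """2-D decomposition: the Cartesian product of two 1-D partitions."""
    return [(bx, by) for bx in partition_1d(nx, px) for by in partition_1d(ny, py)]

doms = decompose_2d(144, 73, 4, 2)   # e.g. a 2.5-degree lon-lat grid on 8 ranks
print(len(doms), doms[0])
```

Balancing the leftover rows/columns across ranks, as `divmod` does here, keeps the per-processor load within one grid line of uniform.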
International Nuclear Information System (INIS)
Fischer, J.W.; Azmy, Y.Y.
2003-01-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise, which is successfully conducted on a Beowulf cluster. The parallel performance model is composed of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component, which typically increases with P in a manner highly dependent on the global reduce algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).
Chen, Yuehua; Jin, Guoyong; Liu, Zhigang
2017-05-01
This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually performs well for conditional extremum problems, the present method avoids introducing extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with finite element results illustrate the accuracy and efficiency of the present predictions, and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms based on spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight-processor CRAY Y-MP. Parallel performance of the algorithms is measured using a series of fixed-source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
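The Block-Jacobi idea above — each subdomain solves its own block against the neighbors' previous iterate, so all blocks can be processed simultaneously — can be sketched on a 1-D model problem (our toy setup, not the CRAY implementation):

```python
import numpy as np

n = 64
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson stencil
b = np.ones(n)
blocks = [slice(0, n//2), slice(n//2, n)]            # two spatial subdomains

x = np.zeros(n)
for _ in range(1500):
    x_new = np.empty_like(x)
    for s in blocks:
        # RHS seen by this subdomain: b minus the coupling to the other
        # subdomain's *previous* iterate (which is why the blocks parallelize)
        r = b[s] - A[s, :] @ x + A[s, s] @ x[s]
        x_new[s] = np.linalg.solve(A[s, s], r)
    x = x_new
print(np.max(np.abs(A @ x - b)))                     # residual after iterating
```

A red-black ordering would instead update one color with the other's *current* iterate, trading some parallelism for a faster contraction.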
Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner
International Nuclear Information System (INIS)
Subber, Waad; Sarkar, Abhijit
2012-01-01
For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan; Efendiev, Yalchin
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
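A toy version of the contrast-detecting eigenproblem (our construction, far simpler than the paper's coarse-space machinery): a path graph whose edge weights model conductivity, with two high-conductivity segments joined by one weak link. The generalized eigenproblem L v = λ D v then exhibits exactly two asymptotically small eigenvalues, one per connected high-conductivity region, followed by a large spectral gap.

```python
import numpy as np
from scipy.linalg import eigh

n, hi, lo = 10, 1.0e6, 1.0
w = np.full(n - 1, hi)                 # edge conductivities along a path graph
w[4] = lo                              # one weak link joining the two channels
L = np.zeros((n, n))
for i, wi in enumerate(w):             # assemble the weighted graph Laplacian
    L[i, i] += wi;  L[i + 1, i + 1] += wi
    L[i, i + 1] -= wi;  L[i + 1, i] -= wi
D = np.diag(np.diag(L))                # diagonal (degree) scaling

vals = eigh(L, D, eigvals_only=True)   # generalized eigenproblem L v = lambda D v
print(vals[:3])                        # two tiny eigenvalues, then an O(1) gap
```

Counting the eigenvalues below the gap recovers the number of high-conductivity components, which is exactly the dimension the coarse space needs.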
International Nuclear Information System (INIS)
Son, Youn-Suk; Kim, Ki-Joon; Kim, Ji-Yong; Kim, Jo-Chun
2010-01-01
We applied a hybrid technique, combining a catalyst with existing electron beam (EB) technology, to assess the decomposition characteristics of ethylbenzene and toluene. The removal efficiency of ethylbenzene in the EB-catalyst hybrid turned out to be 30% greater than that of EB-only treatment. We concluded that ethylbenzene was decomposed more easily than toluene by EB irradiation. We compared the independent effects of the EB-catalyst hybrid and catalyst-only methods, and observed that the EB-catalyst hybrid demonstrated approximately 6% improvement for decomposing toluene and 20% improvement for decomposing ethylbenzene. The G-values for ethylbenzene increased with initial concentration and reactor type: for example, the G-values by reactor type at 2800 ppmC were 7.5-10.9 (EB-only) and 12.9-25.7 (EB-catalyst hybrid). We also observed a significant decrease in by-products as well as in the removal efficiencies associated with the EB-catalyst hybrid technique.
Two-phase flow steam generator simulations on parallel computers using domain decomposition method
International Nuclear Information System (INIS)
Belliard, M.
2003-01-01
Within the framework of the Domain Decomposition Method (DDM), we present industrial steady-state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshes. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
Hybrid subgroup decomposition method for solving fine-group eigenvalue transport problems
International Nuclear Information System (INIS)
Yasseri, Saam; Rahnema, Farzad
2014-01-01
Highlights: • An acceleration technique for solving fine-group eigenvalue transport problems. • Coarse-group quasi transport theory to solve coarse-group eigenvalue transport problems. • Consistent and inconsistent formulations for coarse-group quasi transport theory. • Computational efficiency amplified by a factor of 2 using hybrid SGD for a 1D BWR problem. - Abstract: In this paper, a new hybrid method for solving fine-group eigenvalue transport problems is developed. This method extends the subgroup decomposition method to efficiently couple a new coarse-group quasi transport theory with a set of fixed-source transport decomposition sweeps to obtain the fine-group transport solution. The advantages of the quasi transport theory are its high accuracy, straightforward implementation and numerical stability. The hybrid method is analyzed for a 1D benchmark problem characteristic of boiling water reactors (BWR). It is shown that the method reproduces the fine-group transport solution with high accuracy while increasing the computational efficiency up to 12 times compared to direct fine-group transport calculations.
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
Domain decomposition method for dynamic faulting under slip-dependent friction
International Nuclear Information System (INIS)
Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie
2004-01-01
The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we obtain an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. The finite element form of the variational inequality is solved by a Schwarz method, assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments).
Energy Technology Data Exchange (ETDEWEB)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-12-01
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Domain decomposition for poroelasticity and elasticity with DG jumps and mortars
Girault, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
A non overlapping parallel domain decomposition method applied to the simplified transport equations
International Nuclear Information System (INIS)
Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.
2009-01-01
A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires little development effort, as the inner multigroup solver can be re-used without modification, and allows us to adapt the numerical resolution (mesh, finite element order) locally. Numerical results are obtained by a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
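The Schur complement setup that the GMRES iteration acts on can be sketched for a 1-D advection-diffusion discretization with a single interface unknown (our toy construction: no BDDC preconditioner is applied, and with one interface unknown GMRES trivially converges in one step).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, nu, a = 101, 1.0, 5.0               # grid size, diffusion, advection speed
h = 1.0 / (n + 1)
lo, di, up = -nu/h**2 - a/(2*h), 2*nu/h**2, -nu/h**2 + a/(2*h)
A = di*np.eye(n) + lo*np.eye(n, k=-1) + up*np.eye(n, k=1)   # central differences
f = np.ones(n)

m = n // 2                             # the single interface node
I1, I2, G = np.arange(0, m), np.arange(m + 1, n), np.array([m])
blk = lambda r, c: A[np.ix_(r, c)]

def schur_matvec(xg):                  # S = A_GG - sum_i A_Gi A_ii^{-1} A_iG
    y = blk(G, G) @ xg
    for I in (I1, I2):
        y = y - blk(G, I) @ np.linalg.solve(blk(I, I), blk(I, G) @ xg)
    return y

S = LinearOperator((1, 1), matvec=schur_matvec, dtype=float)
g = f[G] - sum(blk(G, I) @ np.linalg.solve(blk(I, I), f[I]) for I in (I1, I2))
xg, info = gmres(S, g)

x_ref = np.linalg.solve(A, f)          # monolithic solve, for comparison
print(info, abs(xg[0] - x_ref[m]))
```

In realistic problems the interface set G is large and each subdomain interior solve runs on its own processor; the preconditioner the paper constructs then controls the GMRES iteration count.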
Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition
Directory of Open Access Journals (Sweden)
Chunfu Wu
2015-01-01
For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's extrinsic position parameters are unknown. Firstly, the particular properties of the homography caused by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, together with a single feature point of the desired view, are used to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient, but also highly precise. Meanwhile, the designed control law enables mobile robot position and orientation regulation despite the lack of depth information and the camera's extrinsic position parameters.
A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Bo-Ao Xu
2016-01-01
This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and to speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of computation. With this approach, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Energy Technology Data Exchange (ETDEWEB)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
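The Jacobian-free Newton-Krylov machinery at the heart of the implicit coupling is available off the shelf. A hedged sketch on a 1-D nonlinear diffusion residual (the discretization and the coefficient k(u) = 1 + u² are our own toy choices, with no subdomains or noise):

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    """F(u) for -(k(u) u')' = 1 on (0,1), u(0) = u(1) = 0, k(u) = 1 + u**2."""
    up = np.concatenate(([0.0], u, [0.0]))   # pad with Dirichlet boundary values
    k = 1.0 + up**2                          # nonlinear conductivity
    ke = 0.5 * (k[1:] + k[:-1])              # conductivity averaged to cell edges
    flux = ke * np.diff(up) / h              # k(u) u' at the edges
    return -np.diff(flux) / h - 1.0          # -(k u')' - f with f = 1

# JfNK: Newton outer iteration, GMRES inner solves, Jacobian action by
# finite-differencing the residual -- no Jacobian matrix is ever formed.
u = newton_krylov(residual, np.zeros(n), method='gmres', f_tol=1e-9)
print(np.max(np.abs(residual(u))))           # residual driven to the tolerance
```

In the DD setting of the abstract, the residual would stack both subdomains plus the interface continuity conditions, and a single outer Newton loop (rather than one Newton step) gives the tight, implicit coupling.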
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
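A serial toy of the particle-communication idea (our illustration, not RMC's algorithm): walkers random-walk on a slab split between two "ranks", and a walker crossing the interface is moved to the other rank's buffer instead of being tracked further, mimicking the handoff that real MC domain decomposition performs with messages.

```python
import random

random.seed(1)
N, STEP = 1000, 0.05
owner = lambda x: 0 if x < 0.5 else 1           # which rank owns position x
buffers = {0: [random.random() * 0.5 for _ in range(N)], 1: []}
absorbed = 0

while buffers[0] or buffers[1]:
    for rank in (0, 1):
        outbox, keep = [], []
        for x in buffers[rank]:
            x += random.choice((-STEP, STEP))   # one random-walk step
            if x < 0.0 or x >= 1.0:
                absorbed += 1                   # history ends: left the slab
            elif owner(x) != rank:
                outbox.append(x)                # crossed the domain interface
            else:
                keep.append(x)
        buffers[rank] = keep
        buffers[1 - rank].extend(outbox)        # the "communication" step
print(absorbed)                                 # every history is accounted for
```

Conservation is the key invariant: particles are only ever absorbed or handed off, never dropped, which is what makes the decomposed transport equivalent to the undivided one.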
Schultz, A.
2010-12-01
describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov space sparse solvers as currently applied to the whole domain.
Energy Technology Data Exchange (ETDEWEB)
Saas, L.
2004-05-01
This thesis deals with sedimentary basin modeling, whose goal is the prediction through geological time of the localization and appraisal of the hydrocarbon quantities present in the ground. Due to the natural and evolutionary decomposition of a sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate flows of water and of hydrocarbons in the ground. Conservation laws are used to model the flows and form coupled partial differential equations which must be discretized by a finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and prove that the associated global discretized problem is well posed; we then give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (constant scheme and linear scheme). We then present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example Robin conditions). Finally we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification setting) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all modal parameters under strong ground motion. At this stage, such an analysis tool may be conveniently employed in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
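As a minimal illustration of the filtering stage, the sketch below applies a Chebyshev Type II bandpass filter (via SciPy, not the authors' code) to isolate one modal component of a synthetic two-mode response. The sampling rate, filter order and band edges are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import signal

fs = 100.0  # sampling rate in Hz (assumed for this sketch)
# Synthetic "structural response": two modes at 1.5 Hz and 6 Hz plus noise
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 6.0 * t)
x += 0.2 * np.random.default_rng(0).standard_normal(t.size)

# Chebyshev Type II bandpass around the 6 Hz mode:
# order 8, 40 dB stopband attenuation, 5-7 Hz passband (illustrative values)
sos = signal.cheby2(8, 40, [5.0, 7.0], btype="bandpass", fs=fs, output="sos")
y = signal.sosfiltfilt(sos, x)  # zero-phase filtering avoids phase distortion

# The filtered signal should be dominated by the 6 Hz component
f, pxx = signal.welch(y, fs=fs, nperseg=1024)
print(f[np.argmax(pxx)])  # peak near 6 Hz
```

A real rFDD workflow would apply a bank of such filters around each identified peak before extracting mode shapes and damping.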
Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition
International Nuclear Information System (INIS)
Clerc, S.
1998-01-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with improving computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. In addition, finite volume schemes based on Godunov's approach are ill-suited to computing low Mach number flows, and a modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
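The flavour of an implicit upwind discretization can be sketched on the linear advection equation, a toy stand-in for the fluid dynamics systems treated in the thesis; the grid size and CFL number below are arbitrary illustrative choices.

```python
import numpy as np

# Implicit first-order upwind scheme for u_t + a u_x = 0 with a > 0 on a
# periodic domain (a linear sketch; the thesis treats full fluid systems).
nx, a, cfl = 200, 1.0, 2.0          # CFL > 1 is allowed: the scheme is implicit
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100 * (x - 0.3) ** 2)   # initial Gaussian pulse
m0 = u.sum() * dx                   # initial mass

# Implicit update: u_i + nu*(u_i - u_{i-1}) = u_i^old, with nu = a*dt/dx.
# np.roll(I, 1, axis=0) places the -nu coupling on the periodic upwind
# neighbour (column i-1 of row i).
nu = a * dt / dx
A = (1 + nu) * np.eye(nx) - nu * np.roll(np.eye(nx), 1, axis=0)

for _ in range(50):
    u = np.linalg.solve(A, u)

print(abs(u.sum() * dx - m0))       # conservative: mass is preserved
```

Summing the update equation over all cells shows the scheme conserves mass exactly on the periodic domain, and the M-matrix structure of A gives the discrete maximum principle.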
A new solar power output prediction based on hybrid forecast engine and decomposition model.
Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando
2018-06-12
Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. The problem directly affects power network operation, and because of the high volatility of this signal an accurate prediction model is demanded. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) tuned by an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Yuqi Dong
2016-12-01
Full Text Available Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people's livelihood through providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize the model parameters to forecast the short-term electrical load, few models are built based on the characteristics of the time series, which has a great impact on forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition that considers the periodicity, trend and randomness of the original electrical load time series data. After preprocessing and analyzing the original time series, a generalized regression neural network optimized by a genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model not only achieves a good fit, but also approximates the actual values well when dealing with non-linear time series data with periodicity, trend and randomness.
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
International Nuclear Information System (INIS)
Li, Chengwei; Zhan, Liwei
2015-01-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
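A hedged sketch of the NIMF idea: here two hand-made components stand in for the IMFs that EMD would extract by sifting, and plain correlation stands in for the paper's modified Hausdorff distance; only the NIMF definition (noisy signal minus each IMF) is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)
x = clean + 0.5 * rng.standard_normal(t.size)   # noisy measurement

# Toy stand-ins for EMD output: imfs[0] is the high-frequency noise mode,
# imfs[1] the 5 Hz oscillation (real EMD would extract these by sifting).
imfs = [x - clean, clean]

# NIMF_k = noisy signal minus IMF_k (the definition used in the paper)
nimfs = [x - imf for imf in imfs]

# Rank modes by similarity to the first NIMF; the paper uses a modified
# Hausdorff distance, and correlation is a simpler illustrative stand-in.
def similarity(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

scores = [similarity(nimfs[0], n) for n in nimfs]
print(scores)   # first NIMF scores highest; the noise-like NIMF scores low
```

The selected (high-scoring) NIMFs would then be combined to form the filtered signal.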
International Nuclear Information System (INIS)
Nikula, Suvi; Manninen, Sirkku; Vapaavuori, Elina; Pulkkinen, Pertti
2011-01-01
Road traffic contributes considerably to ground-level air pollution and is therefore likely to affect roadside ecosystems. Differences in growth and leaf traits among 13 hybrid aspen (Populus tremula x P. tremuloides) clones were studied in relation to distance from a motorway. The trees sampled were growing 15 and 30 m from a motorway and at a background rural site in southern Finland. Litter decomposition was also measured at both the roadside and rural sites. Height and diameter growth rate and specific leaf area were lowest, and epicuticular wax amount highest in trees growing 15 m from the motorway. Although no significant distance x clone interactions were detected, clone-based analyses indicated differences in genotypic responses to motorway proximity. Leaf N concentration did not differ with distance from the motorway for any of the clones. Leaf litter decomposition was only temporarily retarded in the roadside environment, suggesting minor effects on nutrient cycling. - Highlights: → Roadside hybrid aspen displayed xeromorphic leaf traits and reduction in growth rate. → These responses were limited to trees close to the motorway and only to some clones. → Leaf litter decomposition was only temporarily retarded in the roadside environment. - Hybrid aspen had more xeromorphic leaves, displayed reduced growth, and showed retarded litter decomposition at an early stage in the roadside environment.
International Nuclear Information System (INIS)
Zerr, R.J.; Azmy, Y.Y.
2010-01-01
A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
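The sub-domain iteration can be sketched with a plain block Jacobi solve of a 1D diffusion-like system, where each block plays the role of an independently solved sub-domain and neighbour values from the previous iterate act as its boundary data. Problem sizes and sweep counts are illustrative only, and the simple Laplacian stands in for the integral transport operators of the paper.

```python
import numpy as np

# Block Jacobi iteration on a tridiagonal (1D Laplacian) system: each block
# is solved exactly with off-block couplings lagged to the previous iterate.
n, nb = 64, 4                       # unknowns and number of blocks
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)
blocks = np.array_split(np.arange(n), nb)

for sweep in range(3000):
    x_new = x.copy()
    for idx in blocks:
        i0, i1 = idx[0], idx[-1] + 1
        # rhs = b - (off-block couplings applied to the old iterate)
        rhs = b[i0:i1] - A[i0:i1] @ x + A[i0:i1, i0:i1] @ x[i0:i1]
        x_new[i0:i1] = np.linalg.solve(A[i0:i1, i0:i1], rhs)
    x = x_new

print(np.linalg.norm(A @ x - b))    # small residual after enough sweeps
```

In the parallel setting each block solve runs on its own processor, and only the interface (off-block) values are exchanged between sweeps, mirroring the sub-domain boundary updates described in the abstract.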
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin; Galvis, Juan; Lazarov, Raytcho; Willems, Joerg
2012-01-01
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, and the Stokes and Brinkman equations.
Energy Technology Data Exchange (ETDEWEB)
El-Sayed, A.M.A. [Faculty of Science University of Alexandria (Egypt)]. E-mail: amasyed@hotmail.com; Gaber, M. [Faculty of Education Al-Arish, Suez Canal University (Egypt)]. E-mail: mghf408@hotmail.com
2006-11-20
The Adomian decomposition method has been successfully used to find explicit and numerical solutions of time fractional partial differential equations. Different examples of special interest, with fractional time and space derivatives of order α, 0 < α ≤ 1, are considered and solved by means of the Adomian decomposition method. The behaviour of the Adomian solutions and the effects of different values of α are shown graphically for some examples.
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to disassemble the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China respectively, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
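For the linear elliptic case, the additive Schwarz method mentioned above reduces to overlapping local solves of the residual equation. Below is a minimal 1D Poisson sketch; the index sets, damping factor and iteration count are illustrative assumptions.

```python
import numpy as np

# Damped additive Schwarz with two overlapping sub-domains for the 1D
# Poisson problem -u'' = 1, u(0) = u(1) = 0 (a linear sketch).
n = 99
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.ones(n)

# Overlapping index sets: nodes 0..59 and 40..98 (overlap of 20 nodes)
doms = [np.arange(0, 60), np.arange(40, n)]
x = np.zeros(n)

for it in range(500):
    r = b - A @ x
    dx = np.zeros(n)
    for idx in doms:
        # restrict the residual, solve the local Dirichlet problem, prolong
        Ai = A[np.ix_(idx, idx)]
        dx[idx] += np.linalg.solve(Ai, r[idx])
    x += 0.5 * dx   # damping; undamped additive Schwarz can stagnate on the overlap

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # small relative residual
```

The multiplicative variant would instead apply the local solves sequentially, refreshing the residual after each one, which converges faster but serializes the sub-domain work.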
A hybrid absorbing boundary condition for frequency-domain finite-difference modelling
International Nuclear Information System (INIS)
Ren, Zhiming; Liu, Yang
2013-01-01
Liu and Sen (2010 Geophysics 75 A1–6; 2012 Geophys. Prospect. 60 1114–32) proposed an efficient hybrid scheme to significantly absorb boundary reflections for acoustic and elastic wave modelling in the time domain. In this paper, we extend the hybrid absorbing boundary condition (ABC) into the frequency domain and develop specific strategies for regular-grid and staggered-grid modelling, respectively. Numerical modelling tests of acoustic, visco-acoustic, elastic and vertically transversely isotropic (VTI) equations show significant absorption for frequency-domain modelling. The modelling results of the Marmousi model and the salt model also demonstrate the effectiveness of the hybrid ABC. For elastic modelling, the hybrid Higdon ABC and the hybrid Clayton and Engquist (CE) ABC are implemented, respectively. Numerical simulations show that the hybrid Higdon ABC achieves better absorption than the hybrid CE ABC, especially for S-waves. We further compare the hybrid ABC with the classical perfectly matched layer (PML). Results show that the two ABCs cost the same computation time and memory space for the same absorption width. However, the hybrid ABC is more effective than the PML for the same small absorption width, and the absorption effects of the two ABCs gradually become similar when the absorption width is increased. (paper)
International Nuclear Information System (INIS)
Chiba, Gou; Tsuji, Masashi; Shimazu, Yoichiro
2001-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) that was developed to solve a two-dimensional neutron diffusion equation has been modified to deal with three-dimensional problems. In the HDD-BEM, the domain is decomposed into homogeneous regions. The boundary conditions on the common inner boundaries between decomposed regions and the neutron multiplication factor are initially assumed. With these assumptions, the neutron diffusion equations defined in the decomposed homogeneous regions can be solved respectively by applying the boundary element method. This part corresponds to the 'lower level' calculations. At the 'higher level' calculations, the assumed values, the inner boundary conditions and the neutron multiplication factor, are modified so as to satisfy the continuity conditions for the neutron flux and the neutron currents on the inner boundaries. These procedures of the lower and higher levels are executed alternately and iteratively until the continuity conditions are satisfied within a convergence tolerance. With the hierarchical domain decomposition, it is possible to deal with problems comprising a large number of regions, something that has been difficult with the conventional BEM. In this paper, it is shown that a three-dimensional problem even with 722 regions can be solved with fine accuracy and an acceptable computation time. (author)
Construct solitary solutions of discrete hybrid equation by Adomian Decomposition Method
International Nuclear Information System (INIS)
Wang Zhen; Zhang Hongqing
2009-01-01
In this paper, we apply the Adomian Decomposition Method to solve differential-difference equations. A typical example is used to illustrate the validity and the great potential of the Adomian Decomposition Method in solving differential-difference equations. A kink-shaped solitary solution and a bell-shaped solitary solution are presented. Comparisons are made between the results of the proposed method and exact solutions. The results show that the Adomian Decomposition Method is an attractive method for solving differential-difference equations.
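The mechanics of the Adomian Decomposition Method can be shown on a simple continuous IVP, u' = u², u(0) = 1, whose exact solution 1/(1 − t) emerges term by term as the geometric series Σ tⁿ. This toy problem is ours for illustration, not the differential-difference example of the paper.

```python
import sympy as sp

# Adomian decomposition for u' = u^2, u(0) = 1 (exact solution 1/(1-t)).
t, lam = sp.symbols("t lambda")
N = 6

# Adomian polynomials A_n for f(u) = u^2 are generated as the coefficient of
# lam**n in f(sum_k u_k lam**k), i.e. (1/n!) d^n/dlam^n f(...) at lam = 0.
u = [sp.Integer(1)]                         # u0 = initial condition
for n in range(N):
    f = (sum(u[k] * lam**k for k in range(len(u)))) ** 2
    A_n = sp.diff(f, lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(A_n, (t, 0, t)))  # u_{n+1} = integral of A_n from 0 to t

print(u)   # [1, t, t**2, t**3, ...]: partial sums of 1/(1-t)
```

Each term u_{n+1} comes only from already-computed terms, which is what makes the method attractive for nonlinear problems: no linearization and no perturbation parameter is needed.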
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
Full Text Available We present a new optimal watermarking scheme based on discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detailed subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the multiple scaling factors (MSFs) is a difficult problem; however, a multiobjective ant colony optimization is used to determine these values. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of high probability of false positive detection of the watermarks.
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows the construction of additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
Energy Technology Data Exchange (ETDEWEB)
Han, Sang-Bo [Industry Applications Research Laboratory, Korea Electrotechnology Research Institute, Changwon, Kyeongnam (Korea, Republic of)]; Oda, Tetsuji [Department of Electrical Engineering, The University of Tokyo, Tokyo 113-8656 (Japan)]
2007-05-15
The hybrid barrier discharge plasma process combined with ozone decomposition catalysts was studied experimentally for decomposing dilute trichloroethylene (TCE). Based on a fundamental experiment on catalytic activity for ozone decomposition, MnO₂ was selected for the main experiments because of its higher catalytic ability than other metal oxides. The lower the initial TCE concentration in the working gas, the larger the ozone concentration generated by the barrier discharge plasma treatment. Near-complete decomposition of dichloro-acetylchloride (DCAC) into Cl₂ and COₓ was observed for initial TCE concentrations of less than 250 ppm. Cleavage of the C=C π bond in TCE gave the carbon single bond of DCAC through an oxidation reaction during the barrier discharge plasma treatment. The DCAC was easily broken down in the subsequent catalytic reaction. While the oxygen concentration in the working gas was varied, oxygen radicals in the plasma space reacted strongly with precursors of DCAC compared with those of trichloro-acetaldehyde. A chlorine radical chain reaction is considered a plausible decomposition mechanism in the barrier discharge plasma treatment. The potential energy of oxygen radicals at the surface of the catalyst is considered an important factor in causing reactive chemical reactions.
Hybrid time/frequency domain modeling of nonlinear components
DEFF Research Database (Denmark)
Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth
2007-01-01
This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...
Li, Ping; Shi, Yifei; Jiang, Lijun; Bagci, Hakan
2014-01-01
A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.
Li, Ping
2014-05-01
A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.
Abrokwah, K.; O'Reilly, A. M.
2017-12-01
Groundwater is an important resource that is extracted every day because of its invaluable use for domestic, industrial and agricultural purposes. The need to sustain groundwater resources is clearly indicated by declining water levels and has led to efforts to model and forecast groundwater levels accurately. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximation and detail wavelet coefficients, whilst the MWA acts as a low-pass signal-filtering technique for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models; higher decomposition levels of the input signal by the DWT also gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
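The MWA preprocessing step amounts to a trailing running mean of the climatic input series. A minimal sketch follows, where the gamma-distributed rainfall data and the 30-day window are assumptions made purely for illustration.

```python
import numpy as np

# Moving window average (MWA) as a low-pass pre-filter for ANN inputs.
rng = np.random.default_rng(42)
rain = rng.gamma(shape=0.3, scale=10.0, size=365)   # synthetic daily rainfall, mm

def mwa(x, w):
    """Trailing moving average with window w: only past values enter each
    output, as a forecasting model must not peek into the future."""
    kernel = np.ones(w) / w
    # 'full' convolution truncated to the input length gives a causal filter
    # (the first w-1 samples are a partial-window ramp-up)
    return np.convolve(x, kernel, mode="full")[: x.size]

smooth = mwa(rain, 30)
print(smooth.std() < rain.std())   # high-frequency variance is removed
```

Different window sizes w correspond to different climatic time scales, which is exactly the tuning knob the study compares against the DWT decomposition level.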
An efficient domain decomposition strategy for wave loads on surface piercing circular cylinders
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
A fully nonlinear domain decomposed solver is proposed for efficient computations of wave loads on surface piercing structures in the time domain. A fully nonlinear potential flow solver was combined with a fully nonlinear Navier–Stokes/VOF solver via generalized coupling zones of arbitrary shape....... Sensitivity tests of the extent of the inner Navier–Stokes/VOF domain were carried out. Numerical computations of wave loads on surface piercing circular cylinders at intermediate water depths are presented. Four different test cases of increasing complexity were considered; 1) weakly nonlinear regular waves...
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting
Directory of Open Access Journals (Sweden)
Chaolong Jia
2015-01-01
Full Text Available Wavelet analysis adapts automatically to the requirements of time-frequency signal analysis: it can focus on any detail of a signal and decompose a function into a representation as a series of simple basis functions, which is of theoretical and practical significance. This paper therefore subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction, and seeks the best-fitting forecast models for the detail signals and the approximate signal obtained through wavelet decomposition of the track irregularity time series, respectively. On this basis, piecewise gray-ARMA recursive based on wavelet decomposition and reconstruction (PG-ARMARWDR and piecewise ANN-ARMA recursive based on wavelet decomposition and reconstruction (PANN-ARMARWDR models are proposed. Comparison and analysis of the two models show that both can achieve higher accuracy.
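One level of the decomposition-reconstruction round trip can be sketched with the Haar wavelet; the actual study may use a different wavelet and several decomposition levels, and the synthetic random-walk series below merely stands in for a track irregularity record.

```python
import numpy as np

# One level of Haar wavelet decomposition and exact reconstruction.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))      # synthetic irregularity-like series

s = np.sqrt(2.0)
approx = (x[0::2] + x[1::2]) / s             # low-frequency (trend) branch
detail = (x[0::2] - x[1::2]) / s             # high-frequency (detail) branch

# Reconstruction interleaves the inverse transform of the two branches
x_rec = np.empty_like(x)
x_rec[0::2] = (approx + detail) / s
x_rec[1::2] = (approx - detail) / s

print(np.max(np.abs(x - x_rec)))             # ~0: perfect reconstruction
```

In the forecasting scheme, each branch would be modelled separately (ARMA, ANN, or gray models) and the forecasts recombined through exactly this reconstruction step.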
International Nuclear Information System (INIS)
Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei
2015-01-01
The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions, which are useful for the central government in adjusting its provincial energy-related policies: (a) there are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of small-scale private industrial enterprises; (b) except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) with regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) by 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; and (e) three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted.
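Index decomposition models of this kind split an observed energy change into factor contributions. As a hedged one-province, two-factor analogue of the paper's three-dimensional model, here is a logarithmic mean Divisia index (LMDI) sketch with made-up numbers; the paper's actual factors and data differ.

```python
import numpy as np

# Two-factor LMDI decomposition of an energy change E = Q * I into an
# activity effect and an intensity (efficiency) effect.
def logmean(a, b):
    return (a - b) / np.log(a / b) if a != b else a

Q0, Q1 = 50.0, 65.0        # economic activity in year 0 and year 1 (made up)
I0, I1 = 1.2, 0.9          # energy intensity, energy per unit of activity
E0, E1 = Q0 * I0, Q1 * I1  # total energy consumption

L = logmean(E1, E0)
d_activity = L * np.log(Q1 / Q0)   # effect of economic growth
d_intensity = L * np.log(I1 / I0)  # effect of efficiency improvement

# LMDI is exact: the two effects add up to the total change with no residual
print(d_activity + d_intensity, E1 - E0)
```

Exactness follows from ln(E1/E0) = ln(Q1/Q0) + ln(I1/I0) and L·ln(E1/E0) = E1 − E0, which is why LMDI-style decompositions leave no unexplained residual term.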
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensure miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et. al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Tran. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et. al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
Directory of Open Access Journals (Sweden)
Jesús García
2012-01-01
Full Text Available The application of a 3D domain decomposition finite-element and spherical mode expansion method to the design of a planar ESPAR (electronically steerable passive array radiator) made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure was first validated by analyzing an array of monopoles on a ground plane, and then applied to synthesize a prescribed radiation pattern by optimizing the reactive loads connected to the feeding ports of the array of circular patches by means of a genetic algorithm.
Modal Identification of Output-Only Systems using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
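The core of the frequency domain decomposition technique is a singular value decomposition of the output cross-spectral density matrix at each frequency line: peaks of the first singular value indicate modes, and the corresponding singular vector approximates the mode shape. A minimal numpy sketch on a synthetic two-channel response (the segment count, window, and peak-picking rule are illustrative choices, not taken from the paper):

```python
import numpy as np

def fdd_first_singular_values(Y, fs, nseg=8):
    """Frequency Domain Decomposition: SVD of the averaged cross-spectral
    density matrix of a multi-channel ambient response at each frequency."""
    nch, nsamp = Y.shape
    L = nsamp // nseg
    win = np.hanning(L)
    freqs = np.fft.rfftfreq(L, d=1.0 / fs)
    G = np.zeros((len(freqs), nch, nch), dtype=complex)
    for k in range(nseg):                          # Welch-style averaging
        F = np.fft.rfft(Y[:, k * L:(k + 1) * L] * win, axis=1)
        G += np.einsum('ci,di->icd', F, F.conj())  # outer products per line
    s = np.linalg.svd(G / nseg, compute_uv=False)  # stacked SVD
    return freqs, s[:, 0]                          # first singular value

# two-channel ambient response with one dominant mode at 16 Hz
rng = np.random.default_rng(1)
fs, n = 256.0, 4096
t = np.arange(n) / fs
mode = np.sin(2 * np.pi * 16.0 * t)
Y = np.vstack([mode, 0.6 * mode]) + 0.1 * rng.standard_normal((2, n))
freqs, s1 = fdd_first_singular_values(Y, fs)
f_peak = freqs[np.argmax(s1[1:]) + 1]              # skip the DC line
```

The peak of the first singular value recovers the 16 Hz mode; a real implementation would also inspect the singular vectors for mode shapes.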
Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos
International Nuclear Information System (INIS)
Ragusa, J.C.
2003-01-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMP cluster's potential with a mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster.
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
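The "decomposition and ensemble" principle itself is simple: split the series into easier components, forecast each component, and recombine the component forecasts. A toy numpy sketch, with a moving-average trend/residual split and least-squares AR(1) forecasts standing in for CEEMD and GWO-optimized SVR (all of these substitutions are ours, not the paper's):

```python
import numpy as np

def decompose(x, w=5):
    """Split a series into a smooth trend (moving average) and a residual."""
    pad = np.r_[np.full(w - 1, x[0]), x]
    trend = np.convolve(pad, np.ones(w) / w, mode='valid')
    return trend, x - trend

def ar1_forecast(x):
    """One-step forecast from a least-squares AR(1) fit: x[t] ~ a*x[t-1] + b."""
    A = np.c_[x[:-1], np.ones(len(x) - 1)]
    a, b = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    return a * x[-1] + b

# toy "concentration" series: slow trend plus oscillation
t = np.arange(200)
x = 0.05 * t + np.sin(0.3 * t)

trend, resid = decompose(x)
pred = ar1_forecast(trend) + ar1_forecast(resid)  # ensemble of components
direct = ar1_forecast(x)                          # non-decomposition baseline
```

The decomposition is exact by construction (trend + residual reproduces the series), which is the property that lets the component forecasts be summed into a final prediction.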
Entity recognition in the biomedical domain using a hybrid approach.
Basaldella, Marco; Furrer, Lenz; Tasso, Carlo; Rinaldi, Fabio
2017-11-09
This article describes a high-recall, high-precision approach for the extraction of biomedical entities from scientific articles. The approach uses a two-stage pipeline, combining a dictionary-based entity recognizer with a machine-learning classifier. First, the OGER entity recognizer, which has a bias towards high recall, annotates the terms that appear in selected domain ontologies. Subsequently, the Distiller framework uses this information as a feature for a machine learning algorithm to select the relevant entities only. For this step, we compare two different supervised machine-learning algorithms: Conditional Random Fields and Neural Networks. In an in-domain evaluation using the CRAFT corpus, we test the performance of the combined systems when recognizing chemicals, cell types, cellular components, biological processes, molecular functions, organisms, proteins, and biological sequences. Our best system combines dictionary-based candidate generation with Neural-Network-based filtering. It achieves an overall precision of 86% at a recall of 60% on the named entity recognition task, and a precision of 51% at a recall of 49% on the concept recognition task. These results are to our knowledge the best reported so far in this particular task.
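The two-stage pipeline can be pictured as a high-recall lexicon pass followed by a precision filter. A toy Python sketch; the tiny lexicon and the nested-match rule below merely stand in for OGER's domain ontologies and the Distiller CRF/NN classifier:

```python
import re

# toy ontology: term -> entity type (stand-in for the domain ontologies)
LEXICON = {"glucose": "chemical", "neuron": "cell", "p53": "protein",
           "cell": "cell", "membrane": "cellular_component",
           "cell membrane": "cellular_component"}

def dictionary_annotate(text):
    """Stage 1: high-recall candidate generation by lexicon lookup."""
    hits = []
    for term, etype in LEXICON.items():
        for m in re.finditer(r"\b%s\b" % re.escape(term), text, re.I):
            hits.append((m.start(), m.end(), term, etype))
    return sorted(hits)

def filter_candidates(hits):
    """Stage 2: precision filter; dropping matches nested inside longer
    matches stands in for the machine-learning classifier of the paper."""
    keep = []
    for h in hits:
        if not any(o[0] <= h[0] and h[1] <= o[1] and o != h for o in hits):
            keep.append(h)
    return keep

text = "The p53 protein binds the cell membrane; glucose uptake rises."
cands = dictionary_annotate(text)   # over-generates: 5 candidates
ents = filter_candidates(cands)     # keeps p53, cell membrane, glucose
```

The over-generating first stage plus a selective second stage is what gives the combined system its recall/precision trade-off.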
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Energy Technology Data Exchange (ETDEWEB)
Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
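For a linear model problem, the overlapping additive Schwarz iteration is easy to write down: solve the residual equation restricted to each overlapping subdomain and add the damped corrections. A numpy sketch on the 1D Poisson problem (the nonlinear inexact-Newton machinery of the talk is omitted; subdomain sizes and damping factor are ad hoc):

```python
import numpy as np

n = 49                          # interior points on [0, 1]
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2      # 1D Poisson matrix
f = np.ones(n)
u_exact = np.linalg.solve(A, f)

# two overlapping index blocks (overlap of 10 points)
d1, d2 = np.arange(0, 30), np.arange(20, n)
u = np.zeros(n)
for _ in range(50):             # damped additive Schwarz iteration
    r = f - A @ u
    du = np.zeros(n)
    for d in (d1, d2):          # local solves on each subdomain
        du[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
    u += 0.5 * du               # damping keeps the additive variant stable
```

With two subdomains and a generous overlap the iteration contracts at each step; in practice such sweeps are used as preconditioners rather than as stand-alone solvers.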
International Nuclear Information System (INIS)
Kuo, V.
2016-01-01
Full text: The European Qualifications Framework categorizes learning objectives into three qualifiers, “knowledge”, “skills”, and “competences” (KSCs), to help improve the comparability between different fields and disciplines. However, the management of KSCs remains a great challenge given their semantic fuzziness. Similar texts may describe different concepts and different texts may describe similar concepts among different domains. This makes it difficult to index, search, and match semantically similar KSCs within an information system, and hence to facilitate transfer and mobility of KSCs. We present a working example using a semantic inference method known as Latent Semantic Analysis (LSA), employing a matrix operation called Singular Value Decomposition, which has been shown to infer semantic associations within unstructured textual data comparable to those of human interpretations. In our example, a few natural language text passages representing KSCs in the nuclear sector are used to demonstrate the capabilities of the system. It can be shown that LSA is able to infer latent semantic associations between texts, and to cluster and match separate text passages semantically based on these associations. We propose this methodology for modelling existing natural language KSCs in the nuclear domain so they can be semantically queried, retrieved and filtered upon request. (author)
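The LSA pipeline described here is a term-document matrix followed by a truncated SVD; cosine similarity in the reduced space then exposes latent associations. A minimal numpy sketch on four made-up KSC-like texts (none of the data below comes from the paper):

```python
import numpy as np

docs = ["operate reactor control systems safely",
        "safe operation of reactor control systems",
        "analyze financial budget reports",
        "financial analysis of budget reports"]
vocab = sorted({w for d in docs for w in d.split()})
# term-document count matrix (rows: terms, columns: documents)
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                   # truncate to a 2-D latent space
D = (np.diag(s[:k]) @ Vt[:k]).T         # documents as latent vectors

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the latent space the two "reactor" passages end up nearly parallel and far from the "budget" passages, even though no pair of documents shares all its wording, which is the clustering behaviour the abstract describes.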
A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
International Nuclear Information System (INIS)
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-01-01
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization, owing to the global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of the wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell’s equations and the finite speed of light for limiting the communications of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
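The accuracy argument for pseudo-spectral solvers is easy to demonstrate in one dimension: advancing u_t + c u_x = 0 by multiplying the Fourier coefficients with the exact phase factor exp(-i c k dt) reproduces the analytic solution to near machine precision on a smooth periodic field. A numpy sketch (the paper's domain-decomposed, guard-region variant is not reproduced here):

```python
import numpy as np

n, c, dt = 128, 1.0, 0.05
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers
u0 = np.exp(np.sin(x))                               # smooth periodic field

# one exact-in-time pseudo-spectral step for u_t + c u_x = 0:
u1 = np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * c * k * dt)).real

u_ref = np.exp(np.sin(x - c * dt))                   # exact shifted solution
```

A finite-difference scheme of any fixed order would incur a dispersive phase error per step; here the only error is round-off, which is the precision advantage the abstract refers to.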
International Nuclear Information System (INIS)
Wang, Yamin; Wu, Lei
2016-01-01
This paper presents a comprehensive analysis on practical challenges of empirical mode decomposition (EMD) based algorithms on wind speed and solar irradiation forecasts that have been largely neglected in literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training forecasting models. In turn, forecasting models established by original sub-series may not be suitable for newly decomposed sub-series and have to be trained more frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods could be worse than the non-decomposition based forecasting model, and are not effective in practical cases. Finally, the approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and the non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
Energy Technology Data Exchange (ETDEWEB)
Clement, F.; Vodicka, A.; Weis, P. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Martin, V. [Institut National de Recherches Agronomiques (INRA), 92 - Chetenay Malabry (France); Di Cosmo, R. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Paris-7 Univ., 75 (France)
2003-07-01
We consider the application of a non-overlapping domain decomposition method with non-matching grids, based on Robin interface conditions, to the problem of flow surrounding an underground nuclear waste disposal site. We show with a simple example how one can refine the mesh locally around the storage with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3l environment devoted to easing parallelism. In this way we test the code coupling and at the same time exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)
Directory of Open Access Journals (Sweden)
Jiani Heng
2016-01-01
Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated in forecasting that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach, comprising Empirical Mode Decomposition, CSA (Cuckoo Search Algorithm), and WNN (Wavelet Neural Network), is proposed. This approach constructs a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load in New South Wales of Australia is used as a case study in this paper. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load and can be an effective tool in planning and dispatch for smart grids.
Wavelet decomposition and neuro-fuzzy hybrid system applied to short-term wind power
Energy Technology Data Exchange (ETDEWEB)
Fernandez-Jimenez, L.A.; Mendoza-Villena, M. [La Rioja Univ., Logrono (Spain). Dept. of Electrical Engineering; Ramirez-Rosado, I.J.; Abebe, B. [Zaragoza Univ., Zaragoza (Spain). Dept. of Electrical Engineering
2010-03-09
Wind energy has become increasingly popular as a renewable energy source. However, the integration of wind farms in the electrical power systems presents several problems, including the chaotic fluctuation of wind flow which results in highly varied power generation from a wind farm. An accurate forecast of wind power generation has important consequences in the economic operation of the integrated power system. This paper presented a new statistical short-term wind power forecasting model based on wavelet decomposition and neuro-fuzzy systems optimized with a genetic algorithm. The paper discussed wavelet decomposition; the proposed wind power forecasting model; and computer results. The original time series, the mean electric power generated in a wind farm, was decomposed into wavelet coefficients that were utilized as inputs for the forecasting model. The forecasting results obtained with the final models were compared to those obtained with traditional forecasting models, showing a better performance for all the forecasting horizons. 13 refs., 1 tab., 4 figs.
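Wavelet decomposition of a power series into approximation and detail coefficients, which then feed the forecasting model, can be sketched with a one-level Haar transform (the paper's actual wavelet family and model inputs may differ; the load values below are invented):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # local detail
    return a, d

def haar_inverse(a, d):
    """Exact reconstruction of the original series from (a, d)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

power = np.array([3.0, 3.2, 2.8, 5.1, 5.0, 4.9, 1.2, 1.0])  # toy wind power
a, d = haar_step(power)    # coefficients used as forecasting-model inputs
```

The transform is invertible, so no information is lost when the coefficients replace the raw series as model inputs.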
Ganguly, Anshuman; Krishna Vemuri, Sri Hari; Panahi, Issa
2014-01-01
This paper presents a cost-effective adaptive feedback Active Noise Control (FANC) method for controlling functional Magnetic Resonance Imaging (fMRI) acoustic noise by decomposing it into dominant periodic components and residual random components. The periodicity of fMRI acoustic noise is exploited by using linear prediction (LP) filtering to achieve signal decomposition. A hybrid combination of adaptive filters, Recursive Least Squares (RLS) and Normalized Least Mean Squares (NLMS), is then used to effectively control each component separately. Performance of the proposed FANC system is analyzed, and noise attenuation levels (NAL) up to 32.27 dB obtained by simulation are presented, which confirm the effectiveness of the proposed FANC method.
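Of the two adaptive filters mentioned, NLMS is the simpler to sketch: a linear one-step predictor tracks the dominant periodic component, and the prediction error is the residual a controller would target. A numpy sketch (filter order, step size, and the pure-tone test signal are illustrative, not the paper's fMRI data):

```python
import numpy as np

def nlms_predict(x, order=8, mu=0.5, eps=1e-8):
    """NLMS one-step predictor; returns the prediction error (residual)."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # most recent samples first
        e[n] = x[n] - w @ u                      # prediction error
        w += mu * e[n] * u / (eps + u @ u)       # normalized LMS update
    return e

t = np.arange(4000)
noise = np.sin(2 * np.pi * 0.05 * t)             # periodic scanner-like tone
e = nlms_predict(noise)                          # residual after adaptation
```

Because the periodic component is perfectly predictable, the residual power collapses once the filter converges; that is the mechanism behind the attenuation figures quoted in the abstract.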
Energy Technology Data Exchange (ETDEWEB)
Hoekstra, R.
2003-10-01
Economic processes generate a variety of material flows, which cause resource problems through the depletion of natural resources and environmental issues due to the emission of pollutants. This thesis presents an analytical method to study the relationship between the monetary economy and the 'physical economy'. In particular, this method can assess the impact of structural change in the economy on physical throughput. The starting point for the approach is the development of an elaborate version of the physical input-output table (PIOT), which acts as an economic-environmental accounting framework for the physical economy. In the empirical application, hybrid-unit input-output (I/O) tables, which combine physical and monetary information, are constructed for iron and steel, and plastic products for the Netherlands for the years 1990 and 1997. The impact of structural change on material flows is analyzed using Structural Decomposition Analysis (SDA), which specifies effects such as sectoral shifts, technological change, and alterations in consumer spending and international trade patterns. The study thoroughly reviews the application of SDA to environmental issues, compares the method with other decomposition methods, and develops new mathematical specifications. An SDA is performed using the hybrid-unit input-output tables for the Netherlands. The results are subsequently used in novel forecasting and backcasting scenario analyses for the period 1997-2030. The results show that dematerialization of iron and steel, and plastics, has generally not occurred in the recent past (1990-1997), and will not occur, under a wide variety of scenario assumptions, in the future (1997-2030).
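Structural Decomposition Analysis splits the change in a material flow M = m·L·y (material intensity × Leontief inverse × final demand) into factor contributions. A numpy sketch of the exact average-of-polar-decompositions form on a hypothetical two-sector economy (the numbers are invented, not the Dutch data):

```python
import numpy as np

def sda_three_factor(m0, A0, y0, m1, A1, y1):
    """Average-of-polar SDA of the change in M = m @ L @ y, L = (I - A)^-1.
    Returns the intensity, technology, and final-demand contributions."""
    I = np.eye(len(y0))
    L0, L1 = np.linalg.inv(I - A0), np.linalg.inv(I - A1)
    dm, dL, dy = m1 - m0, L1 - L0, y1 - y0
    eff_m = 0.5 * (dm @ L0 @ y0 + dm @ L1 @ y1)   # material-intensity change
    eff_L = 0.5 * (m1 @ dL @ y0 + m0 @ dL @ y1)   # technology change
    eff_y = 0.5 * (m1 @ L1 @ dy + m0 @ L0 @ dy)   # final-demand change
    return eff_m, eff_L, eff_y

# hypothetical 2-sector economy at two points in time
m0, m1 = np.array([0.8, 0.3]), np.array([0.7, 0.25])   # kg per unit output
A0 = np.array([[0.20, 0.10], [0.30, 0.10]])            # technical coefficients
A1 = np.array([[0.18, 0.10], [0.25, 0.12]])
y0, y1 = np.array([100.0, 80.0]), np.array([110.0, 95.0])
effects = sda_three_factor(m0, A0, y0, m1, A1, y1)
```

Averaging the two polar decompositions makes the split exact: the three effects sum to the total change in material use, with no residual term.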
Zampini, Stefano; Tu, Xuemin
2017-01-01
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
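The forward block Gauss-Seidel sweep over a block tridiagonal system can be sketched directly. Here it is run as a stationary iteration on a made-up diagonally dominant system so that it converges on its own, whereas in the setting of the paper the same sweep would serve as a preconditioner for a Krylov method:

```python
import numpy as np

def block_gs(diag, low, up, b, x0, sweeps):
    """Forward block Gauss-Seidel for the block tridiagonal system
    low[i] x_{i-1} + diag[i] x_i + up[i] x_{i+1} = b[i]."""
    N = len(diag)
    x = [xi.copy() for xi in x0]
    for _ in range(sweeps):
        for i in range(N):                     # sweep over time blocks
            r = b[i].copy()
            if i > 0:
                r -= low[i] @ x[i - 1]         # uses the updated neighbor
            if i < N - 1:
                r -= up[i] @ x[i + 1]
            x[i] = np.linalg.solve(diag[i], r)  # local block solve
    return x

rng = np.random.default_rng(0)
N, m = 6, 3
diag = [4 * np.eye(m) + rng.normal(0, 0.1, (m, m)) for _ in range(N)]
low = [rng.normal(0, 0.3, (m, m)) for _ in range(N)]
up = [rng.normal(0, 0.3, (m, m)) for _ in range(N)]
x_true = [rng.normal(size=m) for _ in range(N)]
b = [diag[i] @ x_true[i]
     + (low[i] @ x_true[i - 1] if i > 0 else 0)
     + (up[i] @ x_true[i + 1] if i < N - 1 else 0) for i in range(N)]
x = block_gs(diag, low, up, b, [np.zeros(m)] * N, sweeps=60)
```

Each block solve corresponds to an optimal control problem on one time subinterval, which is exactly the structure the multiple shooting reformulation exposes.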
International Nuclear Information System (INIS)
Ogino, Masao
2016-01-01
Real-world problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and the finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, the iterative methods for linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method, and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, in the case of problems consisting of very different materials, the convergence becomes bad. Some research has addressed this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method has robust convergence with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
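The effect of diagonal scaling can already be seen at the level of a Jacobi-preconditioned conjugate gradient solver on a two-material 1D problem with a large coefficient jump; the Scaled-BDD method of the paper is of course much more than this. A numpy sketch (mesh size, contrast, and tolerances are arbitrary choices):

```python
import numpy as np

def pcg(A, b, Minv_diag, tol=1e-10, maxit=2000):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b.copy()
    z = Minv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv_diag * r
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, it

# 1D conductivity problem with two "materials" (coefficient contrast 1e4)
n = 100
k = np.r_[np.full(50, 1.0), np.full(51, 1e4)]   # edge coefficients
A = (np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1)
     - np.diag(k[1:-1], -1))
b = np.ones(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))          # Jacobi scaling
```

In this 1D example the diagonal scaling absorbs the material jump into the preconditioner, so the iteration count no longer depends on the contrast; this is the intuition behind combining diagonal scaling with the balancing preconditioner.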
Hybrid Fourier pseudospectral/discontinuous Galerkin time-domain method for wave propagation
Pagán Muñoz, Raúl; Hornikx, Maarten
2017-11-01
The Fourier Pseudospectral time-domain (Fourier PSTD) method was shown to be an efficient way of modelling acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly staircase-like boundary shapes. This paper presents a hybrid approach to solve the LEE, coupling Fourier PSTD with a nodal Discontinuous Galerkin (DG) method. DG exhibits almost no restrictions with respect to geometrical complexity or boundary conditions. The aim of this novel method is to allow the computation of complex geometries and to be a step towards the implementation of frequency dependent boundary conditions by using the benefits of DG at the boundaries, while keeping the efficient Fourier PSTD in the bulk of the domain. The hybridization approach is based on conformal meshes to avoid spatial interpolation of the DG solutions when transferring values from DG to Fourier PSTD, while the data transfer from Fourier PSTD to DG is done utilizing spectral interpolation of the Fourier PSTD solutions. The accuracy of the hybrid approach is presented for one- and two-dimensional acoustic problems and the main sources of error are investigated. It is concluded that the hybrid methodology does not introduce significant errors compared to the Fourier PSTD stand-alone solver. An example of a cylinder scattering problem is presented and accurate results have been obtained when using the proposed approach. Finally, no instabilities were found during long-time calculation using the current hybrid methodology on a two-dimensional domain.
Energy Technology Data Exchange (ETDEWEB)
Girardi, E
2004-12-15
A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together for a whole-core calculation: a variational nodal method, a discrete ordinate nodal method and a method of characteristics. These new developments allow the use of independent spatial and angular expansions, and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a flexibility of modeling which is not allowed in today's available codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA-Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method brings more accuracy along with an important reduction in computing time.
Hybrid diffusion-P3 equation in N-layered turbid media: steady-state domain.
Shi, Zhenzhi; Zhao, Huijuan; Xu, Kexin
2011-10-01
This paper discusses light propagation in N-layered turbid media. The hybrid diffusion-P3 equation is solved for an N-layered finite or infinite turbid medium in the steady-state domain for one point source using the extrapolated boundary condition. The Fourier transform formalism is applied to derive the analytical solutions of the fluence rate in Fourier space. Two inverse Fourier transform methods are developed to calculate the fluence rate in real space. In addition, the solutions of the hybrid diffusion-P3 equation are compared to the solutions of the diffusion equation and the Monte Carlo simulation. For the case of small absorption coefficients, the solutions of the N-layered diffusion equation and hybrid diffusion-P3 equation are almost equivalent and are in agreement with the Monte Carlo simulation. For the case of large absorption coefficients, the model of the hybrid diffusion-P3 equation is more precise than that of the diffusion equation. In conclusion, the model of the hybrid diffusion-P3 equation can replace the diffusion equation for modeling light propagation in the N-layered turbid media for a wide range of absorption coefficients.
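As a baseline for the layered models discussed above, the steady-state diffusion approximation for a point source in an infinite homogeneous medium has a closed form: the fluence rate is exp(-mu_eff r) / (4 pi D r), with D = 1/(3(mu_a + mu_s')) and mu_eff = sqrt(3 mu_a (mu_a + mu_s')). A numpy sketch with typical tissue-like optical properties (the N-layered hybrid diffusion-P3 solution of the paper is far more elaborate):

```python
import numpy as np

def fluence_infinite(r, mua, musp):
    """Steady-state diffusion-approximation fluence rate for a unit point
    source in an infinite homogeneous medium (r in cm, mua/musp in 1/cm)."""
    D = 1.0 / (3.0 * (mua + musp))                 # diffusion coefficient
    mu_eff = np.sqrt(3.0 * mua * (mua + musp))     # effective attenuation
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

r = np.linspace(0.5, 3.0, 6)                       # source-detector distances
phi = fluence_infinite(r, mua=0.1, musp=10.0)      # tissue-like values
```

This closed form is the quantity that the diffusion equation (and, with higher-order corrections, the hybrid diffusion-P3 model) predicts; the P3 correction matters mainly at large absorption, as the abstract notes.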
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion prone to falling into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic field, making inversion harder still. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. To improve the stability and quality of the results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied; however, very low frequencies are usually absent from field data. We therefore propose a joint method combining envelope inversion with a hybrid-domain FWI, in which forward modeling is performed in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed by shot number and assigned to the computation nodes. At the second level, GPU multithreaded programming handles the computation tasks within each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves performance.
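The envelope extraction step named above is conventionally done with the analytic signal; a minimal sketch (the toy trace and its wavelet are made up for illustration):

```python
import numpy as np
from scipy.signal import hilbert

def envelope(trace):
    """Instantaneous-amplitude envelope of a trace via the analytic
    signal; unlike the oscillatory trace itself, the envelope carries
    ultra-low-frequency information, which envelope inversion exploits."""
    return np.abs(hilbert(trace))

# toy trace: a 30 Hz carrier under a Gaussian envelope
t = np.linspace(0.0, 1.0, 1000)
trace = np.sin(2*np.pi*30*t) * np.exp(-((t - 0.5)/0.1)**2)
env = envelope(trace)
```

The envelope `env` recovers the smooth Gaussian modulation even though the trace itself has no energy below the carrier band.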
Energy Technology Data Exchange (ETDEWEB)
Mehboob, Shoaib, E-mail: smehboob@pieas.edu.pk [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Mehmood, Mazhar [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmed, Mushtaq [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Ahmad, Jamil; Tanvir, Muhammad Tauseef [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmad, Izhar [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Hassan, Syed Mujtaba ul [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan)
2017-04-15
The objective of this work is to study the changes in optical and dielectric properties accompanying the transformation of aluminum ammonium carbonate hydroxide (AACH) to α-alumina, using terahertz time-domain spectroscopy (THz-TDS). The nanostructured AACH was synthesized by hydrothermal treatment of the raw chemicals at 140 °C for 12 h and then calcined at different temperatures. The AACH decomposed to an amorphous phase at 400 °C, transformed to δ* + α-alumina at 1000 °C, and finally crystalline α-alumina was obtained at 1200 °C. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were employed to identify the phases formed after calcination. The morphology of the samples was studied using scanning electron microscopy (SEM), which revealed that the AACH sample had a rod-like morphology that was retained in the calcined samples. THz-TDS measurements showed that AACH had the lowest refractive index over the measured frequency range. The refractive index at 0.1 THz increased from 2.41 for AACH to 2.58 for the amorphous phase and 2.87 for crystalline α-alumina. The real part of the complex permittivity increased with calcination temperature. Further, the absorption coefficient was highest for AACH and decreased with calcination temperature; the amorphous phase had a higher absorption coefficient than the crystalline alumina. - Highlights: • Aluminum oxide nanostructures were obtained by thermal decomposition of AACH. • Crystalline phases of aluminum oxide have a higher refractive index than the amorphous phase. • The removal of heavier ionic species led to lower absorption of THz radiation.
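For reference, in transmission THz-TDS the real refractive index is commonly obtained from the unwrapped phase difference between sample and reference pulses as n(ω) = 1 + cΔφ/(ωd). A minimal sketch (the 1 mm pellet thickness is an assumed value, not taken from the paper):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def refractive_index(freq_thz, delta_phi, thickness_m):
    """Real refractive index from THz-TDS transmission:
    n(omega) = 1 + c*delta_phi/(omega*d), with delta_phi the unwrapped
    phase difference between sample and reference spectra and d the
    sample thickness."""
    omega = 2.0 * np.pi * freq_thz * 1e12   # rad/s
    return 1.0 + C * delta_phi / (omega * thickness_m)

# round trip with the abstract's value n = 2.41 at 0.1 THz and an
# assumed 1 mm thickness
d = 1e-3
omega = 2.0 * np.pi * 0.1e12
phi = (2.41 - 1.0) * omega * d / C
```

The same relation, evaluated per frequency bin, yields the dispersion curves the abstract compares across calcination temperatures.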
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Full Text Available Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large-amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method with a Rankine source method. The present approach employs a simple double-integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident-wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented. Comparisons have been made between the results of the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data, and good agreement has been observed for all studied cases.
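Generically, a "double integration algorithm with respect to time" means acceleration → velocity → displacement at every step. A single-degree-of-freedom sketch with hypothetical mass, damping and restoring coefficients (a stand-in for one motion mode, not the paper's free-surface algorithm):

```python
import numpy as np

def integrate_motion(mass, damping, restoring, force, dt, n_steps):
    """Semi-implicit (symplectic) Euler double integration of a
    single-DOF motion equation m*x'' + b*x' + c*x = F(t)."""
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        a = (force(k * dt) - damping * v - restoring * x) / mass
        v += a * dt            # first time integration: velocity
        x += v * dt            # second time integration: displacement
        xs[k] = x
    return xs

# step response of a damped oscillator settles at F/c = 0.25
x_hist = integrate_motion(1.0, 2.0, 4.0, lambda t: 1.0, 0.01, 5000)
```

Updating velocity before displacement keeps the explicit scheme stable for damped oscillatory motion at reasonable time steps.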
International Nuclear Information System (INIS)
Kuperman, Alon; Aharon, Ilan; Kara, Avi; Malki, Shalev
2011-01-01
Highlights: → Passive battery-ultracapacitor hybrids are examined. → Frequency-domain analysis is employed. → The ultracapacitor branch operates as a low-pass filter for the battery. → The battery supplies the average load demand. → Design requirements are discussed. - Abstract: A Fourier-based analysis of passive battery-ultracapacitor hybrid sources is introduced in this manuscript. The approach is first presented for a general load and then applied to the case of a periodic pulsed-current load. It is shown that the ultracapacitor branch is perceived by the battery as a low-pass filter, which absorbs the majority of the high-frequency harmonic current, letting the battery supply the average load demand plus a small part of the dynamic current. The influence of design requirements on the choice of ultracapacitor capacitance and internal resistance is quantitatively discussed. The theory is supported by simulation and experimental results, which show excellent agreement.
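The low-pass behavior can be seen directly from a per-harmonic current divider between the two branches; a sketch with illustrative component values (not taken from the paper):

```python
import numpy as np

def battery_current_fraction(freqs_hz, r_batt, r_esr, cap):
    """Fraction of each load-current harmonic carried by the battery in
    a passive battery-ultracapacitor hybrid: a current divider between
    the battery branch (resistance r_batt) and the ultracapacitor
    branch (r_esr in series with capacitance cap)."""
    omega = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z_uc = r_esr + 1.0 / (1j * omega * cap)   # infinite at DC
    frac = z_uc / (z_uc + r_batt)
    return np.where(omega == 0.0, 1.0, frac)      # battery takes all DC

# DC, 1 Hz and 1 kHz harmonics with assumed component values
frac = battery_current_fraction([0.0, 1.0, 1000.0],
                                r_batt=0.05, r_esr=0.005, cap=100.0)
```

At DC the capacitor blocks, so the battery carries the full average demand; at high frequency the battery share collapses toward r_esr/(r_esr + r_batt).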
Impact of Antenna Placement on Frequency Domain Adaptive Antenna Array in Hybrid FRF Cellular System
Directory of Open Access Journals (Sweden)
Sri Maldia Hari Asti
2012-01-01
Full Text Available Frequency domain adaptive antenna array (FDAAA is an effective method to suppress interference caused by frequency selective fading and multiple-access interference (MAI in single-carrier (SC transmission. However, the performance of FDAAA receiver will be affected by the antenna placement parameters such as antenna separation and spread of angle of arrival (AOA. On the other hand, hybrid frequency reuse can be adopted in cellular system to improve the cellular capacity. However, optimal frequency reuse factor (FRF depends on the channel propagation and transceiver scheme as well. In this paper, we analyze the impact of antenna separation and AOA spread on FDAAA receiver and optimize the cellular capacity by using hybrid FRF.
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose them into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
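A serial, one-dimensional sketch of the same implicit approach: one backward-Euler step of a Cahn-Hilliard equation solved with SciPy's Jacobian-free Newton-Krylov solver. Grid size, time step and ε are illustrative, and both the Cook noise term and the overlapping Schwarz preconditioner of the paper's parallel solver are omitted:

```python
import numpy as np
from scipy.optimize import newton_krylov

def lap(u, dx):
    """Periodic 1-D Laplacian by second-order central differences."""
    return (np.roll(u, 1) - 2.0*u + np.roll(u, -1)) / dx**2

def chc_step(u_old, dt, dx, eps=0.05):
    """One backward-Euler step of the 1-D Cahn-Hilliard equation
    u_t = lap(u^3 - u - eps^2*lap(u)), solved with a Jacobian-free
    Newton-Krylov method -- the serial analogue of the parallel
    Newton-Krylov-Schwarz solver described above."""
    def residual(u):
        mu = u**3 - u - eps**2 * lap(u, dx)   # chemical potential
        return (u - u_old) / dt - lap(mu, dx)
    return newton_krylov(residual, u_old, f_tol=1e-8)

# spinodal decomposition from a small zero-mean random perturbation
n, dx, dt = 32, 1.0/32, 1e-4
u = 0.01 * np.random.default_rng(1).standard_normal(n)
u -= u.mean()                                  # mass is conserved
for _ in range(10):
    u = chc_step(u, dt, dx)
```

The periodic Laplacian sums to zero exactly, so the mean of `u` (the conserved order parameter) drifts only by the solver tolerance per step.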
International Nuclear Information System (INIS)
Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors
Caines, P. E.
1999-01-01
The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.
Strain-controlled magnetic domain wall propagation in hybrid piezoelectric/ferromagnetic structures.
Lei, Na; Devolder, Thibaut; Agnus, Guillaume; Aubert, Pascal; Daniel, Laurent; Kim, Joo-Von; Zhao, Weisheng; Trypiniotis, Theodossis; Cowburn, Russell P; Chappert, Claude; Ravelosona, Dafiné; Lecoeur, Philippe
2013-01-01
The control of magnetic order in nanoscale devices underpins many proposals for integrating spintronics concepts into conventional electronics. A key challenge lies in finding an energy-efficient means of control, as power dissipation remains an important factor limiting future miniaturization of integrated circuits. One promising approach involves magnetoelectric coupling in magnetostrictive/piezoelectric systems, where induced strains can bear directly on the magnetic anisotropy. While such processes have been demonstrated in several multiferroic heterostructures, the incorporation of such complex materials into practical geometries has been lacking. Here we demonstrate the possibility of generating sizeable anisotropy changes, through induced strains driven by applied electric fields, in hybrid piezoelectric/spin-valve nanowires. By combining magneto-optical Kerr effect and magnetoresistance measurements, we show that domain wall propagation fields can be doubled under locally applied strains. These results highlight the prospect of constructing low-power domain wall gates for magnetic logic devices.
Yu, Yang; Zhao, Zhigang; Shi, Yanrong; Tian, Hua; Liu, Linglong; Bian, Xiaofeng; Xu, Yang; Zheng, Xiaoming; Gan, Lu; Shen, Yumin; Wang, Chaolong; Yu, Xiaowen; Wang, Chunming; Zhang, Xin; Guo, Xiuping; Wang, Jiulin; Ikehashi, Hiroshi; Jiang, Ling; Wan, Jianmin
2016-07-01
Intersubspecific hybrid sterility is a common form of reproductive isolation in rice (Oryza sativa L.), which significantly hampers the utilization of heterosis between indica and japonica varieties. Here, we elucidated the mechanism of S7, which specifically causes Aus-japonica/indica hybrid female sterility, through cytological and genetic analysis, map-based cloning, and transformation experiments. Abnormal positioning of polar nuclei and a smaller embryo sac were observed in F1 compared with the male and female parents. Female gametes carrying S7(cp) and S7(i) were aborted in S7(ai)/S7(cp) and S7(ai)/S7(i), respectively, whereas they were normal in both N22 and Dular, which possess a neutral allele, S7(n). S7 was fine-mapped to a 139-kb region in the centromere region of chromosome 7, where recombination is remarkably suppressed owing to an aggregation of retrotransposons. Among 16 putative open reading frames (ORFs) localized in the mapping region, ORF3, encoding a tetratricopeptide repeat domain-containing protein, was highly expressed in the pistil. Transformation experiments demonstrated that ORF3 is the candidate gene: downregulated expression of ORF3 restored spikelet fertility and eliminated the absolutely preferential transmission of S7(ai) in the heterozygote S7(ai)/S7(cp); sterility occurred in the transformants Cpslo17-S7(ai). Our results may provide implications for overcoming hybrid embryo sac sterility in intersubspecific hybrid rice and for the utilization of hybrid heterosis in cultivated rice improvement. Copyright © 2016 by the Genetics Society of America.
Cameron, Delroy; Sheth, Amit P; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A
2014-12-01
While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and "intelligible constructs" not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving
Zhang, Yu; Pan, Peng; Gong, Runhua; Wang, Tao; Xue, Weichen
2017-10-01
An online hybrid test was carried out on a 40-story, 120-m-high concrete shear wall structure. The structure was divided into two substructures, whereby a physical model of the bottom three stories was tested in the laboratory and the upper 37 stories were simulated numerically using ABAQUS. An overlapping domain method was employed for the bottom three stories to ensure the validity of the boundary conditions of the superstructure. Mixed control was adopted in the test: displacement control was used to apply the horizontal displacement, while two force-controlled actuators were applied to simulate the overturning moment, which is very large and cannot be ignored in substructure hybrid tests of high-rise buildings. A series of tests with earthquake sources of sequentially increasing intensities was carried out. The test results indicate that the proposed hybrid test method can reproduce the seismic response of high-rise concrete shear wall buildings. The seismic performance of the tested precast high-rise building satisfies the requirements of the Chinese seismic design code.
Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei
2018-05-21
Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.
Directory of Open Access Journals (Sweden)
Daniel Marcsa
2015-01-01
Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, the external circuit and the rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed with respect to computational efficiency and the number of iterations required with each preconditioner. Simulation results for a rotating machine are also presented.
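The PCG iteration itself is compact; a sketch with a Jacobi preconditioner on a small SPD test system (in the paper's DDM setting, applying the preconditioner would instead combine local subdomain solves):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD systems A x = b.
    M_inv is a callable applying the preconditioner inverse to a
    residual vector (here Jacobi, i.e. division by diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                    # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x

# SPD test system: 1-D discrete Laplacian, Jacobi preconditioner
n = 100
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x_sol = pcg(A, b, lambda r: r / d)
```

Swapping `M_inv` for a block (subdomain-wise) solve is exactly the hook a domain-decomposition preconditioner uses.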
International Nuclear Information System (INIS)
Gorensek, M. B.; Summers, W. A.; Lahoda, E. J.; Bolthrunis, C. O.; Greyvenstein, R.
2008-01-01
The Hybrid Sulfur (HyS) Process is being developed to produce hydrogen by water-splitting using heat from advanced nuclear reactors. It has the potential for high efficiency and competitive hydrogen production cost, and has been demonstrated at laboratory scale. As a two-step process, the HyS is one of the simplest thermochemical cycles. The sulfuric acid decomposition reaction is common to all sulfur cycles, including the Sulfur-Iodine (SI) cycle. What distinguishes the HyS Process from the other sulfur cycles is the use of sulfur dioxide (SO2) to depolarize the anode of a water electrolyzer. The two critical HyS Process components are the SO2-depolarized electrolyzer (SDE) and the high-temperature decomposition reactor. A proton exchange membrane (PEM)-type SDE and a silicon carbide bayonet-type high-temperature decomposition reactor are being developed for DOE's Nuclear Hydrogen Initiative (NHI) by Savannah River National Laboratory (SRNL) and by Sandia National Laboratories (SNL), respectively. The ultimate goal of the NHI-sponsored work is to couple the SDE and the reactor in an integrated laboratory-scale experiment to prove the technical readiness of the HyS cycle for the NGNP demonstration. This paper describes the flowsheet that is being prepared to combine these two components into a viable process and presents the latest performance projections and economics for a HyS Process coupled to a PBMR heat source. The basic flowsheet for this process has been described elsewhere [4]. It requires an acid concentration section because the SDE product, which is limited to no more than 50% H2SO4 by cell-voltage considerations, is too dilute to be fed directly to the bayonet reactor, which needs at least 65% H2SO4 in the feed for acceptable performance. Optimization involved trade-offs between the heat requirements of the decomposition reaction and of acid concentration. The PBMR heat source can split its heat output between the decomposition reaction and either steam
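The two steps of the HyS cycle referred to above can be written schematically as follows (the temperature descriptions are typical literature characterizations, not figures from this paper):

```latex
\begin{align*}
\mathrm{SO_2} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4} + \mathrm{H_2}
  &&\text{(SO$_2$-depolarized electrolysis, low temperature)}\\
\mathrm{H_2SO_4} &\longrightarrow \mathrm{H_2O} + \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2}
  &&\text{(high-temperature thermal decomposition)}
\end{align*}
```

The net effect is water splitting, H2O → H2 + ½O2, with the sulfur species recycled internally between the electrolyzer and the decomposition reactor.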
Directory of Open Access Journals (Sweden)
A. Becker
2007-06-01
Full Text Available In this paper a hybrid method combining the Time-Domain Method of Moments (TD-MoM), the Time-Domain Uniform Theory of Diffraction (TD-UTD) and the Finite-Difference Time-Domain (FDTD) method is presented. When applying this hybrid method, thin-wire antennas are modeled with the TD-MoM, inhomogeneous bodies are modeled with the FDTD and large perfectly conducting plates are modeled with the TD-UTD. All inhomogeneous bodies are enclosed in a so-called FDTD-volume, and the thin-wire antennas can be embedded in this volume or lie outside it. The latter avoids simulating the white space between antennas and inhomogeneous bodies. If the antennas are positioned inside the FDTD-volume, their discretization does not need to agree with the FDTD grid. By using the TD-UTD, large perfectly conducting plates can be treated efficiently in the solution procedure. This hybrid method thus allows time-domain simulations of problems including very different classes of objects, applying the most appropriate numerical technique to each object.
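Of the three techniques combined above, the FDTD ingredient is the easiest to sketch; a bare-bones 1-D vacuum Yee update with an additive Gaussian source (normalized units, Courant number 1; the TD-MoM wire and TD-UTD plate couplings are well beyond this illustration):

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=500):
    """Bare-bones 1-D FDTD (Yee) update in vacuum: staggered Ez/Hy
    grids, normalized units with c*dt = dx, PEC ends, and a Gaussian
    pulse added at one cell."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    for t in range(n_steps):
        hy += np.diff(ez)                          # H update from curl E
        ez[1:-1] += np.diff(hy)                    # E update from curl H
        ez[50] += np.exp(-((t - 40.0) / 12.0)**2)  # additive source
    return ez

ez = fdtd_1d()
```

The "magic" time step c·dt = dx makes the 1-D scheme dispersionless, which is why pulses propagate exactly one cell per step in this sketch.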
International Nuclear Information System (INIS)
Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L
2015-01-01
Purpose: To significantly improve dual-energy CT (DECT) imaging by establishing a new framework for image-domain material decomposition that incorporates edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising within the HYPR-LR framework depends on the noise level of the composite image, which for DECT is the average of the high- and low-energy images. To further reduce noise, one may increase the window size of the HYPR-LR filter, but this leads to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces material-decomposition noise by exploiting energy-information redundancy as well as non-local means. We demonstrate the noise reduction and resolution preservation of the algorithm on both an iodine-concentration numerical phantom and clinical patient data by comparing HYPR-NLM to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurements and resolution. For the iodine-concentration numerical phantom, the average noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is seen for HYPR-NLM, suggesting that spatial resolution is well preserved. Conclusion: HYPR
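The direct-matrix-inversion baseline that all the compared methods improve upon is a pixel-wise 2×2 solve; a sketch with an assumed (illustrative, not calibrated) mixing matrix, showing the characteristic noise amplification:

```python
import numpy as np

# Assumed 2x2 mixing matrix: attenuation of unit amounts of water and
# iodine at the low and high tube energies (illustrative numbers only).
A = np.array([[1.00, 30.0],    # low-energy image:  water, iodine
              [1.00, 15.0]])   # high-energy image: water, iodine
A_inv = np.linalg.inv(A)

def direct_decompose(img_low, img_high):
    """Pixel-wise direct matrix inversion into (water, iodine) basis
    images; the inversion amplifies uncorrelated image noise."""
    stack = np.stack([img_low, img_high])        # shape (2, H, W)
    out = np.tensordot(A_inv, stack, axes=1)     # shape (2, H, W)
    return out[0], out[1]

# synthetic noisy measurements of a uniform object
rng = np.random.default_rng(0)
true_water, true_iodine, sigma = 1.0, 0.002, 0.01
low = true_water + 30.0*true_iodine + sigma*rng.standard_normal((64, 64))
high = true_water + 15.0*true_iodine + sigma*rng.standard_normal((64, 64))
water, iodine = direct_decompose(low, high)
```

The decomposed means stay accurate, but the water-image noise exceeds the measurement noise — the amplification that HYPR-LR, Iter-DECT and HYPR-NLM each attack differently.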
A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.
Xue, Xiaoming; Zhou, Jianzhong
2017-01-01
To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis with artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps: fault preliminary detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, two subsequent processes based on the artificial intelligence approach are performed to recognize the fault type and then identify the fault degree. For these two steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, because of information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each are employed to evaluate the performance of the proposed method, using vibration signals measured on an experimental rolling element bearing test bench. The analysis results show the effectiveness and superiority of the proposed method, whose diagnostic strategy is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
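The permutation-entropy statistic used for preliminary detection is straightforward to compute; a minimal sketch (the order and delay values are illustrative):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: the Shannon
    entropy of the distribution of ordinal patterns of length `order`,
    scaled to [0, 1] (near 0 = fully regular, near 1 = fully random)."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order*delay:delay])
                         for i in range(n)])
    codes = patterns @ (order ** np.arange(order))  # one int per pattern
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

t = np.linspace(0.0, 10.0, 2000)
pe_sine = permutation_entropy(np.sin(2*np.pi*t))      # regular signal
pe_noise = permutation_entropy(
    np.random.default_rng(0).standard_normal(2000))   # random signal
```

A healthy bearing produces a fairly regular vibration signature (low entropy); impulsive faults push the ordinal-pattern distribution toward uniformity (high entropy), which is the thresholding idea behind the first step.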
International Nuclear Information System (INIS)
Umegaki, Kikuo; Miki, Kazuyoshi
1990-01-01
A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. A modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in semicircular curved, 90° elbow and T-shaped branched pipes. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10∼1/20 of that with a scalar processor. (author)
Robu, Adrian C; Popescu, Laurentiu; Munteanu, Cristian V A; Seidler, Daniela G; Zamfir, Alina D
2015-09-15
In the central nervous system, chondroitin/dermatan sulfate (CS/DS) glycosaminoglycans (GAGs) modulate neurotrophic effects and glial cell maturation during brain development. Previous reports revealed that GAG composition could be responsible for CS/DS activities in brain. In this work, for the structural characterization of DS- and CS-rich domains in hybrid GAG chains extracted from neural tissue, we have developed an advanced approach based on high-resolution mass spectrometry (MS) using nanoelectrospray ionization Orbitrap in the negative ion mode. Our high-resolution MS and multistage MS approach was developed and applied to hexasaccharides obtained from 4- and 14-week-old mouse brains by GAG digestion with chondroitin B and in parallel with AC I lyase. The expression of DS- and CS-rich domains in the two tissues was assessed comparatively. The analyses indicated an age-related structural variability of the CS/DS motifs. The older brain was found to contain more structures and a higher sulfation of DS-rich regions, whereas the younger brain was found to be characterized by a higher sulfation of CS-rich regions. By multistage MS using collision-induced dissociation, we also demonstrated the incidence in mouse brain of an atypical [4,5-Δ-GlcAGalNAc(IdoAGalNAc)2], presenting a bisulfated CS disaccharide formed by 3-O-sulfate-4,5-Δ-GlcA and 6-O-sulfate-GalNAc moieties. Copyright © 2015 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Lenain, Roland
2015-01-01
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions that properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from color-sets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis is designed to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on performance. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization for a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to the compute clusters available for engineering analyses. The proposed algorithm is a parallel multigroup block-Jacobi one. Each sub-domain is considered as a multi-group fixed-source problem with volume sources (fission) and surface sources (interface flux between the sub-domains). The multi-group problem is solved in each sub-domain and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to that of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way an ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared memory parallelism, optimizes the resources by avoiding redundant copies of the data shared between the sub-domains. Distributed memory architectures are made available by a hybrid parallel method that combines both paradigms of shared memory parallelism and distributed memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of
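The interface-data exchange of a block-Jacobi domain decomposition can be illustrated on a 1-D model problem. The sketch below solves -u'' = 1 on [0, 1] with two overlapping subdomains, each treated as a fixed-source problem with Dirichlet interface values taken from the previous sweep. It is a simplified elliptic analogue of the multigroup scheme, not the APOLLO3 implementation; all names and grid choices are assumptions.

```python
import numpy as np

def solve_subdomain(f, h, left_bc, right_bc):
    """Direct solve of -u'' = f on one subdomain's interior nodes,
    with Dirichlet values supplied by the neighbouring subdomain."""
    m = len(f)
    A = (2.0 * np.eye(m) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)                      # -u'' = 1, u(0) = u(1) = 0
u = np.zeros(N)                     # global iterate

i2, j1 = 31, 70                     # overlapping subdomain interiors
for _ in range(100):                # Jacobi sweeps: both solves use u_old
    u_old = u.copy()
    u1 = solve_subdomain(f[1:j1], h, 0.0, u_old[j1])
    u2 = solve_subdomain(f[i2:N-1], h, u_old[i2 - 1], 0.0)
    u[1:j1] = u1
    u[i2:N-1] = u2                  # overlap region keeps subdomain 2's values

exact = x * (1.0 - x) / 2.0         # -u'' = 1 with homogeneous Dirichlet BCs
err = np.abs(u - exact).max()
```

Because both subdomain solves read only the previous iterate, they are independent and can run in parallel, with a single interface-value exchange per sweep, which is the structural point the thesis exploits at the transport level.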
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axes of elliptical domains changes with time; these subsequently develop homothetically and the rate ratio is then only pressure dependent. Temperature influence is inferred from that of pressure. Polygonal shapes are time dependent and result in ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement and the elliptical shape by that of the solid tensorial properties. The influence of those factors might be modulated versus pressure, temperature, interface extent, and, thus, time.
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
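The radial Chebyshev discretization rests on the standard Chebyshev differentiation matrix; a minimal construction (following Trefethen's "Spectral Methods in MATLAB", not the authors' poro-elastic solver) is:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and grid x_j = cos(pi j / n),
    j = 0..n. D @ v approximates dv/dx at the Chebyshev points and is
    exact for polynomials of degree <= n."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))     # negative-sum trick for the diagonal
    return D, x
```

In a polar-coordinate solver of the kind described above, one such matrix acts in the radial direction of each ring while an FFT handles the azimuthal Fourier expansion; the clustering of Chebyshev points near the ring boundaries is what makes the interface matching accurate.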
Mu, Yi; Cai, Pengfei; Hu, Siqi; Ma, Sucan; Gao, Youhe
2014-01-01
Protein-protein interactions (PPIs) are essential events that play important roles in a wide range of biological processes. There are probably more modes of PPI than we currently realize. Structural and functional investigations of weak PPIs have lagged behind those of strong PPIs due to technical difficulties. Weak PPIs are often short-lived, which may result in more dynamic signals with important biological roles within and/or between cells. For example, the characteristics of PSD-95/Dlg/ZO-1 (PDZ) domain binding to internal sequences, which are primarily weak interactions, have not yet been systematically explored. In the present study, we constructed a nearly random octapeptide yeast two-hybrid library. A total of 24 PDZ domains were used as baits for screening the library. Fourteen of these domains were able to bind internal PDZ-domain binding motifs (PBMs), and the PBMs screened for nine PDZ domains exhibited strong preferences. Among 11 PDZ domains whose internal PBM-binding ability had not previously been reported, six were confirmed to bind internal PBMs. The first PDZ domain of LNX2, which has not been reported to bind C-terminal PBMs, was found to bind internal PBMs. These results suggest that the internal PBM-binding ability of PDZ domains may have been underestimated. The data provide diverse internal binding properties for several PDZ domains that may help identify their novel binding partners.
International Nuclear Information System (INIS)
Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel
2011-01-01
A new development for the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinate neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with the Coarse-Mesh Finite Differences (CMFD). The latter is used for accelerating the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor is presented for the analysis of the performance of the method. The results show that the application of the CMFD to the DD can be a good candidate for large 3D full-core parallel applications. (author)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
This paper develops efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, in which the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, yet direct methods (e.g. finite differences) are too computationally expensive. This belongs to the category of middle-frequency wave propagation, where neither asymptotic nor direct methods can be applied directly with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation (SSWR) methods, which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
Malagón-Romero, A.; Luque, A.
2018-04-01
At high pressure, electric discharges typically grow as thin, elongated filaments. In a numerical simulation this large aspect ratio should ideally translate into a narrow, cylindrical computational domain that envelops the discharge as closely as possible. However, the development of the discharge is driven by electrostatic interactions and, if the computational domain is not wide enough, the boundary conditions imposed on the electrostatic potential at the external boundary have a strong effect on the discharge. Most numerical codes circumvent this problem either by using a wide computational domain or by calculating the boundary conditions by integrating the Green's function of an infinite domain. Here we describe an accurate and efficient method to impose free boundary conditions in the radial direction for an elongated electric discharge. To facilitate the use of our method we provide a sample implementation. Finally, we apply the method to solve Poisson's equation in cylindrical coordinates with free boundary conditions in both radial and longitudinal directions. This case is of particular interest for the initial stages of discharges in long gaps or natural discharges in the atmosphere, where it is not practical to extend the simulation volume to be bounded by two electrodes.
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programs, with a case study of multi-period animal diet problems under uncertainty in both raw-material nutrients and finished-product demand. The model allows multiple product formulas to be manufactured in the same time period and allows raw-material and finished-product inventories to be held. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques are combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the procedure in terms of efficiency and effectiveness trade-offs.
Karlova, R.B.; Weemen, W.M.J.; Naimov, S.; Ceron, J.; Dukiandjiev, S.; Maagd, de R.A.
2005-01-01
We investigated the role of domain III of Bacillus thuringiensis δ-endotoxin Cry1Ac in determining toxicity against Heliothis virescens. Hybrid toxins, containing domain III of Cry1Ac with domains I and II of Cry1Ba, Cry1Ca, Cry1Da, Cry1Ea, and Cry1Fb, respectively, were created. In this way Cry1Ca,
Mao, Lingai; Chen, Zhizong; Wu, Xinyue; Tang, Xiujuan; Yao, Shuiliang; Zhang, Xuming; Jiang, Boqiong; Han, Jingyi; Wu, Zuliang; Lu, Hao; Nozaki, Tomohiro
2018-04-05
A dielectric barrier discharge (DBD) catalyst hybrid reactor with CeO2/γ-Al2O3 catalyst balls was investigated for benzene decomposition at atmospheric pressure and 30 °C. At an energy density of 37-40 J/L, benzene decomposition was as high as 92.5% when using the hybrid reactor with 5.0 wt% CeO2/γ-Al2O3, whereas it was 10%-20% when using a normal DBD reactor without a catalyst. Benzene decomposition using the hybrid reactor was almost the same as that using an O3 catalyst reactor with the same CeO2/γ-Al2O3 catalyst, indicating that O3 plays a key role in the benzene decomposition. Fourier transform infrared spectroscopy analysis showed that O3 adsorption on CeO2/γ-Al2O3 promotes the production of adsorbed O2− and O22−, which contribute to benzene decomposition over heterogeneous catalysts. Nanoparticle by-products (phenol and 1,4-benzoquinone) from benzene decomposition can be significantly reduced using the CeO2/γ-Al2O3 catalyst. H2O inhibits benzene decomposition; however, it improves CO2 selectivity. The deactivated CeO2/γ-Al2O3 catalyst can be regenerated by performing discharges at 100 °C and 192-204 J/L. A decomposition mechanism of benzene over the CeO2/γ-Al2O3 catalyst is proposed. Copyright © 2017 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Wang Shumin; Duyn, Jeff H
2008-01-01
A hybrid method that combines the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method is presented for simulating radio-frequency (RF) coils in magnetic resonance imaging. This method applies a high-fidelity FETD method to RF coils, while the human body is modeled with a low-cost FDTD method. Since the FDTD and the FETD methods are applied simultaneously, the dynamic interaction between RF coils and the human body is fully accounted for. In order to simplify the treatment of the highly irregular FDTD/FETD interface, composite elements are proposed. Two examples are provided to demonstrate the validity and effectiveness of the hybrid method in high-field receive-and-transmit coil design. This approach is also applicable to general bio-electromagnetic simulations.
International Nuclear Information System (INIS)
Tsuji, Masashi; Chiba, Gou
2000-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed memory message passing parallel computers. Data exchanges between node processors, repeated during the iteration processes of HDD-BEM, are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply with the increase of the number of processors. With the parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the deterioration problem of parallel efficiency and opens a new path to parallel computations of NDEs on distributed memory message passing parallel computers. (author)
DEFF Research Database (Denmark)
Jensen, Jonas Kjær; Ommen, Torben Schmidt; Markussen, Wiebke Brix
2015-01-01
The ammonia-water hybrid absorption-compression heat pump (HACHP) has been proposed as a relevant technology for industrial heat supply, especially for high sink temperatures and high temperature glides in the sink and source. This is due to the reduced vapour pressure and the non-isothermal phase change of the zeotropic mixture, ammonia-water. To evaluate to which extent these advantages can be translated into feasible heat pump solutions, the working domain of the HACHP is investigated based on technical and economic constraints. The HACHP working domain is compared to that of the best available vapour compression heat pump with natural working fluids. This shows that the HACHP increases the temperature lifts and heat supply temperatures that are feasible to produce with a heat pump. The HACHP is shown to be capable of delivering heat supply temperatures as high as 150 °C and temperature lifts
DEFF Research Database (Denmark)
Jensen, Jonas Kjær; Ommen, Torben Schmidt; Markussen, Wiebke Brix
2014-01-01
The ammonia-water hybrid absorption-compression heat pump (HACHP) is a relevant technology for industrial heat supply, especially for high sink temperatures and high temperature glides in the sink and source. This is due to the reduced vapour pressure and the non-isothermal phase change of the zeotropic mixture, ammonia-water. To evaluate to which extent these advantages can be translated into feasible heat pump solutions, the working domain of the HACHP is investigated based on technical and economic constraints. The HACHP working domain is compared to that of the best possible vapour compression heat pump with natural working fluids. This shows that the HACHP increases the temperature lifts and heat supply temperatures that are feasible to produce with a heat pump. The HACHP is shown to be capable of delivering heat supply temperatures as high as 140 °C and temperature lifts up to 60 K...
Gao, Bin; Huang, Fei-Ting; Wang, Yazhong; Kim, Jae-Wook; Wang, Lihai; Lim, Seong-Joon; Cheong, Sang-Wook
2017-05-01
Ca3Mn2O7 and Ca3Ti2O7 have been proposed as the prototypical hybrid improper ferroelectrics (HIFs), and a significant magnetoelectric (ME) coupling in magnetic Ca3Mn2O7 is, in fact, reported theoretically and experimentally. Although the switchability of polarization is confirmed in Ca3Ti2O7 and other non-magnetic HIFs, there is no report of switchable polarization in the isostructural Ca3Mn2O7. We constructed the phase diagram of Ca3Mn2-xTixO7 through our systematic study of a series of single crystalline Ca3Mn2-xTixO7 (x = 0, 0.1, 1, 1.5, and 2). Using transmission electron microscopy, we have unveiled the unique domain structure of Ca3Mn2O7: the high-density 90° stacking of a- and b-domains along the c-axis due to the phase transition through an intermediate Acca phase and the in-plane irregular wavy ferroelastic twin domains. The interrelation between domain structures and physical properties is unprecedented: the stacking along the c-axis prevents the switching of polarization and causes the irregular in-plane ferroelastic domain pattern. In addition, we have determined the magnetic phase diagram and found complex magnetism of Ca3Mn2O7 with isotropic canted moments. These results lead to negligible observable ME coupling in Ca3Mn2O7 and guide us to explore multiferroics with large ME coupling.
Chen, JianFeng; Takagi, Junichi; Xie, Can; Xiao, Tsan; Luo, Bing-Hao; Springer, Timothy A.
2004-01-01
We examined the effect of conformational change at the β7 I-like/hybrid domain interface on regulating the transition between rolling and firm adhesion by integrin α4β7. An N-glycosylation site was introduced into the I-like/hybrid domain interface to act as a wedge and to stabilize the open conformation of this interface and hence the open conformation of the α4β7 headpiece. Wild-type α4β7 mediates rolling adhesion in Ca2+ and Ca2+/Mg2+ but firm adhesion in Mg2+ and Mn2+. Stabilizing the ope...
Solving Problems in Various Domains by Hybrid Models of High Performance Computations
Directory of Open Access Journals (Sweden)
Yurii Rogozhin
2014-03-01
This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is supposed to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieving.
Zheng, Xiang; Yang, Chao; Cai, Xiaochuan; Keyes, David E.
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation
Pagan Munoz, R.; Hornikx, M.C.J.
The wave-based Fourier Pseudospectral time-domain (Fourier-PSTD) method was shown to be an effective way of modeling outdoor acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted features means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV
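The time-domain HRV features referred to above conventionally include SDNN, RMSSD and pNN50 computed from RR intervals; a minimal sketch using the standard definitions (not the authors' feature extractor; names and units are assumptions) is:

```python
import math

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from a list of RR intervals in ms.

    SDNN: standard deviation of all RR intervals.
    RMSSD: root mean square of successive RR differences.
    pNN50: percentage of successive differences exceeding 50 ms.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}
```

In the study above, features of this kind were combined with nonlinear-dynamics and EMD/DWT time-frequency features before discriminant classification; the key practical point is that all of them derive from the easily recorded ECG rather than the EEG.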
International Nuclear Information System (INIS)
Hu, Zi-Yu; Shao, Xiaohong; Wang, Da; Liu, Li-Min; Johnson, J. Karl
2014-01-01
First-principles calculations are performed to investigate the adsorption of hydrogen onto Li-decorated hybrid boron nitride and graphene domains of (BN)xC1−x complexes with x = 1, 0.25, 0.5, 0.75, 0, and B0.125C0.875. The most stable adsorption sites for the nth hydrogen molecule in the lithium-decorated (BN)xC1−x complexes are systematically discussed. The most stable adsorption sites are affected by charge localization, and the hydrogen molecules are favorably located above the C-C bonds beside the Li atom. The results show that the nitrogen atoms in the substrate planes could increase the hybridization between the 2p orbitals of Li and the orbitals of H2. The results also reveal that the (BN)xC1−x complexes not only have good thermal stability but also exhibit a high hydrogen storage capacity of 8.7% because of their dehydrogenation ability.
Directory of Open Access Journals (Sweden)
A. Becker
2003-01-01
In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT method, an alternative formulation of the FDTD. Homogeneous regions (which, in the presented numerical example, means the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a Mixed Potential Integral Formulation for the magnetic field, which consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous regions. The magnetic currents result directly from the electric fields of the Yee lattice. Electric currents in the same plane are calculated by making use of the TD-BIM, using the electric field of the Yee lattice as the boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.
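The FDTD half of such a hybrid scheme reduces, in 1-D vacuum with normalized units, to the classic Yee leapfrog update; at a Courant number of exactly 1 the scheme translates a pulse exactly one cell per step. This is an illustrative sketch of the Yee update only, not the paper's FIT/TD-BIM coupling; the grid sizes and pulse shape are assumptions.

```python
import numpy as np

# normalized 1-D Maxwell: dEz/dt = -dHy/dx, dHy/dt = -dEz/dx, c = dx = dt = 1
N, steps = 300, 100
x = np.arange(N, dtype=float)
Ez0 = np.exp(-((x - 40.0) / 5.0) ** 2)   # rightward Gaussian pulse
Ez = Ez0.copy()
Hy = Ez0[:-1].copy()                      # staggered (i + 1/2) initialization

for _ in range(steps):
    Ez[1:-1] -= Hy[1:] - Hy[:-1]          # E update from curl of H
    Hy -= Ez[1:] - Ez[:-1]                # H update from curl of E (leapfrog)
```

With this initialization the discrete solution is an exact right-translation of the initial pulse, a useful sanity check before adding the boundary-integral coupling on the outer interface.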
DEFF Research Database (Denmark)
Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin
2017-01-01
method which is unconditionally stable. We solve the diffusion equation for the electric field with a total-field formulation. The finite element system of equations is solved using the direct method. The solutions for the electric field at different times can be obtained using the effective time stepping method with trivial computation cost once the matrix is factorized. We try to keep the same time step size for a fixed number of steps using an adaptive time step doubling (ATSD) method. The finite element modeling domain is also truncated using a semi-adaptive method. We propose a new boundary condition based on approximating the total field on the modeling boundary using the primary field corresponding to a layered background model. We validate our algorithm using several synthetic model studies.
A hybrid method of estimating pulsating flow parameters in the space-time domain
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
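The method-of-characteristics core of such a simulation can be sketched for linear acoustics in a closed-closed pipe, where the Riemann invariants w± = u ± p/(ρc) are transported exactly one grid node per time step when dt = dx/c. This is an illustrative sketch of the characteristics update, not the author's Matlab/Simulink model; the pipe parameters and boundary conditions are assumptions.

```python
import numpy as np

N = 100
L, c, rho = 1.0, 340.0, 1.2
x = np.linspace(0.0, L, N + 1)
dx = x[1] - x[0]
dt = dx / c                         # characteristics land exactly on nodes

p0 = np.exp(-200.0 * (x - 0.5) ** 2)   # pressure pulse, fluid initially at rest
wp = p0 / (rho * c)                 # w+ = u + p/(rho*c), travels at +c
wm = -p0 / (rho * c)                # w- = u - p/(rho*c), travels at -c

for _ in range(2 * N):              # one acoustic round trip, t = 2L/c
    wp_new = np.empty_like(wp)
    wm_new = np.empty_like(wm)
    wp_new[1:] = wp[:-1]            # C+ characteristic: dx/dt = +c
    wm_new[:-1] = wm[1:]            # C- characteristic: dx/dt = -c
    wp_new[0] = -wm_new[0]          # closed end at x = 0: u = 0
    wm_new[-1] = -wp_new[-1]        # closed end at x = L: u = 0
    wp, wm = wp_new, wm_new

p = rho * c * (wp - wm) / 2.0       # recover pressure and velocity fields
u = (wp + wm) / 2.0
```

For a closed-closed pipe the continuous solution is periodic with period 2L/c, and because the scheme transports the invariants exactly, the discrete fields return to the initial state after one round trip, which makes this a convenient check before adding the measured boundary conditions of the test rig.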
Chen, JianFeng; Takagi, Junichi; Xie, Can; Xiao, Tsan; Luo, Bing-Hao; Springer, Timothy A
2004-12-31
We examined the effect of conformational change at the beta(7) I-like/hybrid domain interface on regulating the transition between rolling and firm adhesion by integrin alpha(4)beta(7). An N-glycosylation site was introduced into the I-like/hybrid domain interface to act as a wedge and to stabilize the open conformation of this interface and hence the open conformation of the alpha(4) beta(7) headpiece. Wild-type alpha(4)beta(7) mediates rolling adhesion in Ca(2+) and Ca(2+)/Mg(2+) but firm adhesion in Mg(2+) and Mn(2+). Stabilizing the open headpiece resulted in firm adhesion in all divalent cations. The interaction between metal binding sites in the I-like domain and the interface with the hybrid domain was examined in double mutants. Changes at these two sites can either counterbalance one another or be additive, emphasizing mutuality and the importance of multiple interfaces in integrin regulation. A double mutant with counterbalancing deactivating ligand-induced metal ion binding site (LIMBS) and activating wedge mutations could still be activated by Mn(2+), confirming the importance of the adjacent to metal ion-dependent adhesion site (ADMIDAS) in integrin activation by Mn(2+). Overall, the results demonstrate the importance of headpiece allostery in the conversion of rolling to firm adhesion.
Chen, JianFeng; Takagi, Junichi; Xie, Can; Xiao, Tsan; Luo, Bing-Hao; Springer, Timothy A.
2015-01-01
We examined the effect of conformational change at the β7 I-like/hybrid domain interface on regulating the transition between rolling and firm adhesion by integrin α4β7. An N-glycosylation site was introduced into the I-like/hybrid domain interface to act as a wedge and to stabilize the open conformation of this interface and hence the open conformation of the α4β7 headpiece. Wild-type α4β7 mediates rolling adhesion in Ca2+ and Ca2+/Mg2+ but firm adhesion in Mg2+ and Mn2+. Stabilizing the open headpiece resulted in firm adhesion in all divalent cations. The interaction between metal binding sites in the I-like domain and the interface with the hybrid domain was examined in double mutants. Changes at these two sites can either counterbalance one another or be additive, emphasizing mutuality and the importance of multiple interfaces in integrin regulation. A double mutant with counterbalancing deactivating ligand-induced metal ion binding site (LIMBS) and activating wedge mutations could still be activated by Mn2+, confirming the importance of the adjacent to metal ion-dependent adhesion site (ADMIDAS) in integrin activation by Mn2+. Overall, the results demonstrate the importance of headpiece allostery in the conversion of rolling to firm adhesion. PMID:15448154
Pan, Xiaoyong; Shen, Hong-Bin
2017-02-28
RNAs play key roles in cells through interactions with proteins known as RNA-binding proteins (RBPs), and their binding motifs provide crucial insight into the post-transcriptional regulation of RNAs. How RBPs correctly recognize their target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict RNA-protein binding sites from the rapidly growing multi-resource data, e.g. sequence and structure, their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the cross-source shared common knowledge is at a higher abstraction level beyond the observed data, resulting in a low efficiency of direct integration of observed data across domains. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worthy of further investigation. In view of these challenges, we propose a deep learning-based framework (iDeep) that uses a novel hybrid convolutional neural network and deep belief network to predict RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated. To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets, and our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor; and through cross-domain knowledge integration at an abstraction level, it outperforms the state-of-the-art predictors by 6%.
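A minimal sketch of the sequence branch of such a predictor: one-hot encode an RNA sequence and scan it with a 1-D convolutional motif filter. The filter weights and the motif "UGUA" are illustrative placeholders, not iDeep's trained parameters.

```python
import numpy as np

ALPHABET = "ACGU"

def one_hot(seq):
    """Encode an RNA string as a (len, 4) one-hot matrix."""
    idx = {c: i for i, c in enumerate(ALPHABET)}
    m = np.zeros((len(seq), 4))
    for j, c in enumerate(seq):
        m[j, idx[c]] = 1.0
    return m

def conv_scores(x, w):
    """Valid 1-D convolution: score every window of len(w) positions."""
    k = w.shape[0]
    return np.array([np.sum(x[i:i + k] * w) for i in range(len(x) - k + 1)])

# Hypothetical filter preferring the motif "UGUA"
w = np.zeros((4, 4))
for pos, base in enumerate("UGUA"):
    w[pos, ALPHABET.index(base)] = 1.0

seq = "ACGUGUAACG"
scores = conv_scores(one_hot(seq), w)
best = int(np.argmax(scores))
print(best, scores[best])   # window at index 3 matches "UGUA"
```

A real model would learn many such filters from CLIP-seq data and stack further layers on top; this only shows the windowed-scoring mechanics.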
Directory of Open Access Journals (Sweden)
Batakliev Todor
2014-06-01
Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
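For first-order kinetics, [O3](t) = C0·exp(−kt) and the half-life ln(2)/k is independent of the starting concentration. A minimal sketch; the rate constant and initial concentration below are illustrative values, not measurements from the review.

```python
import math

k = 0.25      # apparent first-order rate constant, 1/s (hypothetical)
C0 = 1.0e-3   # initial ozone concentration, mol/L (hypothetical)

def conc(t):
    """Ozone concentration at time t under first-order decay."""
    return C0 * math.exp(-k * t)

half_life = math.log(2) / k   # the same for any C0 — hallmark of first order
print(round(half_life, 3))
```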
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Long, Run; Prezhdo, Oleg V
2015-07-08
Hybrid organic/inorganic polymer/quantum dot (QD) solar cells are an attractive alternative to the traditional cells. The original, simple models postulate that one-dimensional polymers have continuous energy levels, while zero-dimensional QDs exhibit atom-like electronic structure. A realistic, atomistic viewpoint provides an alternative description. Electronic states in polymers are molecule-like: finite in size and discrete in energy. QDs are composed of many atoms and have high, bulk-like densities of states. We employ ab initio time-domain simulation to model the experimentally observed ultrafast photoinduced dynamics in a QD/polymer hybrid and show that an atomistic description is essential for understanding the time-resolved experimental data. Both electron and hole transfers across the interface exhibit subpicosecond time scales. The interfacial processes are fast due to strong electronic donor-acceptor coupling, as evidenced by the densities of the photoexcited states which are delocalized between the donor and the acceptor. The nonadiabatic charge-phonon coupling is also strong, especially in the polymer, resulting in rapid energy losses. The electron transfer from the polymer is notably faster than the hole transfer from the QD, due to a significantly higher density of acceptor states. The stronger molecule-like electronic and charge-phonon coupling in the polymer rationalizes why the electron-hole recombination inside the polymer is several orders of magnitude faster than in the QD. As a result, experiments exhibit multiple transfer times for the long-lived hole inside the QD, ranging from subpicoseconds to nanoseconds. In contrast, transfer of the short-lived electron inside the polymer does not occur beyond the first picosecond. The energy lost by the hole on its transit into the polymer is accommodated by polymer's high-frequency vibrations. The energy lost by the electron injected into the QD is accommodated primarily by much lower-frequency collective and
Directory of Open Access Journals (Sweden)
Wen Hwa Lee
Full Text Available Agonist-stimulated platelet activation triggers conformational changes of integrin αIIbβ3, allowing fibrinogen binding and platelet aggregation. We have previously shown that an octapeptide, p1YMESRADR8, corresponding to amino acids 313-320 of the β-ribbon extending from the β-propeller domain of αIIb, acts as a potent inhibitor of platelet aggregation. Here we have performed in silico modelling analysis of the interaction of this peptide with αIIbβ3 in its bent and closed (not swing-out) conformation and show that the peptide is able to act as a substitute for the β-ribbon by forming a clasp restraining the β3 hybrid and βI domains in a closed conformation. The involvement of species-specific residues of the β3 hybrid domain (E356 and K384) and the β1 domain (E297) as well as an intrapeptide bond (pE315-pR317) were confirmed as important for this interaction by mutagenesis studies of αIIbβ3 expressed in CHO cells and native or substituted peptide inhibitory studies on platelet functions. Furthermore, NMR data corroborate the above results. Our findings provide insight into the important functional role of the αIIb β-ribbon in preventing integrin αIIbβ3 headpiece opening, and highlight a potential new therapeutic approach to prevent integrin ligand binding.
International Nuclear Information System (INIS)
Rios, Xabier; Moriones, Paula; Echeverría, Jesús C.; Luquin, Asunción; Laguna, Mariano; Garrido, Julián J.
2013-01-01
Hybrid silica xerogels favourably combine the properties of organic and inorganic components in one material; consequently these materials are useful for multiple applications. The versatility and mild synthetic conditions provided by the sol-gel process are ideal for the synthesis of hybrid materials. The specific aims of this study were to synthesise hybrid xerogels in acidic media using tetraethoxysilane (TEOS) and ethyltriethoxysilane (ETEOS) as silica precursors, and to assess the role of the ethyl group as a matrix modifier and inducer of ordered domains in xerogels. All xerogels were synthesised at pH 4.5, at 60 °C, with a 1:4.75:5.5 TEOS:EtOH:H2O molar ratio. Gelation time increased exponentially with the ETEOS molar ratio. Incorporation of the ethyl groups into the structure of the xerogels reduced cross-linking, increased the average siloxane bond length, and promoted the formation of ordered domains. As a result, a transition from Qn to Tn signals was detected in the 29Si NMR spectra, the Si–O structural band in the FTIR spectra shifted to lower wavelength, and a new peak appeared in the XRD patterns at 2θ < 10°. Mass spectrometry detected fragments with high numbers of silicon atoms and a polymeric distribution. - Highlights: • Hybrid xerogels were synthesised for ETEOS/TEOS mixtures up to 80% ETEOS. • The gelation time increased exponentially with ETEOS content. • FTIR, XRD and MAS NMR demonstrated the presence of ethyl groups in the xerogels. • For ETEOS contents ≤30%, the ethyl group acted as a matrix modifier. • For ETEOS contents ≥30%, ethyl groups induced the formation of ordered domains.
Xiao, Manqiu; Dong, Shanshan; Li, Zhaolei; Tang, Xu; Chen, Yi; Yang, Shengmao; Wu, Chunyan; Ouyang, Dongxin; Fang, Changming; Song, Zhiping
2015-12-01
Rice is the staple diet of over half of the world's population and Bacillus thuringiensis (Bt) rice expressing insecticidal Cry proteins is ready for deployment. An assessment of the potential impact of Bt rice on the soil ecosystem under varied field management practices is urgently required. We used litter bags to assess the residue (leaves, stems and roots) decomposition dynamics of two transgenic rice lines (Kefeng6 and Kefeng8) containing stacked genes from Bt and sck (a modified CpTI gene encoding a cowpea trypsin inhibitor) (Bt/CpTI), a non-transgenic rice near-isoline (Minghui86), wild rice (Oryza rufipogon) and crop-wild Bt rice hybrid under contrasting conditions (drainage or continuous flooding) in the field. No significant difference was detected in the remaining mass, total C and total N among cultivars under aerobic conditions, whereas significant differences in the remaining mass and total C were detected between Kefeng6 and Kefeng8 and Minghui86 under the flooded condition. A higher decomposition rate constant (km) was measured under the flooded condition compared with the aerobic condition for leaf residues, whereas the reverse was observed for root residues. The enzyme-linked immunosorbent assay (ELISA), which was used to monitor the changes in the Cry1Ac protein in Bt rice residues, indicated that (1) the degradation of the Cry1Ac protein under both conditions best fit first-order kinetics, and the predicted DT50 (50% degradation time) of the Cry1Ac protein ranged from 3.6 to 32.5 days; (2) the Cry1Ac protein in the residue degraded relatively faster under aerobic conditions; and (3) by the end of the study (~154 days), the protein was present at a low concentration in the remaining residues under both conditions. The degradation rate constant was negatively correlated with the initial carbon content and positively correlated with the initial Cry1Ac protein concentration, but it was only correlated with the mass decomposition rate constants under
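The DT50 values quoted above follow directly from first-order kinetics: fit the rate constant k from residue measurements, then DT50 = ln(2)/k. A minimal sketch; the two residue concentrations below are made-up numbers, not data from the study.

```python
import math

c0, c1 = 100.0, 40.0   # Cry1Ac in residue at day 0 and day t1 (hypothetical, ng/g)
t1 = 10.0              # days between the two measurements

k = math.log(c0 / c1) / t1    # first-order rate constant, 1/day
dt50 = math.log(2) / k        # 50% degradation time, days
print(round(dt50, 2))
```

With these illustrative inputs the predicted DT50 lands inside the 3.6 to 32.5 day range reported in the abstract.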
International Nuclear Information System (INIS)
Sakko, Arto; Rossi, Tuomas P; Nieminen, Risto M
2014-01-01
The presence of plasmonic material influences the optical properties of nearby molecules in nontrivial ways due to the dynamical plasmon-molecule coupling. We combine quantum and classical calculation schemes to study this phenomenon in a hybrid system that consists of a Na2 molecule located in the gap between two Au/Ag nanoparticles. The molecule is treated quantum-mechanically with time-dependent density-functional theory, and the nanoparticles with quasistatic classical electrodynamics. The nanoparticle dimer has a plasmon resonance in the visible part of the electromagnetic spectrum, and the Na2 molecule has an electron-hole excitation in the same energy range. Due to the dynamical interaction of the two subsystems the plasmon and the molecular excitations couple, creating a hybridized molecular-plasmon excited state. This state has unique properties that yield e.g. enhanced photoabsorption compared to the freestanding Na2 molecule. The computational approach used enables decoupling of the mutual plasmon-molecule interaction, and our analysis verifies that it is not legitimate to neglect the backcoupling effect when describing the dynamical interaction between plasmonic material and nearby molecules. Time-resolved analysis shows nearly instantaneous formation of the coupled state, and provides an intuitive picture of the underlying physics. (paper)
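The hybridization picture can be caricatured with a two-level model: a plasmon excitation and a molecular excitation coupled by g, diagonalized to give the hybridized states. All energies and the coupling below are illustrative, not values from the paper.

```python
import numpy as np

E_plasmon, E_molecule, g = 2.1, 2.0, 0.05   # eV (hypothetical)
H = np.array([[E_plasmon, g],
              [g, E_molecule]])
evals, evecs = np.linalg.eigh(H)            # hybridized energies and states
splitting = evals[1] - evals[0]             # exceeds the bare detuning
print(evals, splitting)
```

Setting g = 0 recovers the uncoupled energies, mimicking a calculation that neglects the mutual plasmon-molecule backcoupling the abstract warns against.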
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas; Osher, Stanley; Schö nlieb, Carola-Bibiane
2012-01-01
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems
Directory of Open Access Journals (Sweden)
Andranik Tsakanian
2012-05-01
Full Text Available In particle accelerators a preferred direction, the direction of motion, is well defined. If in a numerical calculation the (numerical) dispersion in this direction is suppressed, a quite coarse mesh and moderate computational resources can be used to reach accurate results even for extremely short electron bunches. Several approaches have been proposed in the past decades to reduce the accumulated dispersion error in wakefield calculations for perfectly conducting structures. In this paper we extend the TE/TM splitting algorithm to a new hybrid scheme that allows for wakefield calculations in structures with walls of finite conductivity. The conductive boundary is modeled by one-dimensional wires connected to each boundary cell. A good agreement of the numerical simulations with analytical results and other numerical approaches is obtained.
Urcuqui-Inchima, S; Walter, J; Drugeon, G; German-Retana, S; Haenni, A L; Candresse, T; Bernardi, F; Le Gall, O
1999-05-25
Using the yeast two-hybrid system, a screen was performed for possible interactions between the proteins encoded by the 5' region of potyviral genomes [P1, helper component-proteinase (HC-Pro), and P3]. A positive self-interaction involving HC-Pro was detected with lettuce mosaic virus (LMV) and potato virus Y (PVY). The possibility of heterologous interaction between the HC-Pro of LMV and of PVY was also demonstrated. No interaction involving either the P1 or the P3 proteins was detected. A series of ordered deletions from either the N- or C-terminal end of the LMV HC-Pro was used to map the domain involved in interaction to the 72 N-terminal amino acids of the protein, a region known to be dispensable for virus viability but necessary for aphid transmission. A similar but less detailed analysis mapped the interacting domain to the N-terminal half of the PVY HC-Pro. Copyright 1999 Academic Press.
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2015-01-01
Full Text Available We consider a relatively new and rapidly developing class of methods for solving a multi-objective optimization problem, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front, of this problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper of J.H. Ryu, S. Kim and H. Wan, is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions this work uses methods of the theory of experimental design, which involves calculating the values of these functions in the nodes of a grid covering the trust region (a sensing method of the search domain). Two groups of sensing methods are under consideration: hypercube- and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization tasks has been used to study the efficiency of the following grids: "Latin Hypercube"; a grid which is uniformly random for each dimension; and a grid based on LP sequences.
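The building block that AWS refines adaptively is plain weighted-sum scalarization: sweep a weight w, minimize w·f1 + (1−w)·f2, and collect the minimizers. A minimal sketch on an illustrative convex bi-objective problem (f1 = x², f2 = (x−2)², not from the paper).

```python
import numpy as np

xs = np.linspace(-1.0, 3.0, 401)        # 1-D design grid
f1, f2 = xs**2, (xs - 2.0)**2           # two competing objectives

front = []
for w in np.linspace(0.0, 1.0, 21):     # sweep the scalarization weight
    i = int(np.argmin(w * f1 + (1.0 - w) * f2))
    front.append((float(f1[i]), float(f2[i])))
front = sorted(set(front))              # distinct Pareto-optimal points
print(front[0], front[-1])
```

A uniform weight sweep spaces the front points unevenly, and non-convex parts of a front are missed entirely; that shortcoming is what motivates the adaptive refinement in AWS.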
Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni
2015-04-01
We study turbulence generated by the Lower Hybrid Drift Instability (LHDI [1]) in the terrestrial magnetosphere. The problem is of interest not only per se, but also for the implications it can have for so-called turbulent reconnection. The LHDI evolution is simulated with the PIC Multi Level Multi Domain code Parsek2D-MLMD [2,3], which simulates different parts of the domain with different spatial and temporal resolutions. This makes it possible to satisfy, at a low computing cost, the two necessary requirements for LHDI turbulence simulations: 1) a large domain, to capture the high wavelength branch of the LHDI and of the secondary kink instability, and 2) high resolution, to cover the high wavenumber part of the power spectrum and to capture the wavenumber at which the turbulent cascade ends. The turbulent cascade proceeds seamlessly from the coarse (low resolution) to the refined (high resolution) grid, the only one resolved enough to capture its end, which is studied here and related to wave-particle interaction processes. We also comment upon the role of smoothing (a common technique used in PIC simulations to reduce particle noise [4]) in simulations of turbulence, and on how its effects on power spectra may easily be mistaken, in the absence of accurate convergence studies, for the end of the inertial range. [1] P. Gary, Theory of space plasma microinstabilities, Cambridge Atmospheric and Space Science Series, 2005. [2] M. E. Innocenti, G. Lapenta, S. Markidis, A. Beck, A. Vapirev, Journal of Computational Physics 238 (2013) 115 - 140. [3] M. E. Innocenti, A. Beck, T. Ponweiser, S. Markidis, G. Lapenta, Computer Physics Communications (accepted) (2014). [4] C. K. Birdsall, A. B. Langdon, Plasma physics via computer simulation, Taylor and Francis, 2004.
Liu, Lihong; Fang, Wei-Hai; Long, Run; Prezhdo, Oleg V
2018-03-01
Nonradiative electron-hole recombination plays a key role in determining photon conversion efficiencies in solar cells. Experiments demonstrate significant reduction in the recombination rate upon passivation of methylammonium lead iodide perovskite with Lewis base molecules. Using nonadiabatic molecular dynamics combined with time-domain density functional theory, we find that the nonradiative charge recombination is decelerated by an order of magnitude upon adsorption of the molecules. Thiophene acts by the traditional passivation mechanism, forcing electron density away from the surface. In contrast, pyridine localizes the electron at the surface while leaving it energetically near the conduction band edge. This is because pyridine creates a stronger coordinative bond with a lead atom of the perovskite and has a lower energy unoccupied orbital compared with thiophene due to the more electronegative nitrogen atom relative to thiophene's sulfur. Both molecules reduce two-fold the nonadiabatic coupling and electronic coherence time. A broad range of vibrational modes couple to the electronic subsystem, arising from inorganic and organic components. The simulations reveal the atomistic mechanisms underlying the enhancement of the excited-state lifetime achieved by the perovskite passivation, rationalize the experimental results, and advance our understanding of charge-phonon dynamics in perovskite solar cells.
A multi-domain spectral method for time-fractional differential equations
Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.
2015-07-01
This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
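The three-term recurrence of Jacobi polynomials mentioned above can be sketched directly. A minimal implementation of P_n^(a,b)(x); as a sanity check, a = b = 0 reduces to the Legendre polynomials.

```python
def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^(a,b) at x via the
    standard three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, (a - b) / 2.0 + (a + b + 2.0) / 2.0 * x
    for k in range(1, n):
        c = 2.0 * k + a + b
        num = ((c + 1.0) * ((c + 2.0) * c * x + a * a - b * b) * p
               - 2.0 * (k + a) * (k + b) * (c + 2.0) * p_prev)
        p_prev, p = p, num / (2.0 * (k + 1.0) * (k + a + b + 1.0) * c)
    return p

# Legendre special case: P_2(x) = (3x^2 - 1)/2
x = 0.3
print(jacobi(2, 0.0, 0.0, x), (3 * x * x - 1) / 2)
```

In the paper's setting these recurrences, combined with high-order Gauss quadrature, evaluate the singular history integrals; the sketch only shows the polynomial evaluation itself.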
Spinodal decomposition in fluid mixtures
International Nuclear Information System (INIS)
Kawasaki, Kyozi; Koga, Tsuyoshi
1993-01-01
We study the late stage dynamics of spinodal decomposition in binary fluids by computer simulation of the time-dependent Ginzburg-Landau equation. We obtain a temporally linear growth law of the characteristic length of domains in the late stage. This growth law has been observed in many real experiments on binary fluids and indicates that the domain growth proceeds by the flow caused by the surface tension of interfaces. We also find that the dynamical scaling law is satisfied in this hydrodynamic domain growth region. By comparing the scaling functions for fluids with that for the case without hydrodynamic effects, we find that the scaling functions for the two systems are different. (author)
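A stripped-down relative of such simulations is the conserved TDGL (Cahn-Hilliard) equation without the hydrodynamic flow term the abstract studies. The sketch below evolves small fluctuations on a periodic grid; grid size, time step and coefficients are illustrative stability choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((64, 64))   # small fluctuations around 0

def lap(f):
    """5-point periodic Laplacian, dx = 1."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

dt, kappa = 0.01, 1.0
mass0 = phi.sum()                            # order parameter is conserved
for _ in range(5000):
    mu = phi**3 - phi - kappa * lap(phi)     # chemical potential
    phi = phi + dt * lap(mu)                 # conserved (model B) update
print(float(phi.std()), float(abs(phi.sum() - mass0)))
```

The fluctuations coarsen into domains (the order-parameter variance grows toward saturation) while the total mass stays fixed; extracting the characteristic length from the structure factor would be the next step.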
Kraus, Jodi; Gupta, Rupal; Yehl, Jenna; Lu, Manman; Case, David A; Gronenborn, Angela M; Akke, Mikael; Polenova, Tatyana
2018-03-22
Magic angle spinning NMR spectroscopy is uniquely suited to probe the structure and dynamics of insoluble proteins and protein assemblies at atomic resolution, with NMR chemical shifts containing rich information about biomolecular structure. Access to this information, however, is problematic, since accurate quantum mechanical calculation of chemical shifts in proteins remains challenging, particularly for amide 15N. Here we report on isotropic chemical shift predictions for the carbohydrate recognition domain of microcrystalline galectin-3, obtained using hybrid quantum mechanics/molecular mechanics (QM/MM) calculations, implemented in an automated fragmentation approach, and using very high resolution (0.86 Å lactose-bound and 1.25 Å apo form) X-ray crystal structures. The resolution of the X-ray crystal structure used as an input to the AF-NMR program did not affect the accuracy of the chemical shift calculations to any significant extent. Excellent agreement between experimental and computed shifts is obtained for 13Cα, while larger scatter is observed for amide 15N chemical shifts, which are influenced to a greater extent by electrostatic interactions, hydrogen bonding, and solvation.
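The agreement statements above are typically quantified by an RMSD and a correlation coefficient between computed and experimental shifts. A minimal sketch; the shift values are made-up placeholders, not data from the paper.

```python
import numpy as np

exp_ca  = np.array([58.1, 62.4, 55.0, 60.3, 56.7])   # "experimental" 13Ca, ppm
calc_ca = np.array([58.4, 62.0, 55.5, 60.1, 57.2])   # "computed" 13Ca, ppm

def rmsd(a, b):
    """Root-mean-square deviation between two shift sets."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

r = float(np.corrcoef(exp_ca, calc_ca)[0, 1])        # Pearson correlation
print(round(rmsd(exp_ca, calc_ca), 3), round(r, 3))
```

On real data one would run the same comparison separately for 13Cα and amide 15N; the larger 15N scatter the abstract reports would show up as a larger RMSD and lower r.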
Guimarães, Wellinson G; Gondim, Ana C S; Costa, Pedro Mikael da Silva; Gilles-Gonzalez, Marie-Alda; Lopes, Luiz G F; Carepo, Marta S P; Sousa, Eduardo H S
2017-07-01
FixL from Rhizobium etli (ReFixL) is a hybrid oxygen sensor protein. Signal transduction in ReFixL is effected by a switch-off of the kinase activity on binding of an oxygen molecule to the ferrous heme iron in another domain. Cyanide can also inhibit the kinase activity upon binding to the heme iron in the ferric state. The unfolding by urea of purified full-length ReFixL in both the active pentacoordinate met-FixL (Fe(III)) form and the inactive cyanomet-FixL (Fe(III)-CN−) form was monitored by UV-visible absorption spectroscopy, circular dichroism (CD) and fluorescence spectroscopy. The CD and UV-visible absorption spectroscopy revealed two states during unfolding, whereas fluorescence spectroscopy identified a three-state unfolding mechanism. The unfolding mechanism was not altered for the active compared to the inactive state; however, differences in the ΔG(H2O) were observed. According to the CD results, compared to cyanomet-FixL, met-FixL was more stable towards chemical denaturation by urea (7.2 vs 4.8 kJ/mol). By contrast, electronic spectroscopy monitoring of the Soret band showed cyanomet-FixL to be more stable than met-FixL (18.5 versus 36.2 kJ/mol). For the three-state mechanism exhibited by fluorescence, the ΔG(H2O) values for both denaturation steps were higher for the active-state met-FixL than for cyanomet-FixL. The overall stability of met-FixL is higher in comparison to cyanomet-FixL, suggesting a more compact protein in the active form. Nonetheless, hydrogen bonding by bound cyanide in the inactive state promotes the stability of the heme domain. This work supports a model of signal transduction by FixL that is likely shared by other heme-based sensors. Copyright © 2017 Elsevier Inc. All rights reserved.
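ΔG(H2O) values like those above usually come from the two-state linear extrapolation model, ΔG(urea) = ΔG(H2O) − m·[urea], with the unfolded fraction given by a Boltzmann factor. A minimal sketch; ΔG(H2O) and m below are illustrative, not the ReFixL fit parameters.

```python
import numpy as np

dG_h2o = 18.5      # kJ/mol, stability in water (hypothetical)
m = 4.0            # kJ/mol/M, denaturant dependence (hypothetical)
RT = 2.479         # kJ/mol at 298 K

urea = np.linspace(0.0, 8.0, 33)               # denaturant concentration, M
dG = dG_h2o - m * urea                         # linear extrapolation model
frac_unfolded = 1.0 / (1.0 + np.exp(dG / RT))  # two-state population
cm = dG_h2o / m                                # midpoint: half unfolded
print(cm)
```

Fitting this sigmoid to a CD or fluorescence titration yields ΔG(H2O) and m; a three-state mechanism, as seen here by fluorescence, needs one such term per transition.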
Energy Technology Data Exchange (ETDEWEB)
NONE
1977-03-01
This paper describes water decomposition by using a hybrid cycle composed of thermo-chemistry and photo-chemistry. Ferric sulfate and HI are obtained from ferrous sulfate and iodine via a photo-chemical reaction. This is an endothermic reaction of 10.8 kcal. Then, the photo-chemically reacted aqueous solution is electrolysed to separate HI, while Fe³⁺ (ferric ion) is reduced and converted into Fe²⁺ (ferrous ion). Oxygen is generated at this time. Since a mixed potential is made from the iron oxidation and reduction potential and the iodine potential, the electrolytic efficiency is greatly influenced by electrode materials. Ideally, an electrode material that causes only the reduction of Fe³⁺, but not other reactions, is preferable. The HI is decomposed into hydrogen and iodine by electrolysis. Research is continuing to acquire hydrogen from HI thermo-chemically. Endothermic reaction heat of 7 to 8 kcal has been obtained in the photo-chemical reaction, the heat quantity being close to the theoretical value of 10.8 kcal. A result close to the theoretical value may be expected if the electrode material problem is solved. The basic research will be continued for a high possibility of linking the research to a pilot plant in the future. (NEDO)
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Thermal decomposition of pyrite
International Nuclear Information System (INIS)
Music, S.; Ristic, M.; Popovic, S.
1992-01-01
Thermal decomposition of natural pyrite (cubic, FeS2) has been investigated using X-ray diffraction and 57Fe Mössbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, time of heating and starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis.
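A one-level pyramid decomposition in its simplest 1-D form: a coarse signal (pairwise averages) plus a detail residual, with exact reconstruction — the key property such axiomatic schemes formalize. The reduce/expand operators chosen here are the crudest possible, purely for illustration.

```python
import numpy as np

def analyze(x):
    """Split x into a coarse approximation and a detail residual."""
    coarse = 0.5 * (x[0::2] + x[1::2])     # reduce: average adjacent pairs
    detail = x - np.repeat(coarse, 2)      # residual after expand (repeat)
    return coarse, detail

def synthesize(coarse, detail):
    """Perfectly reconstruct x from the pyramid levels."""
    return np.repeat(coarse, 2) + detail

x = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0])
c, d = analyze(x)
x_rec = synthesize(c, d)
print(c, float(np.max(np.abs(x - x_rec))))
```

A full pyramid recursively re-applies `analyze` to the coarse signal; practical schemes replace the pair-average/repeat operators with smoother filters.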
Decomposition of Sodium Tetraphenylborate
International Nuclear Information System (INIS)
Barnes, M.J.
1998-01-01
The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing better understanding into the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the searching direction. The new searching direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given for comparing the convergence domains of the modified algorithm and the classical algorithm.
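The modification described above can be sketched in a few lines: eigen-decompose the Hessian, flip the negative eigenvalues to their absolute values, and solve with the reconstructed matrix. The Hessian and gradient below are illustrative (and a zero eigenvalue would additionally need a small floor, which the abstract does not discuss).

```python
import numpy as np

def modified_newton_dir(H, g):
    """Descent direction from the eigenvalue-modified Hessian."""
    w, V = np.linalg.eigh(H)            # eigen-decompose (symmetric H)
    w = np.abs(w)                       # replace negative eigenvalues
    H_mod = V @ np.diag(w) @ V.T        # reconstructed positive definite H
    return -np.linalg.solve(H_mod, g)   # modified Newton step

H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite Hessian (hypothetical)
g = np.array([1.0, 1.0])                  # gradient (hypothetical)
d = modified_newton_dir(H, g)
print(d, g @ d)                           # g.d < 0, so d is a descent direction
```

For this example the unmodified Newton step would give g·d = +0.5 (an ascent direction), while the modified step yields g·d = −1.5.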
Azimuthal decomposition of optical modes
CSIR Research Space (South Africa)
Dudley, Angela L
2012-07-01
Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition into azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...
Cellular decomposition in vikalloys
International Nuclear Information System (INIS)
Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.
1981-01-01
Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys - bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys
Daverman, Robert J
2007-01-01
Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve
Photochemical decomposition of catecholamines
International Nuclear Information System (INIS)
Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.
1979-01-01
During photochemical decomposition (lambda = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining these conversions, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)
DEFF Research Database (Denmark)
Haberland, Hartmut
2005-01-01
politicians and in the media, especially in the discussion whether some languages undergo ‘domain loss’ vis-à-vis powerful international languages like English. An objection that has been raised here is that domains, as originally conceived, are parameters of language choice and not properties of languages...
Decomposing Nekrasov decomposition
Energy Technology Data Exchange (ETDEWEB)
Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)
2016-02-16
AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
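For intuition, the order-2 special case of this decomposition is just the spectral theorem: a symmetric matrix is a sum of rank-1 symmetric tensors. The Hankel-matrix algorithm of the abstract handles general order d; this sketch only illustrates the object being computed:

```python
import numpy as np

# Order-2 illustration: a symmetric matrix A decomposes into a sum of
# rank-1 symmetric tensors lambda_i * v_i (x) v_i via its eigendecomposition.
# (The Hankel-based algorithm above targets general order-d symmetric tensors.)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, V = np.linalg.eigh(A)
reconstruction = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
assert np.allclose(reconstruction, A)
```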
International Nuclear Information System (INIS)
Macasek, F.; Buriova, E.
2004-01-01
In this presentation the authors present the results of analysis of the decomposition products of 2-[18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%
Hybrid Finite Element and Volume Integral Methods for Scattering Using Parametric Geometry
DEFF Research Database (Denmark)
Volakis, John L.; Sertel, Kubilay; Jørgensen, Erik
2004-01-01
In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accu...... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom. Three orders of magnitude CPU reduction is demonstrated for such applications....
Multiscale infrared and visible image fusion using gradient domain guided image filtering
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of source images are first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means with three different fusion rules at different scales. The three types of fusion rules are for small-scale detail level, large-scale detail level, and base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. After analyzing the experimental comparisons with state-of-the-art fusion methods, the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
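The base/detail ("multiscale") split underlying this kind of fusion can be illustrated with a plain box filter standing in for the guided filter; this is a toy sketch, not the HMSD-GDGF pipeline:

```python
import numpy as np

def two_scale_split(img, radius=2):
    """Minimal base/detail decomposition with a box filter (a stand-in
    for the guided / gradient-domain-guided filtering used above)."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    base = np.zeros_like(img, dtype=float)
    for dy in range(k):                 # accumulate the k x k neighborhood
        for dx in range(k):
            base += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    base /= k * k                       # local mean = smooth base layer
    return base, img - base             # detail layer is the residual

rng = np.random.default_rng(1)
img = rng.random((8, 8))
base, detail = two_scale_split(img)
assert np.allclose(base + detail, img)  # the split is exact by construction
```

A full fusion method would then combine the detail layers of the infrared and visible inputs with saliency-based weight maps, as the abstract describes, and recombine them with a fused base layer.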
The cover picture shows the binding of a PLHSpT derivative, 6q, to the polo-like kinase 1 (Plk1) polo-box domain (PBD), thereby uncovering a new hydrophobic channel (magnified upper right), which is absent in the unliganded protein (magnified lower left). The authors explain how, as a consequence of the additional interaction with the channel, the peptide binds to the Plk1 PBD
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Cacciatori, Sergio L; Marrani, Alessio
2013-01-01
By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Toeplitz quantization and asymptotic expansions : Peter Weyl decomposition
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav; Upmeier, H.
2010-01-01
Roč. 68, č. 3 (2010), s. 427-449 ISSN 0378-620X R&D Projects: GA ČR GA201/09/0473 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded symmetric domain * real symmetric domain * star product * Toeplitz operator * Peter-Weyl decomposition Subject RIV: BA - General Mathematics Impact factor: 0.521, year: 2010 http://link.springer.com/article/10.1007%2Fs00020-010-1808-5
DEFF Research Database (Denmark)
Hjørland, Birger
2017-01-01
The domain-analytic approach to knowledge organization (KO) (and to the broader field of library and information science, LIS) is outlined. The article reviews the discussions and proposals on the definition of domains, and provides an example of a domain-analytic study in the field of art studies....... Varieties of domain analysis as well as criticism and controversies are presented and discussed....
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
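A minimal sketch of the affinity-plus-eigendecomposition front end follows; the paper's non-parametric density construction and the NMF step that yields posterior probabilities are not reproduced here:

```python
import numpy as np

def spectral_embed(points, sigma=1.0, k=2):
    """Toy spectral-clustering front end: build a Gaussian affinity
    matrix and keep the leading k eigenvectors as an embedding."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))   # kernel affinity matrix
    vals, vecs = np.linalg.eigh(W)       # eigenvalue decomposition
    return vecs[:, -k:]                  # top-k eigenvectors (ascending order in eigh)

# Two tight, well-separated clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
emb = spectral_embed(pts)
# Points in the same cluster receive nearly identical embedding rows.
assert np.allclose(emb[0], emb[1], atol=1e-2)
```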
Danburite decomposition by sulfuric acid
International Nuclear Information System (INIS)
Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.
2011-01-01
This article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. A flowsheet for danburite processing by sulfuric acid was elaborated.
Thermal decomposition of lutetium propionate
DEFF Research Database (Denmark)
Grivel, Jean-Claude
2010-01-01
The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...
Texture of lipid bilayer domains
DEFF Research Database (Denmark)
Jensen, Uffe Bernchou; Brewer, Jonathan R.; Midtiby, Henrik Skov
2009-01-01
We investigate the texture of gel (g) domains in binary lipid membranes composed of the phospholipids DPPC and DOPC. Lateral organization of lipid bilayer membranes is a topic of fundamental and biological importance. Whereas questions related to size and composition of fluid membrane domain...... are well studied, the possibility of texture in gel domains has so far not been examined. When using polarized light for two-photon excitation of the fluorescent lipid probe Laurdan, the emission intensity is highly sensitive to the angle between the polarization and the tilt orientation of lipid acyl...... chains. By imaging the intensity variations as a function of the polarization angle, we map the lateral variations of the lipid tilt within domains. Results reveal that gel domains are composed of subdomains with different lipid tilt directions. We have applied a Fourier decomposition method...
Martinez-Serra, Jordi; Gutiérrez, Antonio; Marcús, Toni F; Soverini, Simona; Amat, Juan Carlos; Navarro-Palou, María; Ros, Teresa; Bex, Teresa; Ballester, Carmen; Bauça, Josep Miquel; SanFelix, Sara; Novo, Andrés; Vidal, Carmen; Santos, Carmen; Besalduch, Joan
2012-03-01
Within the laboratory protocols used for the study of BCR-ABL resistance mutations in chronic myeloid leukemia patients treated with Imatinib, direct sequencing remains the reference method. Since the incidence of patients with a mutation-related loss of response is not very high, it is very useful in the routine laboratory to perform a fast pre-screening method. With this in mind, we have designed a new technique based on a single real-time FRET-based PCR followed by a study of melting peaks. This new tool, developed on a LightCycler 2.0, combines four different fluorescence channels for the simultaneous detection, in a single closed tube, of critical mutations within the ABL kinase domain. Assay evaluation performed on 33 samples, previously genotyped by sequencing, resulted in full concordance of results. This new methodology detects in a few steps the presence of critical mutations associated with Imatinib resistance. Copyright © 2012 Elsevier Inc. All rights reserved.
Kahn, G.; Plotkin, G.D.
1993-01-01
This paper introduces the theory of a particular kind of computation domains called concrete domains. The purpose of this theory is to find a satisfactory framework for the notions of coroutine computation and sequentiality of evaluation.
Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2018-03-01
We report results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)% respectively in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.
Erbium hydride decomposition kinetics.
Energy Technology Data Exchange (ETDEWEB)
Ferrizz, Robert Matthew
2006-11-01
Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
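For first-order desorption, Redhead's peak-maximum method reduces to a closed-form estimate of the activation energy from the TDS peak temperature. The peak temperature, heating rate, and prefactor below are hypothetical, chosen only to land near the energy scale quoted above, and are not taken from the report:

```python
import math

def redhead_activation_energy(T_p, beta, nu=1e13):
    """First-order Redhead estimate of the desorption activation energy
    (J/mol) from peak temperature T_p (K), heating rate beta (K/s), and
    an assumed attempt frequency nu (1/s)."""
    R = 8.314  # gas constant, J/(mol K)
    return R * T_p * (math.log(nu * T_p / beta) - 3.64)

# Hypothetical inputs: a peak near 825 K with a 1 K/s ramp gives an
# energy on the ~54 kcal/mol scale reported for erbium hydride.
E = redhead_activation_energy(T_p=825.0, beta=1.0)
print(E / 4184)  # convert J/mol to kcal/mol
```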
International Nuclear Information System (INIS)
Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.
2011-01-01
We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.
Bjørner, Dines
Before software can be designed we must know its requirements. Before requirements can be expressed we must understand the domain. So it follows, from our dogma, that we must first establish precise descriptions of domains; then, from such descriptions, “derive” at least domain and interface requirements; and from those and machine requirements design the software, or, more generally, the computing systems.
A hybrid algorithm for parallel molecular dynamics simulations
Mangiardi, Chris M.; Meyer, R.
2017-10-01
This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
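As one concrete member of this family of decompositions, non-negative matrix factorization with the classic Lee-Seung multiplicative updates can be sketched as follows; this is an illustration of the technique, not the thesis implementation:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Non-negative matrix factorization V ~ W @ H via Lee-Seung
    multiplicative updates; W and H stay elementwise non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1          # strictly positive initialization
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update H, then W
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# A small non-negative matrix of rank 2 (row 3 = row 1 + row 2).
V = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 3.0]])
W, H = nmf(V, k=2)
assert np.linalg.norm(V - W @ H) < 0.1
```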
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.; Langer, U.
2010-01-01
of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series
Multigrid and multilevel domain decomposition for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Chan, T.; Smith, B.
1994-12-31
Multigrid has proven itself to be a very versatile method for the iterative solution of linear and nonlinear systems of equations arising from the discretization of PDEs. In some applications, however, no natural multilevel structure of grids is available, and these must be generated as part of the solution procedure. In this presentation the authors consider the problem of generating a multigrid algorithm when only a fine, unstructured grid is given. Their techniques generate a sequence of coarser grids by first forming an approximate maximal independent set of the vertices and then applying a Cavendish-type algorithm to form the coarser triangulation. Numerical tests indicate that convergence using this approach can be as fast as standard multigrid on a structured mesh, at least in two dimensions.
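The first coarsening step, forming an approximate maximal independent set of the fine-grid vertices, can be sketched with a greedy pass over the vertices; the authors' actual selection heuristic may differ:

```python
def maximal_independent_set(adjacency):
    """Greedy maximal independent set: visit vertices in order, keep a
    vertex if none of its neighbors has been kept, and exclude its
    neighbors. The kept vertices become the coarse-grid vertices."""
    selected, excluded = set(), set()
    for v in sorted(adjacency):
        if v not in excluded:
            selected.add(v)
            excluded.update(adjacency[v])  # neighbors cannot be selected
    return selected

# Path graph 0-1-2-3-4: the greedy pass keeps the even vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
coarse = maximal_independent_set(adj)
assert coarse == {0, 2, 4}
```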
A hybrid perturbation-Galerkin technique for partial differential equations
Geer, James F.; Anderson, Carl M.
1990-01-01
A two-step hybrid perturbation-Galerkin technique for improving the usefulness of perturbation solutions to partial differential equations which contain a parameter is presented and discussed. In the first step of the method, the leading terms in the asymptotic expansion(s) of the solution about one or more values of the perturbation parameter are obtained using standard perturbation methods. In the second step, the perturbation functions obtained in the first step are used as trial functions in a Bubnov-Galerkin approximation. This semi-analytical, semi-numerical hybrid technique appears to overcome some of the drawbacks of the perturbation and Galerkin methods when they are applied by themselves, while combining some of the good features of each. The technique is illustrated first by a simple example. It is then applied to the problem of determining the flow of a slightly compressible fluid past a circular cylinder and to the problem of determining the shape of a free surface due to a sink above the surface. Solutions obtained by the hybrid method are compared with other approximate solutions, and its possible application to certain problems associated with domain decomposition is discussed.
Hybrid parallel strategy for the simulation of fast transient accidental situations at reactor scale
International Nuclear Information System (INIS)
Faucher, V.; Galon, P.; Beccantini, A.; Crouzet, F.; Debaud, F.; Gautier, T.
2013-01-01
This contribution is dedicated to the latest methodological developments implemented in the fast transient dynamics software EUROPLEXUS (EPX) to simulate the mechanical response of fully coupled fluid-structure systems to accidental situations to be considered at reactor scale, among which the Loss of Coolant Accident, the Core Disruptive Accident and the Hydrogen Explosion. Time integration is explicit and the search for reference solutions within the safety framework prevents any simplification and approximations in the coupled algorithm: for instance, all kinematic constraints are dealt with using Lagrange Multipliers, yielding a complex flow chart when non-permanent constraints such as unilateral contact or immersed fluid-structure boundaries are considered. The parallel acceleration of the solution process is then achieved through a hybrid approach, based on a weighted domain decomposition for distributed memory computing and the use of the KAAPI library for self-balanced shared memory processing inside sub-domains. (authors)
Danburite decomposition by hydrochloric acid
International Nuclear Information System (INIS)
Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.
2011-01-01
This article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by hydrochloric acid. The interaction of boron-containing ores of the Ak-Arkhar deposit of Tajikistan with mineral acids, including hydrochloric acid, was studied. The optimal conditions for the extraction of valuable components from danburite were determined. The chemical composition of the danburite of the Ak-Arkhar deposit was determined as well. The kinetics of the decomposition of calcined danburite by hydrochloric acid was studied. The apparent activation energy of the process of danburite decomposition by hydrochloric acid was calculated.
AUTONOMOUS GAUSSIAN DECOMPOSITION
Energy Technology Data Exchange (ETDEWEB)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
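AGD itself is not reproduced here, but the derivative-spectroscopy idea it builds on, guessing component locations from curvature minima of the spectrum, can be sketched on synthetic data (all parameters below are hypothetical):

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def guess_components(x, y):
    """Derivative-spectroscopy-style guess: candidate Gaussian centers
    sit at local minima of the second derivative (points of maximum
    negative curvature), as in the spirit of AGD's initialization."""
    d2 = np.gradient(np.gradient(y, x), x)
    idx = [i for i in range(1, len(x) - 1)
           if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < 0]
    return x[np.array(idx, dtype=int)]

# Synthetic two-component spectrum.
x = np.linspace(-10, 10, 401)
y = gaussian(x, 1.0, -3.0, 1.0) + gaussian(x, 0.8, 4.0, 1.5)
centers = guess_components(x, y)
assert len(centers) == 2  # one guess per underlying component
```

In AGD these guesses seed a least-squares fit of amplitudes, widths, and centers; the machine-learning layer tunes the detection thresholds.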
NRSA enzyme decomposition model data
U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...
Real interest parity decomposition
Directory of Open Access Journals (Sweden)
Alex Luiz Ferreira
2009-09-01
Full Text Available The aim of this paper is to investigate the general causes of real interest rate differentials (rids) for a sample of emerging markets for the period of January 1996 to August 2007. To this end, two methods are applied. The first consists of breaking the variance of rids down into relative purchasing power parity and uncovered interest rate parity, and shows that inflation differentials are the main source of rids variation; the second method breaks down the rids and nominal interest rate differentials (nids) into nominal and real shocks. Bivariate autoregressive models are estimated under particular identification conditions, having been adequately treated for the identified structural breaks. Impulse response functions and error variance decomposition point to real shocks as the likely cause of rids.
Techno-economic comparison of series hybrid, plug-in hybrid, fuel cell and regular cars
van Vliet, O.P.R.; Kruithof, T.; Turkenburg, W.C.; Faaij, A.P.C.
2010-01-01
We examine the competitiveness of series hybrid compared to fuel cell, parallel hybrid, and regular cars. We use public domain data to determine efficiency, fuel consumption, total costs of ownership and greenhouse gas emissions resulting from drivetrain choices. The series hybrid drivetrain can be
On the hadron mass decomposition
Lorcé, Cédric
2018-02-01
We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We also show that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the attractive gluon force.
Abstract decomposition theorem and applications
Grossberg, R; Grossberg, Rami; Lessmann, Olivier
2005-01-01
Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types, and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987), and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).
A multiconfigurational hybrid density-functional theory
DEFF Research Database (Denmark)
Sharkas, Kamal; Savin, Andreas; Jensen, Hans Jørgen Aagaard
2012-01-01
We propose a multiconfigurational hybrid density-functional theory which rigorously combines a multiconfiguration self-consistent-field calculation with a density-functional approximation based on a linear decomposition of the electron-electron interaction. This gives a straightforward extension ...
Empirical projection-based basis-component decomposition method
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
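In a linearized, noiseless setting, the projection-domain basis decomposition for a three-bin counting detector reduces to solving a small linear system per ray. The sketch below uses made-up effective attenuation coefficients purely for illustration; the empirical method described above is nonlinear and calibrated from spectral knowledge of the acquisition system.

```python
import numpy as np

# Linearized three-bin decomposition: measured log-attenuations
# m_b = sum_i mu[b, i] * t_i, solved per ray for basis path lengths t_i.
# The mu values are hypothetical placeholders, not real coefficients.
mu = np.array([[0.50, 0.30, 2.0],   # bin 1: water, bone, gadolinium
               [0.35, 0.22, 1.2],   # bin 2
               [0.25, 0.18, 2.8]])  # bin 3 (above the Gd K-edge)

t_true = np.array([10.0, 2.0, 0.05])   # basis path lengths (cm)
m = mu @ t_true                        # noiseless line integrals

t_est = np.linalg.solve(mu, m)         # one small solve per detector pixel
print(t_est)                           # recovers approximately [10., 2., 0.05]
```

With noisy counts and realistic polychromatic spectra this becomes the maximum-likelihood or empirically parameterized problem discussed in the abstract; the linear solve only conveys the structure of the per-ray decomposition.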
Thermal decomposition of biphenyl (1963)
Energy Technology Data Exchange (ETDEWEB)
Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1962-06-15
The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyls) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen, 73 ± 2 kcal/mol; benzene, 76 ± 2 kcal/mol; meta-triphenyl, 53 ± 2 kcal/mol; biphenyl decomposition, 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and from an assay of the aromatic C-H bonds by infrared spectrophotometry. As a result, the existence in the tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)
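The reported activation energies translate directly into rate accelerations over the studied temperature range via the Arrhenius law, k(T) ∝ exp(-Ea/RT). The sketch below computes the rate-constant ratio between 400 and 460 deg. C for the products listed above; the ratios are illustrative arithmetic, not values from the thesis.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_ratio(ea_kcal, t1_c, t2_c):
    """Arrhenius ratio k(T2)/k(T1) for activation energy ea_kcal (kcal/mol)."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-ea_kcal / R * (1.0 / t2 - 1.0 / t1))

# Activation energies reported in the abstract (kcal/mol)
for product, ea in [("hydrogen", 73), ("benzene", 76), ("meta-triphenyl", 53)]:
    print(f"{product}: k(460 C)/k(400 C) = {rate_ratio(ea, 400, 460):.1f}")
```

The roughly 60 deg. C span thus changes the product formation rates by one to two orders of magnitude, which is why the temperature study resolves the activation energies well.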
Lie bialgebras with triangular decomposition
International Nuclear Information System (INIS)
Andruskiewitsch, N.; Levstein, F.
1992-06-01
Lie bialgebras originated in a triangular decomposition of the underlying Lie algebra are discussed. The explicit formulas for the quantization of the Heisenberg Lie algebra and some motion Lie algebras are given, as well as the algebra of rational functions on the quantum Heisenberg group and the formula for the universal R-matrix. (author). 17 refs
Decomposition of metal nitrate solutions
International Nuclear Information System (INIS)
Haas, P.A.; Stines, W.B.
1982-01-01
Oxides in powder form are obtained from aqueous solutions of one or more heavy metal nitrates (e.g. U, Pu, Th, Ce) by thermal decomposition at 300 to 800 deg C in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal. (author)
Probability inequalities for decomposition integrals
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2017-01-01
Vol. 315, No. 1 (2017), p. 240-248. ISSN 0377-0427. Institutional support: RVO:67985556. Keywords: decomposition integral; superdecomposition integral; probability inequalities. Subject RIV: BA - General Mathematics. OECD field: Statistics and probability. Impact factor: 1.357 (2016). http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf
Thermal decomposition of ammonium hexachloroosmate
DEFF Research Database (Denmark)
Asanova, T I; Kantor, Innokenty; Asanov, I. P.
2016-01-01
Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reduction atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly to meta...
High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code
International Nuclear Information System (INIS)
Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.
2011-01-01
APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new-generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with a GPU-accelerated algorithm. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence, and we detail how we optimized the choice of these parameters. The MINARET parallelization is based on calculating angular directions using explicit message passing. Fine-grained parallelization is also available for each angular direction using shared-memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10³ cores and on hybrid architectures using some tens of GPUs. This work contributes to HPC development in reactor physics at the CEA Nuclear Energy Division. (author)
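The structure of a Schwarz domain-decomposition iteration can be sketched in one dimension. The toy below solves a model Poisson problem on two overlapping subdomains and exchanges Dirichlet data at the artificial interfaces; the APOLLO3/MINOS solver uses Robin transmission conditions and a transport operator instead, so this is only an illustration of the iteration pattern, not of the actual code.

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0.
# Exact solution: u(x) = x(1-x)/2. Two overlapping subdomains;
# Dirichlet values are exchanged at the artificial interfaces.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
a, b = 40, 60          # interface indices; x[40..60] is the overlap

def solve_sub(i0, i1, left, right):
    """Solve -u'' = 1 on x[i0..i1] with Dirichlet boundary values."""
    m = i1 - i0 - 1                     # number of interior unknowns
    A = (np.diag(np.full(m, 2.0))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = np.ones(m)
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

for _ in range(50):
    u[1:b] = solve_sub(0, b, 0.0, u[b])              # left sweep, reads u[b]
    u[a + 1:n - 1] = solve_sub(a, n - 1, u[a], 0.0)  # right sweep, reads u[a]

exact = x * (1 - x) / 2
print("max nodal error:", np.abs(u - exact).max())
```

The geometric convergence rate of this iteration is set by the overlap width, which is exactly why the Robin parameters (generalizing the Dirichlet exchange used here) are worth optimizing in the abstract's setting.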
Hybrid spectral CT reconstruction.
Directory of Open Access Journals (Sweden)
Darin P Clark
Full Text Available Current photon counting x-ray detector (PCD technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID. In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM. Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with
DEFF Research Database (Denmark)
Schraefel, M. C.; Rouncefield, Mark; Kellogg, Wendy
2012-01-01
In CSCW, how much do we need to know about another domain/culture before we observe, intersect and intervene with designs. What optimally would that other culture need to know about us? Is this a “how long is a piece of string” question, or an inquiry where we can consider a variety of contexts a...
Investigating hydrogel dosimeter decomposition by chemical methods
International Nuclear Information System (INIS)
Jordan, Kevin
2015-01-01
The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products
DEFF Research Database (Denmark)
Hjorth, Theis Solberg; Torbensen, Rune
2012-01-01
In the digital age of home automation and with the proliferation of mobile Internet access, the intelligent home and its devices should be accessible at any time from anywhere. There are many challenges, such as security, privacy, ease of configuration, incompatible legacy devices, a wealth of wireless standards, limited resources of embedded systems, etc. Taking these challenges into account, we present a Trusted Domain home automation platform, which dynamically and securely connects heterogeneous networks of Short-Range Wireless devices via simple non-expert user interactions, and allows remote access via IP-based devices such as smartphones. The Trusted Domain platform fits existing legacy technologies by managing their interoperability and access controls, and it seeks to avoid the security issues of relying on third-party servers outside the home. It is a distributed system...
Inference in hybrid Bayesian networks
International Nuclear Information System (INIS)
Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio
2009-01-01
Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
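The mixed discrete/continuous inference that makes hybrid domains hard can be shown in miniature with a conditional linear Gaussian fragment: a discrete fault state with a Gaussian sensor model, updated by Bayes' rule on a continuous observation. The prior, means, and standard deviations below are invented for illustration and are not from the paper's reliability example.

```python
import math

# Tiny hybrid fragment: discrete node D in {ok, faulty} with a prior,
# continuous sensor reading X | D ~ Normal(mu_d, sigma_d).
prior = {"ok": 0.95, "faulty": 0.05}                 # assumed prior
params = {"ok": (0.0, 1.0), "faulty": (3.0, 2.0)}    # assumed (mean, std)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior(x):
    """Posterior over D after observing the continuous reading x."""
    w = {d: prior[d] * normal_pdf(x, *params[d]) for d in prior}
    z = sum(w.values())
    return {d: w[d] / z for d in w}

print(posterior(2.5))
```

With one discrete parent the update is a two-line Bayes' rule; the difficulties surveyed in the paper arise when continuous and discrete variables interleave arbitrarily, so that posteriors are mixtures that no longer stay in a closed family.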
Dictionary-Based Tensor Canonical Polyadic Decomposition
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Decomposition of diesel oil by various microorganisms
Energy Technology Data Exchange (ETDEWEB)
Suess, A; Netzsch-Lehner, A
1969-01-01
Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of ¹⁴C-n-hexadecane-labelled diesel oil by specific microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil, an increase of the decomposition rate was found only at the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
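The variance decomposition described above can be illustrated on a toy "simulator" with two independent inputs and an interaction term, using the standard pick-freeze Monte Carlo estimator of first-order Sobol indices (freeze one input, resample the other, and correlate). The toy function and sample size are assumptions; the paper works with Poisson-driven reaction channels, not this closed-form model.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x1, x2):
    # Toy model: two additive channels plus an interaction term.
    # Exact shares of variance: S1 = S2 = 4/9, interaction = 1/9.
    return x1 + x2 + 0.5 * x1 * x2

N = 200_000
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
x1p, x2p = rng.standard_normal(N), rng.standard_normal(N)

y = f(x1, x2)
var = y.var()

# Pick-freeze estimates: Cov(f(X1,X2), f(X1,X2')) / Var(f) estimates S1.
s1 = np.cov(y, f(x1, x2p))[0, 1] / var
s2 = np.cov(y, f(x1p, x2))[0, 1] / var
s_int = 1.0 - s1 - s2   # remaining variance share: the interaction channel

print(f"S1={s1:.3f}  S2={s2:.3f}  interaction={s_int:.3f}")
```

The Sobol-Hoeffding decomposition guarantees these shares sum to one; in the paper's setting the role of X1 and X2 is played by the independent standardized Poisson processes driving each reaction channel.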
Excimer laser decomposition of silicone
International Nuclear Information System (INIS)
Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.
2003-01-01
Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm², the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition results in two parts: one which is organic and volatile, and another which is inorganic and remains, forming an ever-thickening screen to light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect.
rCUR: an R package for CUR matrix decomposition
Directory of Open Access Journals (Sweden)
Bodor András
2012-05-01
Full Text Available Abstract Background Many methods for dimensionality reduction of large data sets such as those generated in microarray studies boil down to the Singular Value Decomposition (SVD. Although singular vectors associated with the largest singular values have strong optimality properties and can often be quite useful as a tool to summarize the data, they are linear combinations of up to all of the data points, and thus it is typically quite hard to interpret those vectors in terms of the application domain from which the data are drawn. Recently, an alternative dimensionality reduction paradigm, CUR matrix decompositions, has been proposed to address this problem and has been applied to genetic and internet data. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Since they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn. Results We present an implementation to perform CUR matrix decompositions, in the form of a freely available, open source R-package called rCUR. This package will help users to perform CUR-based analysis on large-scale data, such as those obtained from different high-throughput technologies, in an interactive and exploratory manner. We show two examples that illustrate how CUR-based techniques make it possible to reduce significantly the number of probes, while at the same time maintaining major trends in data and keeping the same classification accuracy. Conclusions The package rCUR provides functions for the users to perform CUR-based matrix decompositions in the R environment. In gene expression studies, it gives an additional way of analysis of differential expression and discriminant gene selection based on the use of statistical leverage scores. These scores, which have been used historically in diagnostic regression
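The core of a CUR decomposition can be sketched in a few lines: compute leverage scores from a truncated SVD, select actual columns and rows by those scores, and fit the small middle matrix by least squares. The rCUR package samples columns randomly with probabilities proportional to leverage; the sketch below simply takes the top-scoring columns and rows, a deterministic simplification, and the test matrix is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def cur(A, k, c, r):
    """CUR sketch: pick c columns and r rows by top-k leverage scores."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    col_lev = (Vt[:k] ** 2).sum(axis=0)        # column leverage scores
    row_lev = (U[:, :k] ** 2).sum(axis=1)      # row leverage scores
    cols = np.argsort(col_lev)[-c:]
    rows = np.argsort(row_lev)[-r:]
    C, R = A[:, cols], A[rows, :]
    Uc = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # least-squares core
    return C, Uc, R, cols, rows

# Rank-3 matrix plus small noise, standing in for an expression matrix
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
A += 0.01 * rng.standard_normal(A.shape)

C, Uc, R, cols, rows = cur(A, k=3, c=6, r=6)
err = np.linalg.norm(A - C @ Uc @ R) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.3f}")
```

Because C and R are actual columns and rows of A, the selected indices (here, `cols` as probe indices) stay interpretable in the application domain, which is the point the abstract makes about CUR versus SVD.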
Physics-based hybrid method for multiscale transport in porous media
Yousefzadeh, Mehrdad; Battiato, Ilenia
2017-09-01
Despite advancements in the development of multiscale models for flow and reactive transport in porous media, the accurate, efficient and physics-based coupling of multiple scales in hybrid models remains a major theoretical and computational challenge. Improving the predictivity of macroscale predictions by means of multiscale algorithms relative to classical at-scale models is the primary motivation for the development of multiscale simulators. Yet, very few are the quantitative studies that explicitly address the predictive capability of multiscale coupling algorithms as it is still generally not possible to have a priori estimates of the errors that are present when complex flow processes are modeled. We develop a nonintrusive pore-/continuum-scale hybrid model whose coupling error is bounded by the upscaling error, i.e. we build a predictive tightly coupled multiscale scheme. This is accomplished by slightly enlarging the subdomain where continuum-scale equations are locally invalid and analytically defining physics-based coupling conditions at the interfaces separating the two computational sub-domains, while enforcing state variable and flux continuity. The proposed multiscale coupling approach retains the advantages of domain decomposition approaches, including the use of existing solvers for each subdomain, while it gains flexibility in the choice of the numerical discretization method and maintains the coupling errors bounded by the upscaling error. We implement the coupling in finite volumes and test the proposed method by modeling flow and transport through a reactive channel and past an array of heterogeneously reactive cylinders.
Dynamics and control of hybrid mechanical systems
Leonov, G.A.; Nijmeijer, H.; Pogromski, A.Y.; Fradkov, A.L.
2010-01-01
The papers in this edited volume aim to provide a better understanding of the dynamics and control of a large class of hybrid dynamical systems that are described by different models in different state space domains. They not only cover important aspects and tools for hybrid systems analysis and
Thermal decomposition of biphenyl; Decomposition thermique du biphenyle
Energy Technology Data Exchange (ETDEWEB)
Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1966-03-01
Liquid and vapour phase pyrolysis of very pure biphenyl obtained by methods described in the text was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)
International Nuclear Information System (INIS)
Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo
2017-01-01
Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility, so the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.
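The decompose-forecast-aggregate workflow described above can be sketched compactly. The snippet below is a simplified stand-in, not the paper's SHD-CSO-ELM model: a moving-average band split replaces EMD/wavelet packet decomposition, and an ordinary least-squares AR fit replaces the CSO-trained extreme learning machine; each sub-series is forecast separately and the sub-forecasts are summed. All signals and parameters are invented for illustration.

```python
import numpy as np

def split_bands(x, w=8):
    """Stand-in for EMD/WPD: split a series into a smooth trend
    (moving average) and a fast residual component."""
    kernel = np.ones(w) / w
    trend = np.convolve(x, kernel, mode="same")
    return trend, x - trend

def ar_forecast(x, order=4):
    """One-step-ahead forecast from a least-squares AR(order) fit."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-order:] @ coef

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)

trend, resid = split_bands(series)
# forecast each sub-series independently, then aggregate
prediction = ar_forecast(trend) + ar_forecast(resid)
```

By construction the two components sum back to the original series, so the aggregation step introduces no decomposition loss; only the per-component forecast errors remain.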
Renormalization-group theory of spinodal decomposition
International Nuclear Information System (INIS)
Mazenko, G.F.; Valls, O.T.; Zhang, F.C.
1985-01-01
Renormalization-group (RG) methods developed previously for the study of the growth of order in unstable systems are extended to treat the spinodal decomposition of the two-dimensional spin-exchange kinetic Ising model. The conservation of the order parameter and fixed-length sum rule are properly preserved in the theory. Various correlation functions in both coordinate and momentum space are calculated as functions of time. The scaling function for the structure factor is extracted. We compare our results with direct Monte Carlo (MC) simulations and find them in good agreement. The time rescaling parameter entering the RG analysis is temperature dependent, as was determined in previous work through an RG analysis of MC simulations. The results exhibit a long-time logarithmic growth law for the typical domain size, both analytically and numerically. In the time region where MC simulations have previously been performed, the logarithmic growth law can be fitted to a power law with an effective exponent. This exponent is found to be in excellent agreement with the result of MC simulations. The logarithmic growth law agrees with a physical model of interfacial motion which involves an interplay between the local curvature and an activated jump across the interface.
An operational modal analysis method in frequency and spatial domain
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
This paper presents a frequency and spatial domain decomposition (FSDD) method for operational modal analysis (OMA), which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectral density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
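The core of CMIF-style methods, taking the singular value decomposition of the output spectral matrix at each frequency line, can be illustrated with synthetic data. This is a hedged sketch of the general frequency-domain decomposition idea, not the enhanced-PSD curve-fitting step of FSDD itself; the two-sensor signal and all parameters below are invented for the example.

```python
import numpy as np

def cmif(Y, nfft=256):
    """First singular value of the output cross-spectral matrix at each
    frequency line (Welch-style segment averaging, Hann window)."""
    n_ch, n = Y.shape
    segs = n // nfft
    G = np.zeros((nfft // 2, n_ch, n_ch), dtype=complex)
    for s in range(segs):
        F = np.fft.rfft(Y[:, s * nfft:(s + 1) * nfft] * np.hanning(nfft), axis=1)
        F = F[:, :nfft // 2]
        # accumulate the cross-spectral matrix G(f) = E[F(f) F(f)^H]
        G += np.einsum("if,jf->fij", F, F.conj()) / segs
    return np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])

# two sensors observing one damped mode at 0.05 cycles/sample plus noise
rng = np.random.default_rng(3)
n = 4096
t = np.arange(n)
mode = np.sin(2 * np.pi * 0.05 * t)
Y = np.vstack([mode + 0.2 * rng.standard_normal(n),
               0.7 * mode + 0.2 * rng.standard_normal(n)])
s1 = cmif(Y)
peak_bin = int(np.argmax(s1))   # frequency line of the dominant mode
```

The peak of the first singular value curve lands at the modal frequency (0.05 x 256 ≈ bin 13 here), which is exactly the mode-indicator behavior the FSDD method refines.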
Dolomite decomposition under CO2
International Nuclear Information System (INIS)
Guerfa, F.; Bensouici, F.; Barama, S.E.; Harabi, A.; Achour, S.
2004-01-01
Full text. Dolomite (MgCa(CO3)2) is one of the most abundant mineral species on the surface of the planet; it occurs in sedimentary rocks. MgO, CaO and doloma (a phase mixture of MgO and CaO obtained from the mineral dolomite) based materials are attractive steel-making refractories because of their potential cost effectiveness and worldwide abundance; more recently, MgO is also used as a protective layer in plasma screen manufacture. The crystal structure of dolomite was determined to be a rhombohedral carbonate with alternating layers of Mg2+ and Ca2+ ions. It dissociates depending on the temperature according to the following reactions: MgCa(CO3)2 → MgO + CaO + 2CO2 ..... MgCa(CO3)2 → MgO + CaCO3 + CO2 ..... This latter reaction may be considered as a first step for MgO production. Differential thermal analysis (DTA) was used to monitor dolomite decomposition, and X-ray diffraction (XRD) was used to elucidate the thermal decomposition of dolomite according to the reaction; the required samples were heated to specific temperatures and holding times. The average particle size of the dolomite powders used was 0.3 mm, and the heating temperature was 700 degrees Celsius, using various holding times (90 and 120 minutes). Under CO2, dolomite decomposed directly to CaCO3 accompanied by the formation of MgO; no evidence was found for the formation of either CaO or MgCO3. Under air, simultaneous formation of CaCO3 and CaO accompanied dolomite decomposition
Spectral Tensor-Train Decomposition
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT...... adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online (http://pypi.python.org/pypi/TensorToolbox/)....
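The discrete core of the scheme, the TT decomposition itself, can be computed by the standard TT-SVD sweep of sequential reshapings and truncated SVDs. The sketch below is a minimal NumPy illustration of TT-SVD for a full tensor under a simple relative singular-value cutoff; it is not the spectral functional extension proposed in the paper, and the demo tensor is invented.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD: sequentially reshape and truncate via SVD to obtain
    tensor-train cores G_k (shape r_{k-1} x n_k x r_k) of a full tensor T."""
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(S > eps * S[0])))   # rank truncation
        cores.append(U[:, :keep].reshape(r, shape[k], keep))
        r = keep
        M = (S[:keep, None] * Vt[:keep]).reshape(r * shape[k + 1], -1)
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Demo: a 6x7x8 tensor with exact TT ranks (2, 2)
rng = np.random.default_rng(0)
a1, b1, c1 = rng.standard_normal(6), rng.standard_normal(7), rng.standard_normal(8)
a2, b2, c2 = rng.standard_normal(6), rng.standard_normal(7), rng.standard_normal(8)
T = np.einsum("i,j,k->ijk", a1, b1, c1) + np.einsum("i,j,k->ijk", a2, b2, c2)
cores = tt_svd(T)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
```

For a tensor with exact low TT rank the sweep recovers it to machine precision, with storage growing linearly in the number of dimensions rather than exponentially.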
Decomposition of Multi-player Games
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
Constructive quantum Shannon decomposition from Cartan involutions
International Nuclear Information System (INIS)
Drury, Byron; Love, Peter
2008-01-01
The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions
Constructive quantum Shannon decomposition from Cartan involutions
Energy Technology Data Exchange (ETDEWEB)
Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu
2008-10-03
The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.
Mathematical modelling of the decomposition of explosives
International Nuclear Information System (INIS)
Smirnov, Lev P
2010-01-01
Studies on mathematical modelling of the molecular and supramolecular structures of explosives and the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of an empirical approach.
International Nuclear Information System (INIS)
Moir, R.W.
1980-01-01
The rationale for hybrid fusion-fission reactors is the production of fissile fuel for fission reactors. A new class of reactor, the fission-suppressed hybrid, promises unusually good safety features as well as the ability to support 25 light-water reactors of the same nuclear power rating, or even more high-conversion-ratio reactors such as the heavy-water type. One 4000-MW nuclear hybrid can produce 7200 kg of 233U per year. To obtain good economics, injector efficiency times plasma gain (eta_i Q) should be greater than 2, the wall load should be greater than 1 MW/m^2, and the hybrid should cost less than 6 times the cost of a light-water reactor. Introduction rates for the fission-suppressed hybrid are usually rapid
Decomposition in pelagic marine ecosytems
International Nuclear Information System (INIS)
Lucas, M.I.
1986-01-01
During the decomposition of plant detritus, complex microbial successions develop which are dominated in the early stages by a number of distinct bacterial morphotypes. The microheterotrophic community rapidly becomes heterogeneous and may include cyanobacteria, fungi, yeasts and bactivorous protozoans. Microheterotrophs in the marine environment may have a biomass comparable to that of all other heterotrophs, and their significance as a resource to higher trophic orders, and in the regeneration of nutrients, particularly nitrogen, that support 'regenerated' primary production, has aroused both attention and controversy. Numerous methods have been employed to measure heterotrophic bacterial production and activity. The most widely used involve estimates of 14C-glucose uptake; the frequency of dividing cells; the incorporation of 3H-thymidine; and exponential population growth in predator-reduced filtrates. Recent attempts to model decomposition processes and C and N fluxes in pelagic marine ecosystems are described. This review examines the most sensitive components and predictions of the models, with particular reference to estimates of bacterial production, net growth yield and predictions of N cycling determined by 15N methodology. Direct estimates of nitrogen (and phosphorus) flux through phytoplanktonic and bacterioplanktonic communities using 15N (and 32P) tracer methods are likely to provide more realistic measures of nitrogen flow through planktonic communities
Infrared multiphoton absorption and decomposition
International Nuclear Information System (INIS)
Evans, D.K.; McAlpine, R.D.
1984-01-01
The discovery of infrared laser induced multiphoton absorption (IRMPA) and decomposition (IRMPD) by Isenor and Richardson in 1971 generated a great deal of interest in these phenomena. This interest was increased with the discovery by Ambartzumian, Letokhov, Ryabov and Chekalin that isotopically selective IRMPD was possible. One of the first speculations about these phenomena was that it might be possible to excite a particular mode of a molecule with the intense infrared laser beam and cause decomposition or chemical reaction by channels which do not predominate thermally, thus providing new synthetic routes for complex chemicals. The potential applications to isotope separation and novel chemistry stimulated efforts to understand the underlying physics and chemistry of these processes. At ICOMP I in 1977 and at ICOMP II in 1980, several authors reviewed the current understanding of IRMPA and IRMPD as well as the particular aspect of isotope separation. There continues to be a great deal of effort devoted to understanding IRMPA and IRMPD, and we will briefly review some aspects of these efforts with particular emphasis on progress since ICOMP II. 31 references
Decomposition of Diethylstilboestrol in Soil
DEFF Research Database (Denmark)
Gregers-Hansen, Birte
1964-01-01
The rate of decomposition of DES-monoethyl-1-C14 in soil was followed by measurement of C14O2 released. From 1.6 to 16% of the added C14 was recovered as C14O2 during 3 months. After six months as much as 12 to 28 per cent was released as C14O2. Determination of C14 in the soil samples after the e...... not inhibit the CO2 production from the soil. Experiments with γ-sterilized soil indicated that enzymes present in the soil are able to attack DES.
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Directory of Open Access Journals (Sweden)
Duo Hao
2017-11-01
Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
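The relative-entropy step, scoring how far each decomposed mode's distribution lies from a reference, reduces to a Kullback-Leibler divergence between histogram estimates. Below is a hedged NumPy sketch with invented signals standing in for VMD modes; it is not the paper's VMD pipeline, which would supply the actual narrow-banded modes.

```python
import numpy as np

def relative_entropy(p, q, bins=32, eps=1e-12):
    """Kullback-Leibler divergence between histogram distributions of two
    motion modes (stand-in for the paper's RE comparison step)."""
    lo, hi = min(p.min(), q.min()), max(p.max(), q.max())
    P, _ = np.histogram(p, bins=bins, range=(lo, hi))
    Q, _ = np.histogram(q, bins=bins, range=(lo, hi))
    P = P / P.sum() + eps   # eps-smoothing avoids log(0) on empty bins
    Q = Q / Q.sum() + eps
    return float(np.sum(P * np.log(P / Q)))

t = np.linspace(0, 1, 1000)
intentional = 0.5 * t                        # slow drift (intentional motion)
jitter = 0.2 * np.sin(2 * np.pi * 40 * t)    # fast oscillation (jitter mode)
# a mode resembling the jitter reference gets a small RE, a dissimilar one a large RE
re_jitter = relative_entropy(jitter, 0.19 * np.sin(2 * np.pi * 40 * t + 0.1))
re_drift = relative_entropy(jitter, intentional)
```

Thresholding such RE scores is what lets the method label each mode as jitter or intentional before summing the jitter modes for removal.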
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
Efficient Delaunay Tessellation through K-D Tree Decomposition
Energy Technology Data Exchange (ETDEWEB)
Morozov, Dmitriy; Peterka, Tom
2017-08-21
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
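The median-split idea behind the k-d tree decomposition can be sketched in a few lines. This is an illustrative, in-memory stand-in for the paper's distributed algorithm (no MPI, no Delaunay step): points are recursively split at the median along alternating axes, so every block receives an equal share even when the input density is heavily skewed.

```python
import numpy as np

def kd_partition(points, depth_levels):
    """Recursively split points at the median along alternating axes,
    producing 2**depth_levels balanced blocks (one per process)."""
    dims = points.shape[1]
    blocks = [points]
    for level in range(depth_levels):
        axis = level % dims
        nxt = []
        for blk in blocks:
            order = np.argsort(blk[:, axis], kind="stable")
            mid = len(blk) // 2
            nxt.append(blk[order[:mid]])   # below the median split point
            nxt.append(blk[order[mid:]])   # at or above the median
        blocks = nxt
    return blocks

rng = np.random.default_rng(7)
pts = rng.standard_normal((1000, 3)) * [5.0, 1.0, 1.0]  # deliberately anisotropic
blocks = kd_partition(pts, 3)    # 8 blocks for 8 hypothetical processes
sizes = [len(b) for b in blocks]
```

A regular grid over the same anisotropic point set would leave some cells nearly empty and others overloaded; the median splits guarantee balanced work per process, which is the source of the reported speedups.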
General Services Administration — This dataset offers the list of all .gov domains, including state, local, and tribal .gov domains. It does not include .mil domains, or other federal domains outside...
Non-equilibrium theory of arrested spinodal decomposition
Energy Technology Data Exchange (ETDEWEB)
Olais-Govea, José Manuel; López-Flores, Leticia; Medina-Noyola, Magdaleno [Instituto de Física “Manuel Sandoval Vallarta,” Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000 San Luis Potosí, SLP (Mexico)
2015-11-07
The non-equilibrium self-consistent generalized Langevin equation theory of irreversible relaxation [P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E 82, 061503 (2010); 82, 061504 (2010)] is applied to the description of the non-equilibrium processes involved in the spinodal decomposition of suddenly and deeply quenched simple liquids. For model liquids with hard-sphere plus attractive (Yukawa or square-well) pair potential, the theory predicts that the spinodal curve, besides being the threshold of the thermodynamic stability of homogeneous states, is also the borderline between the regions of ergodic and non-ergodic homogeneous states. It also predicts that the high-density liquid-glass transition line, whose high-temperature limit corresponds to the well-known hard-sphere glass transition, at lower temperature intersects the spinodal curve and continues inside the spinodal region as a glass-glass transition line. Within the region bounded from below by this low-temperature glass-glass transition and from above by the spinodal dynamic arrest line, we can recognize two distinct domains with qualitatively different temperature dependence of various physical properties. We interpret these two domains as corresponding to full gas-liquid phase separation conditions and to the formation of physical gels by arrested spinodal decomposition. The resulting theoretical scenario is consistent with the corresponding experimental observations in a specific colloidal model system.
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all the models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
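The wavelet pre-processing stage can be illustrated with a one-level Haar transform, the simplest member of the Daubechies family (db1). This NumPy sketch is a stand-in for the db10/Symmlet/Coiflet decompositions evaluated in the paper; in the hybrid models, the approximation and detail series would then feed the ANN or ANFIS stage. The water-level series below is synthetic.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar wavelet decomposition into an approximation
    (low-pass) and a detail (high-pass) component."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse transform: interleave the even/odd samples back."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

levels = np.sin(np.linspace(0, 4 * np.pi, 64)) + 10.0  # synthetic daily water level
a, d = haar_decompose(levels)   # sub-series fed to the forecasting model
```

The transform is exactly invertible, so splitting the input into sub-series loses no information; the gain comes from each sub-series being easier for the learner to fit.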
Decomposition kinetics of plutonium hydride
Energy Technology Data Exchange (ETDEWEB)
Haschke, J.M.; Stakebake, J.L.
1979-01-01
Kinetic data for the decomposition of PuH(1.95) provide insight into a possible mechanism for the hydriding and dehydriding reactions of plutonium. The fact that the rate of the hydriding reaction, K_H, is proportional to P^(1/2) and the rate of the dehydriding process, K_D, is inversely proportional to P^(1/2) suggests that the forward and reverse reactions proceed by opposite paths of the same mechanism. The P^(1/2) dependence of hydrogen solubility in metals is characteristic of the dissociative absorption of hydrogen; i.e., the reactive species is atomic hydrogen. It is reasonable to assume that the rates of the forward and reverse reactions are controlled by the surface concentration of atomic hydrogen, (H_s); that K_H = c'(H_s); and that K_D = c/(H_s), where c' and c are proportionality constants. For this surface model, the pressure dependence of K_D is related to (H_s) by the reaction (H_s) ⇌ 1/2 H2(g) and by its equilibrium constant K_e = (H2)^(1/2)/(H_s). In the pressure range of ideal gas behavior, (H_s) = K_e^(-1)(RT)^(-1/2)P^(1/2) and the decomposition rate is given by K_D = cK_e(RT)^(1/2)P^(-1/2). For an analogous treatment of the hydriding process with this model, it can readily be shown that K_H = c'K_e^(-1)(RT)^(-1/2)P^(1/2). The inverse pressure dependence and direct temperature dependence of the decomposition rate are correctly predicted by this mechanism, which is most consistent with the observed behavior of the Pu-H system.
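The pressure dependences in the surface model can be checked numerically. All constants below (c, c', K_e, RT) are set to illustrative values, not measured ones; the point is only that K_D scales as P^(-1/2), K_H as P^(1/2), and their product is independent of pressure, as the opposite-path mechanism requires.

```python
# Illustrative constants only (not measured values for the Pu-H system)
def k_hydride(P, RT=1.0, c_prime=1.0, K_e=1.0):
    # K_H = c' K_e^-1 (RT)^-1/2 P^1/2  -- hydriding rate
    return c_prime / K_e * RT ** -0.5 * P ** 0.5

def k_decomp(P, RT=1.0, c=1.0, K_e=1.0):
    # K_D = c K_e (RT)^1/2 P^-1/2  -- dehydriding (decomposition) rate
    return c * K_e * RT ** 0.5 * P ** -0.5

P1, P2 = 0.5, 8.0
ratio = k_decomp(P1) / k_decomp(P2)          # equals (P2/P1)^(1/2) = 4.0
product_1 = k_hydride(P1) * k_decomp(P1)     # K_H * K_D = c c' / RT,
product_2 = k_hydride(P2) * k_decomp(P2)     # independent of pressure
```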
A hybrid numerical method for orbit correction
International Nuclear Information System (INIS)
White, G.; Himel, T.; Shoaee, H.
1997-09-01
The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space in the system, followed by a bounded Linear Least Squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings
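A minimal sketch of the two-stage idea, with an invented response matrix: SVD truncation removes the degenerate (null-space) directions, a least-squares solve is done in the reduced basis, and the corrector strengths are then clipped to bounds. The clipping is a crude stand-in for the proper bounded linear least-squares stage described in the paper.

```python
import numpy as np

def orbit_correction(R, orbit, bound, tol=1e-10):
    """SVD-based correction sketch: discard near-zero singular directions,
    solve the reduced least-squares problem for kicks that cancel the
    measured orbit, then clip to |kick| <= bound (a simple stand-in for
    the bounded linear least-squares step)."""
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    keep = S > tol * S[0]                    # remove the null-space
    kicks = Vt[keep].T @ ((U[:, keep].T @ (-orbit)) / S[keep])
    return np.clip(kicks, -bound, bound)

rng = np.random.default_rng(5)
R = rng.standard_normal((20, 8))     # hypothetical 20 BPMs x 8 correctors
R[:, 7] = R[:, 6]                    # two identical correctors -> degenerate system
true_kicks = rng.uniform(-0.5, 0.5, 8)
orbit = R @ true_kicks               # measured orbit produced by unknown kicks
kicks = orbit_correction(R, orbit, bound=2.0)
residual = np.linalg.norm(orbit + R @ kicks)
```

A plain least-squares solve would be ill-conditioned because of the duplicated corrector; the truncation yields the minimum-norm correction, which splits the kick between the two degenerate correctors while still nulling the orbit.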
Early stage litter decomposition across biomes
Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. 
Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et
2018-01-01
Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...
Nutrient Dynamics and Litter Decomposition in Leucaena ...
African Journals Online (AJOL)
Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...
Climate history shapes contemporary leaf litter decomposition
Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford
2015-01-01
Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like those of other organisms, the traits of decomposers and their communities are shaped not just by the contemporary climate but also by their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...
The decomposition of estuarine macrophytes under different ...
African Journals Online (AJOL)
The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...
Decomposition and flame structure of hydrazinium nitroformate
Louwers, J.; Parr, T.; Hanson-Parr, D.
1999-01-01
The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The
Multilevel index decomposition analysis: Approaches and application
International Nuclear Information System (INIS)
Xu, X.Y.; Ang, B.W.
2014-01-01
With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate
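The single-level building block that both multilevel models extend is an ordinary IDA decomposition; with the widely used LMDI-I method it takes only a few lines. The sketch below uses invented energy and activity figures: the additive activity and intensity effects sum exactly to the observed change in energy use, which is the consistency property a multilevel (M-P or M-H) procedure must preserve at every level.

```python
import numpy as np

def lmdi_decompose(E0, E1, A0, A1):
    """Additive LMDI-I: split the change in total energy use (E1 - E0)
    into an activity effect and an intensity effect."""
    def L(a, b):                        # logarithmic mean
        return a if a == b else (a - b) / np.log(a / b)
    I0, I1 = E0 / A0, E1 / A1           # energy intensities E/A
    w = L(E1, E0)
    act = w * np.log(A1 / A0)           # effect of activity growth
    inten = w * np.log(I1 / I0)         # effect of intensity change
    return act, inten

# Invented figures: energy use rises 100 -> 130 while activity rises 50 -> 70
act, inten = lmdi_decompose(E0=100.0, E1=130.0, A0=50.0, A1=70.0)
change = act + inten                    # equals E1 - E0 exactly (no residual)
```

The zero-residual property follows from the logarithmic-mean weights: L(E1, E0) · ln(E1/E0) = E1 - E0, so the effects always add up to the total change.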
Jodin, Gurvan; Scheller, Johannes; Rouchon, Jean-François; Braza, Marianna; Mit Collaboration; Imft Collaboration; Laplace Collaboration
2016-11-01
A quantitative characterization of the effects obtained by high frequency-low amplitude trailing edge actuation is performed. Particle image velocimetry, as well as pressure and aerodynamic force measurements, are carried out on an airfoil model. This hybrid morphing wing model is equipped with both trailing edge piezoelectric-actuators and camber control shape memory alloy actuators. It will be shown that this actuation allows for an effective manipulation of the wake turbulent structures. Frequency domain analysis and proper orthogonal decomposition show that proper actuating reduces the energy dissipation by favoring more coherent vortical structures. This modification in the airflow dynamics eventually allows for a tapering of the wake thickness compared to the baseline configuration. Hence, drag reductions relative to the non-actuated trailing edge configuration are observed. Massachusetts Institute of Technology.
DEFF Research Database (Denmark)
Schaub Jr, Gary John; Murphy, Martin; Hoffman, Frank
2017-01-01
Russia’s use of hybrid warfare techniques has raised concerns about the security of the Baltic States. Gary Schaub, Jr, Martin Murphy and Frank G Hoffman recommend a series of measures to augment NATO’s Readiness Action Plan in the Baltic region, including increasing the breadth and depth of naval exercises, and improving maritime domain awareness through cooperative programmes. They also suggest unilateral and cooperative measures to develop a sound strategic communications strategy to counter Moscow’s information operations, reduce dependence on Russian energy supplies and build the resilience...
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)
2014-05-15
In this paper, the characteristics of a parallel algorithm for solving an elliptic-type equation in CUPID via a domain decomposition method using MPI are presented, and the parallel performance is estimated in terms of scalability (speedup ratio). In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, as is the effect of the preconditioner for the matrix solver. Finally, the hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to meet the recent demand for long-transient, highly resolved multi-phase flow simulation. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, with the MPI library adopted to communicate information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of matrix size and preconditioning were investigated. The fine-mesh calculation shows better scalability than the coarse mesh, whose smaller cell count does not warrant decomposing the computational domain as finely. The fine mesh can show good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
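The CSR (compressed sparse row) storage format mentioned in the abstract can be illustrated with a minimal matrix-vector product. This is a generic sketch, not CUPID code; all names and the example matrix are illustrative.

```python
import numpy as np

# CSR stores only the nonzeros (data), their column indices, and row offsets
# (indptr). y = A @ x then walks each row's nonzero slice.
def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 tridiagonal example: [[4,1,0],[1,4,1],[0,1,4]]
data = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # [5. 6. 5.]
```

In a domain-decomposed solver, each subdomain would hold such a local CSR block and exchange halo values of `x` with its neighbors before the product.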
CSIR Research Space (South Africa)
Jacob John, Maya
2009-04-01
Full Text Available mixed short sisal/glass hybrid fibre reinforced low density polyethylene composites was investigated by Kalaprasad et al [25].Chemical surface modifications such as alkali, acetic anhydride, stearic acid, permanganate, maleic anhydride, silane...
In situ study of glasses decomposition layer
International Nuclear Information System (INIS)
Zarembowitch-Deruelle, O.
1997-01-01
The aim of this work is to understand the mechanisms involved in the decomposition of glasses by water and their consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass, R7T7. Since the chemical composition of this glass is very complicated, it is difficult to determine the influence of the different elements on the decomposition kinetics and on the resulting morphology, because several atoms behave alike. Glasses with a simplified composition (only 5 elements) have therefore been synthesized. The morphological and structural characteristics of these glasses are given. They were then decomposed by water. The leaching curves do not reflect the decomposition kinetics but rather the solubility of the different elements at each moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) heavy-element solubilization. Two types of decomposition layer have also been revealed, depending on the glass's heavy-element content. (O.M.)
Multilinear operators for higher-order decompositions.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
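The Kruskal operator described above, the sum of outer products of corresponding columns of N matrices, can be sketched for three factor matrices with NumPy. This is a generic illustration of the operator's definition, not Kolda's implementation.

```python
import numpy as np

# Kruskal operator for three factors: tensor[i,j,k] = sum_r A[i,r] B[j,r] C[k,r],
# i.e. the CP/PARAFAC reconstruction, written concisely with einsum.
def kruskal(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((5, 2))
T = kruskal(A, B, C)

# Cross-check against the explicit sum of rank-one outer products.
T_loop = sum(
    np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
    for r in range(2)
)
print(np.allclose(T, T_loop))  # True
```

The einsum form avoids ever materializing a matricized representation, which is exactly the notational economy the operator is meant to provide.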
Cetorelli, Nicola
2014-01-01
I introduce the concept of hybrid intermediaries: financial conglomerates that control a multiplicity of entity types active in the "assembly line" process of modern financial intermediation, a system that has become known as shadow banking. The complex bank holding companies of today are the best example of hybrid intermediaries, but I argue that financial firms from the "nonbank" space can just as easily evolve into conglomerates with similar organizational structure, thus acquiring the cap...
Management intensity alters decomposition via biological pathways
Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory
2011-01-01
Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future
Parallel Algorithms for Graph Optimization using Tree Decompositions
Energy Technology Data Exchange (ETDEWEB)
Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL
2012-06-01
Although many $\\cal{NP}$-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
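The dynamic programme behind such tree-decomposition solvers can be shown in its simplest setting: maximum weighted independent set on a tree itself (a tree is its own width-1 decomposition). This hedged sketch omits the bag bookkeeping and parallelism of the actual algorithm; all names are illustrative.

```python
# DP over a rooted tree: incl[v]/excl[v] = best weight in v's subtree
# with v included in / excluded from the independent set.
def mwis_tree(adj, weights, root=0):
    order, parent, stack = [], {root: None}, [root]
    while stack:                      # iterative DFS to get a processing order
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    incl, excl = {}, {}
    for v in reversed(order):         # leaves first
        incl[v] = weights[v]
        excl[v] = 0
        for u in adj[v]:
            if u != parent[v]:
                incl[v] += excl[u]              # v taken: children excluded
                excl[v] += max(incl[u], excl[u])  # v not taken: free choice
    return max(incl[root], excl[root])

adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
w = {0: 1, 1: 10, 2: 2, 3: 3, 4: 4}
print(mwis_tree(adj, w))  # 12 (vertices 1 and 2)
```

On a general graph of bounded tree-width the same include/exclude table is kept per bag of the decomposition, which is where the memory pressure discussed in the abstract comes from.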
Parallel pic plasma simulation through particle decomposition techniques
International Nuclear Information System (INIS)
Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'
1998-02-01
Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the details of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanisms in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem.
LMDI decomposition approach: A guide for implementation
International Nuclear Information System (INIS)
Ang, B.W.
2015-01-01
Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000, some 15 years ago. After giving an update and presenting the key trends of the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing the LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments in index decomposition analysis are briefly reviewed.
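One of the LMDI variants, the additive LMDI-I form for a two-factor identity E = Σᵢ Qᵢ·Iᵢ (activity Q, energy intensity I per sector i), can be sketched as follows. The data are invented for illustration; the key property shown is that the factor effects sum exactly to the total change in E, with no residual.

```python
import math

# Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), the weight in LMDI-I.
def logmean(a, b):
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(Q0, I0, QT, IT):
    act = inten = 0.0
    for q0, i0, qT, iT in zip(Q0, I0, QT, IT):
        L = logmean(qT * iT, q0 * i0)    # weight from sectoral energy use
        act += L * math.log(qT / q0)     # activity effect
        inten += L * math.log(iT / i0)   # intensity effect
    return act, inten

Q0, I0 = [100.0, 50.0], [2.0, 4.0]       # base year (illustrative)
QT, IT = [120.0, 55.0], [1.8, 3.9]       # target year (illustrative)
act, inten = lmdi_additive(Q0, I0, QT, IT)
dE = sum(q * i for q, i in zip(QT, IT)) - sum(q * i for q, i in zip(Q0, I0))
print(abs(act + inten - dE) < 1e-9)  # True: perfect decomposition, no residual
```

This residual-free property is the main reason for LMDI's dominance among the IDA methods surveyed above.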
Thermal decomposition of beryllium perchlorate tetrahydrate
International Nuclear Information System (INIS)
Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.
1975-01-01
Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed via an exchange reaction between the perchloric acid released during decomposition and potassium carbonate; the rate of CO2 liberation in this reaction was recorded by a heat-conductivity detector. The exchange reaction yielding CO2 is quantitative, is not rate-limiting, and does not distort the kinetics of the perchlorate decomposition. The solid decomposition products were studied by infrared and NMR spectroscopy, X-ray diffraction, thermography and chemical analysis. The suggested decomposition mechanism involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample, and the mechanism is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after prolonged storage.
Thermal decomposition of lanthanide and actinide tetrafluorides
International Nuclear Information System (INIS)
Gibson, J.K.; Haire, R.G.
1988-01-01
The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides TbF4, CmF4, and AmF4 have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to congruently sublime as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs
Decomposition of lake phytoplankton. 1
International Nuclear Information System (INIS)
Hansen, L.; Krog, G.F.; Soendergaard, M.
1986-01-01
Short-time (24 h) and long-time (4-6 d) decomposition of phytoplankton cells was investigated under in situ conditions in four Danish lakes. Carbon-14-labelled, dead algae were exposed to sterile or natural lake water, and the dynamics of cell lysis and bacterial utilization of the leached products were followed. The lysis process was dominated by an initial fast water extraction. Within 2 to 4 h, from 4 to 34% of the labelled carbon leached from the algal cells. After 24 h, from 11 to 43% of the initial particulate carbon was found as dissolved carbon in the experiments with sterile lake water; after 4 to 6 d the leaching amounted to 67 to 78% of the initial 14C. The leached compounds were utilized by bacteria. A comparison of the incubations using sterile and natural water showed that a mean of 71% of the lysis products was metabolized by microorganisms within 24 h. In two experiments the uptake rate equalled the leaching rate. (author)
Decomposition of lake phytoplankton. 2
International Nuclear Information System (INIS)
Hansen, L.; Krog, G.F.; Soendergaard, M.
1986-01-01
The lysis process of phytoplankton was followed in 24 h incubations in three Danish lakes. By means of gel-chromatography it was shown that the dissolved carbon leaching from different algal groups differed in molecular weight composition. Three distinct molecular weight classes (>10,000; 700 to 10,000 and < 700 Daltons) leached from blue-green algae in almost equal proportion. The lysis products of spring-bloom diatoms included only the two smaller size classes, and the molecules between 700 and 10,000 Daltons dominated. Measurements of cell content during decomposition of the diatoms revealed polysaccharides and low molecular weight compounds to dominate the lysis products. No proteins were leached during the first 24 h after cell death. By incubating the dead algae in natural lake water, it was possible to detect a high bacterial affinity towards molecules between 700 and 10,000 Daltons, although the other size classes were also utilized. Bacterial transformation of small molecules to larger molecules could be demonstrated. (author)
Kinetics and hybrid kinetic-fluid models for nonequilibrium gas and plasmas
International Nuclear Information System (INIS)
Crouseilles, N.
2004-12-01
For a few decades, the physics of plasmas has found application in different fields such as laser-matter interaction, astrophysics and thermonuclear fusion. In this thesis, we are interested in the modeling and numerical study of nonequilibrium gases and plasmas. Two descriptions are usually used for such systems: the fluid description and the kinetic description. When a nonequilibrium system is studied, fluid models are not sufficient and a kinetic description has to be used. However, solving a kinetic model requires the discretization of a large number of variables, which is quite expensive from a numerical point of view. The aim of this work is to propose a hybrid kinetic-fluid model based on a domain decomposition method in velocity space. The derivation of the hybrid model is done in two different contexts: the rarefied gas context and the more complicated plasma context. The derivation partly relies on Levermore's entropy minimization approach. The resulting model is then discretized and validated on various numerical test cases. In a second stage, a numerical study of a fully kinetic model is presented. A collisional plasma constituted of electrons and ions is considered through the Vlasov-Poisson-Fokker-Planck-Landau equation, and a numerical scheme which preserves total mass and total energy is presented. This discretization permits, in particular, a numerical study of Landau damping. (author)
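The velocity-space domain decomposition idea can be caricatured in one dimension: below a cutoff speed the distribution is handed to a fluid (moment) description, above it the kinetic tail is kept. This toy sketch is not the thesis's scheme; the cutoff and grid are invented, and it only checks that the two domains recombine to the full density.

```python
import numpy as np

# 1-D equilibrium (Maxwellian) distribution on a velocity grid.
v = np.linspace(-8.0, 8.0, 4001)
f = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

vc = 3.0                                   # velocity cutoff (assumed)
bulk = np.where(np.abs(v) <= vc, f, 0.0)   # fluid domain: represented by moments
tail = np.where(np.abs(v) > vc, f, 0.0)    # kinetic domain: kept as a distribution

dv = v[1] - v[0]
n_bulk = bulk.sum() * dv                   # density carried by the fluid part
n_tail = tail.sum() * dv                   # density carried by the kinetic tail
print(abs(n_bulk + n_tail - 1.0) < 1e-3)   # True: the split conserves density
```

The actual hybrid model couples the two domains through matching conditions at the cutoff rather than a hard partition, but the conservation constraint illustrated here is the same.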
Hybrid parallel strategy for the simulation of fast transient accidental situations at reactor scale
International Nuclear Information System (INIS)
Faucher, V.; Galon, P.; Beccantini, A.; Crouzet, F.; Debaud, F.; Gautier, T.
2015-01-01
Highlights: • Reference accidental situations for current and future reactors are considered. • They require the modeling of complex fluid–structure systems at full reactor scale. • EPX software computes the non-linear transient solution with explicit time stepping. • Focus on the parallel hybrid solver specific to the proposed coupled equations. - Abstract: This contribution is dedicated to the latest methodological developments implemented in the fast transient dynamics software EUROPLEXUS (EPX) to simulate the mechanical response of fully coupled fluid–structure systems to accidental situations to be considered at reactor scale, among which the Loss of Coolant Accident, the Core Disruptive Accident and the Hydrogen Explosion. Time integration is explicit and the search for reference solutions within the safety framework prevents any simplification and approximations in the coupled algorithm: for instance, all kinematic constraints are dealt with using Lagrange Multipliers, yielding a complex flow chart when non-permanent constraints such as unilateral contact or immersed fluid–structure boundaries are considered. The parallel acceleration of the solution process is then achieved through a hybrid approach, based on a weighted domain decomposition for distributed memory computing and the use of the KAAPI library for self-balanced shared memory processing inside subdomains
A Decomposition Theorem for Finite Automata.
Santa Coloma, Teresa L.; Tucci, Ralph P.
1990-01-01
Automata theory, a branch of theoretical computer science, is described. A decomposition theorem that is simpler than the Krohn-Rhodes theorem is presented. Included are the definitions, the theorem, and a proof. (KR)
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
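The four-stage 'denoising, decomposition and ensemble' pipeline can be sketched with simple stand-ins: a moving average in place of EMD denoising, a trend/residual split in place of EEMD, and persistence forecasts in place of the RBFNN and LNN models. Everything here is an illustrative substitute for the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)
# synthetic 'hydrological' series: trend + seasonal cycle + noise (invented)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.2 * rng.standard_normal(120)

# stage 1: denoise (centred moving average stands in for EMD-based denoising)
denoised = np.convolve(series, np.ones(5) / 5, mode='same')

# stage 2: decompose into a slow trend and an oscillatory residual
# (stands in for the EEMD split into IMFs plus residue)
trend = np.convolve(denoised, np.ones(25) / 25, mode='same')
residual = denoised - trend

# stage 3: forecast each component separately (persistence stands in for RBFNN)
trend_fc, resid_fc = trend[-1], residual[-1]

# stage 4: ensemble -- recombine component forecasts (stands in for the LNN)
forecast = trend_fc + resid_fc
print(float(np.isfinite(forecast)))  # a single finite one-step-ahead value
```

The design point the paper argues, and that this skeleton preserves, is that each component is simpler to predict than the raw series, so errors shrink before the ensemble stage recombines them.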
Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory
Lucia, David J.; Beran, Philip S.; Silva, Walter A.
2003-01-01
This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free-boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
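The POD step itself reduces to an SVD of a snapshot matrix: the leading left singular vectors are the POD modes. A minimal sketch with synthetic rank-3 snapshot data (all dimensions invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n_space, n_snap = 200, 30
# synthetic snapshots with exactly 3 underlying spatial structures
modes_true = rng.standard_normal((n_space, 3))
coeffs = rng.standard_normal((3, n_snap))
X = modes_true @ coeffs                      # snapshot matrix, one column per time

# POD modes = left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pod_basis = U[:, :3]                         # keep the 3 leading modes

# project snapshots onto the reduced basis and reconstruct
X_hat = pod_basis @ (pod_basis.T @ X)
print(np.allclose(X, X_hat))  # True: 3 modes capture the rank-3 data exactly
```

In the aeroelastic setting above, the reduced coordinates `pod_basis.T @ X` are the modal amplitudes that the Volterra-derived linear ODEs evolve in time.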
Joint Matrices Decompositions and Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.
2014-01-01
Roč. 31, č. 3 (2014), s. 34-43 ISSN 1053-5888 R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : joint matrices decomposition * tensor decomposition * blind source separation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 5.852, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf
Review on Thermal Decomposition of Ammonium Nitrate
Chaturvedi, Shalini; Dave, Pragnesh N.
2013-01-01
In this review, data from the literature on the thermal decomposition of ammonium nitrate (AN) and the effect of additives on its thermal decomposition are summarized. The effects of additives such as oxides, cations, inorganic acids, organic compounds, phase-stabilized CuO, etc., are discussed. The effect of an additive mainly occurs at the exothermic peak of pure AN, in a temperature range of 200°C to 140°C.
Note on Symplectic SVD-Like Decomposition
Directory of Open Access Journals (Sweden)
AGOUJIL Said
2016-02-01
Full Text Available The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD-like decomposition) of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues for structured matrices such as the Hamiltonian matrix JAA^T.
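The structured matrix mentioned at the end can be checked numerically: with J = [[0, I], [-I, 0]], the matrix H = JAAᵀ is Hamiltonian, meaning (JH)ᵀ = JH (here JH = -AAᵀ, which is symmetric). A small sketch with invented dimensions:

```python
import numpy as np

n = 3
rng = np.random.default_rng(3)
I = np.eye(n)
# the standard symplectic form J of size 2n x 2n
J = np.block([[np.zeros((n, n)), I],
              [-I, np.zeros((n, n))]])

A = rng.standard_normal((2 * n, 5))   # arbitrary 2n-by-m real matrix
H = J @ A @ A.T                       # the Hamiltonian matrix JAA^T

# Hamiltonian test: J H must be symmetric (J @ J = -I, so J H = -A A^T)
JH = J @ H
print(np.allclose(JH, JH.T))  # True
```

It is this Hamiltonian structure that lets the symplectic SVD-like factors recover the eigenvalues of JAAᵀ from the singular-value data.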
Interdiffusion and Spinodal Decomposition in Electrically Conducting Polymer Blends
Directory of Open Access Journals (Sweden)
Antti Takala
2015-08-01
Full Text Available Phase morphology has become central to the efficiency of electrically conducting polymer composites in their various functional applications, in which the continuity of the electroactive paths in multicomponent systems is essential. For instance, in bulk heterojunction organic solar cells, where photon absorption induces light-driven electron transfer and creates excitons (electron-hole pairs), the diffusion of the spatially localized excitons, their dissociation at the interface, and the effective collection of holes and electrons all depend on the surface area, domain sizes, and connectivity of these organic semiconductor blends. We have used a model semiconductor polymer blend with defined miscibility to investigate the phase separation kinetics and the formation of connected pathways. Temperature-jump experiments were applied from a miscible region of blends of semiconducting poly(alkylthiophene) (PAT) with ethylene-vinyl acetate elastomers (EVA), and the kinetics at the early stages of phase separation were evaluated in order to establish a bicontinuous phase morphology via spinodal decomposition. The diffusion in the blend was followed by two methods: first, during the separation of a miscible phase into two phases, from the measurement of the spinodal decomposition; second, by monitoring the interdiffusion of a PAT film into an EVA film at selected temperatures and comparing the temperature-dependent diffusion characteristics. With this first quantitative evaluation of spinodal decomposition as well as interdiffusion in conducting polymer blends, we show that systematic control of the phase separation kinetics in a polymer blend with an electrically conducting component can be used to optimize the morphology.
Microbiological decomposition of bagasse after radiation pasteurization
International Nuclear Information System (INIS)
Ito, Hitoshi; Ishigaki, Isao
1987-01-01
Microbiological decomposition of bagasse was studied with a view to upgrading it to animal feed after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of inorganic salts as a nitrogen source and, after irradiation, were inoculated with fungi for cultivation. In this study, many kinds of cellulosic fungi, such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride, were used to compare the decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34% of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10%. In contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47%, comparable to the Pleurotus species or C. cinereus. Other mushroom species such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms. (author)
Decomposition of tetrachloroethylene by ionizing radiation
International Nuclear Information System (INIS)
Hakoda, T.; Hirota, K.; Hashimoto, S.
1998-01-01
Decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information on the treatment of industrial off-gas. Model gases (air containing chloroethenes) were confined in batch reactors and irradiated with electron beams and gamma rays. The G-values of decomposition decreased in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene for electron beam irradiation, and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene for gamma-ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, the G-values of decomposition under EB irradiation increased with the number of chlorine atoms per molecule, while those under gamma-ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene under EB irradiation was the largest among all the chloroethenes. To examine the effect of the initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with electron beams and gamma rays. The G-values of decomposition under both types of irradiation increased with the initial concentration; those under electron beam irradiation were about two times larger than those under gamma-ray irradiation
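For readers unfamiliar with G-values, the quantity reported above (molecules transformed per 100 eV of absorbed energy) can be sketched as below; the helper name `g_value` and all numeric inputs are illustrative assumptions, not data from the paper:

```python
# G-value: molecules decomposed per 100 eV of absorbed radiation energy.
N_A = 6.022e23        # Avogadro's number, molecules per mol
EV_PER_J = 6.242e18   # electron-volts per joule

def g_value(delta_c_mol_per_L, volume_L, dose_Gy, mass_kg):
    """Molecules transformed per 100 eV absorbed (1 Gy = 1 J/kg)."""
    molecules = delta_c_mol_per_L * volume_L * N_A
    energy_eV = dose_Gy * mass_kg * EV_PER_J
    return 100.0 * molecules / energy_eV

# illustrative numbers: 1 µmol/L decomposed in 1 L after 1 kGy in 1.2 g of gas
print(round(g_value(1e-6, 1.0, 1000.0, 0.0012), 2))  # ≈ 8.04
```

A larger G-value at higher initial concentration, as reported above, simply means more molecules are transformed per unit of absorbed energy.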
Microbiological decomposition of bagasse after radiation pasteurization
Energy Technology Data Exchange (ETDEWEB)
Ito, Hitoshi; Ishigaki, Isao
1987-11-01
Microbiological decomposition of bagasse was studied for upgrading it to animal feed after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of inorganic salts as a nitrogen source and, after irradiation, were inoculated with fungi for cultivation. In this study, many kinds of cellulolytic fungi, such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride, were used to compare the decomposition of crude fibers. In alkali-untreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. decomposed 25 to 34% of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10%. In contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47%, comparable with the Pleurotus species or C. cinereus. Other species of mushrooms, such as L. edodes, showed little decomposition ability even after alkali treatment. Radiation treatment at 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms.
A thermodynamic definition of protein domains.
Porter, Lauren L; Rose, George D
2012-06-12
Protein domains are conspicuous structural units in globular proteins, and their identification has been a topic of intense biochemical interest dating back to the earliest crystal structures. Numerous disparate domain identification algorithms have been proposed, all involving some combination of visual intuition and/or structure-based decomposition. Instead, we present a rigorous, thermodynamically based approach that redefines domains as cooperative chain segments. In greater detail, most small proteins fold with high cooperativity, meaning that the equilibrium population is dominated by completely folded and completely unfolded molecules, with a negligible subpopulation of partially folded intermediates. Here, we redefine structural domains in thermodynamic terms as cooperative folding units, based on m-values, which measure the cooperativity of a protein or its substructures. In our analysis, a domain is equated to a contiguous segment of the folded protein whose m-value is largely unaffected when that segment is excised from its parent structure. Defined in this way, a domain is a self-contained cooperative unit; i.e., its cooperativity depends primarily upon intrasegment interactions, not intersegment interactions. Implementing this concept computationally, the domains in a large representative set of proteins were identified; all exhibit consistency with experimental findings. Specifically, our domain divisions correspond to the experimentally determined equilibrium folding intermediates in a set of nine proteins. The approach was also tested against a representative set of 71 additional proteins, again with confirmatory results. Our reframed interpretation of a protein domain transforms an indeterminate structural phenomenon into a quantifiable molecular property grounded in solution thermodynamics.
Indian Academy of Sciences (India)
Hybrid stars
Ashok Goyal, Department of Physics and Astrophysics, University of Delhi, Delhi 110 007, India
Abstract: Recently there have been important developments in the determination of neutron ... number and the electric charge. ... available to the system to rearrange concentration of charges for a given fraction of.
Collective identity formation in hybrid organizations.
Boulongne , Romain; Boxenbaum , Eva
2015-01-01
The present article examines the process of collective identity formation in the context of hybrid organizing. Empirically, we investigate hybrid organizing in a collaborative structure at the interface of two heterogeneous organizations in the domain of new renewable energies. We draw on the literature on knowledge sharing across organizational boundaries, particularly the notions of transfer, translation and transformation, to examine in real time how knowledge shari...
Effect of sulfation on the surface activity of CaO for N2O decomposition
International Nuclear Information System (INIS)
Wu, Lingnan; Hu, Xiaoying; Qin, Wu; Dong, Changqing; Yang, Yongping
2015-01-01
Graphical abstract: - Highlights: • Sulfation of the CaO(100) surface greatly deactivates its activity for N2O decomposition. • An increase in the degree of sulfation leads to a decrease in CaO surface activity for N2O decomposition. • Sulfation of CaSO3 into CaSO4 is the crucial step in deactivating the surface activity for N2O decomposition. • The electronic interaction at the CaO(100)/CaSO4(001) interface is limited to the bottom layer of CaSO4(001) and the top layer of CaO(100). • The CaSO4(001) and (010) surfaces show negligible catalytic ability for N2O decomposition. - Abstract: Limestone addition to circulating fluidized bed boilers for sulfur removal affects nitrous oxide (N2O) emission at the same time, but the mechanism by which the sulfation process influences the surface activity of CaO for N2O decomposition remains unclear. In this paper, we investigated the effect of sulfation on the surface properties and catalytic activity of CaO for N2O decomposition using density functional theory calculations. Sulfation of the CaO(100) surface by adsorption of a single gaseous SO2 or SO3 molecule forms stable local CaSO3 or CaSO4 on the CaO(100) surface, with strong hybridization between the S atom of SOx and the surface O anion. The formed local CaSO3 increases the energy barrier of N2O decomposition from 0.989 eV (on the CaO(100) surface) to 1.340 eV, and further sulfation into local CaSO4 remarkably increases the barrier to 2.967 eV. Sulfation of CaSO3 into CaSO4 is therefore the crucial step in deactivating the surface activity for N2O decomposition. Completely sulfated CaSO4(001) and (010) surfaces further validate the negligible catalytic ability of CaSO4 for N2O decomposition.
Dynamic mode decomposition for plasma diagnostics and validation
Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
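The core DMD algorithm the abstract describes — an SVD-based spatio-temporal matrix decomposition whose eigenvalues encode periodic temporal behavior — can be sketched in a few lines of NumPy. The toy traveling-wave data and the rank choice below are illustrative assumptions, not the HIT-SI dataset:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshots X (states at t_k) and Y (states at
    t_{k+1}), return the leading r eigenvalues and modes of the
    best-fit linear operator A with Y ≈ A X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]           # rank-r truncation
    Atilde = U.conj().T @ Y @ Vh.conj().T / s        # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T / s @ W / eigvals        # exact DMD modes
    return eigvals, modes

# toy data: a traveling wave sin(x + ω t) sampled on a 64-point grid
x = np.linspace(0, 2 * np.pi, 64)
t = np.arange(50) * 0.1
data = np.sin(x[:, None] + 1.3 * t[None, :])         # rank-2 field
X, Y = data[:, :-1], data[:, 1:]
eigvals, modes = dmd(X, Y, r=2)
# for a pure oscillation the eigenvalues lie on the unit circle
print(np.abs(eigvals))
```

The low-rank pair (eigenvalues, modes) is exactly the kind of reduced-order surrogate the abstract uses to reconstruct the plasma state.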
Gravity localization on hybrid branes
Directory of Open Access Journals (Sweden)
D.F.S. Veras
2016-03-01
This work deals with gravity localization on codimension-1 brane worlds engendered by compacton-like kinks, the so-called hybrid branes. In such scenarios, thin-brane behavior is manifested when the extra dimension lies outside the compact domain, where the energy density is non-trivial, instead of asymptotically as in the usual thick-brane models. The zero mode is trapped on the brane, as required. The massive modes, although not localized on the brane, have important phenomenological implications such as corrections to Newton's law. We study such corrections in the usual thick domain wall and in the hybrid brane scenarios. By means of suitable numerical methods, we obtain the mass spectrum of the graviton and the corresponding wavefunctions. The spectra possess the usual linearly increasing behavior of Kaluza–Klein theories. Further, we show that the 4D gravitational force is slightly increased at short distances. The first eigenstate contributes most to the correction to Newton's law; the subsequent normalized solutions have diminishing contributions. Moreover, we find that the phenomenology of the hybrid brane does not differ from that of the usual thick domain wall. The use of numerical techniques for solving the equations of the massive modes is useful for matching possible phenomenological measurements of the gravitational law as a probe of warped extra dimensions.
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
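As a toy illustration of simplification performed as a sequence of edge collapses in a reachability graph, the sketch below merges regions greedily by an assumed per-edge collapse cost (standing in for the paper's comparison measure); the graph and cost values are invented:

```python
# reachability graph: (collapse cost, region u, region v); the cost is a
# stand-in for a comparison measure reflecting the change each collapse
# would impose on the data.
edges = [(0.1, "a", "b"), (0.5, "b", "c"), (0.3, "c", "d"), (0.9, "a", "d")]

def simplify(edges, n_collapses):
    """Collapse the n cheapest edges, merging regions union-find style."""
    parent = {}
    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x
    done = 0
    for cost, u, v in sorted(edges):          # cheapest collapses first
        if done == n_collapses:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru                   # edge collapse: merge regions
            done += 1
    return {find(x) for _, u, v in edges for x in (u, v)}

print(sorted(simplify(edges, 2)))             # 4 regions reduced to 2
```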
Automatic classification of visual evoked potentials based on wavelet decomposition
Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz
2017-04-01
Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the time-domain characteristic extremes, called waves. The decision process is complex; therefore, diagnosis depends significantly on the experience of the doctor. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that ensures automatic classification of visual evoked potentials. The algorithm assigns an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
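A minimal sketch of the pipeline described above — wavelet decomposition followed by linear discriminant analysis — using a hand-rolled Haar transform and a two-class Fisher discriminant on synthetic "evoked potentials"; the signal model, the class difference (a latency shift), and all parameters are invented for illustration:

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar wavelet decomposition; returns concatenated
    detail and approximation coefficients as a feature vector."""
    coeffs, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)              # approximation
    coeffs.append(a)
    return np.concatenate(coeffs)

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: weight vector and midpoint threshold."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T)               # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    return w, w @ (m0 + m1) / 2

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
normal = np.array([np.sin(6 * t) + 0.1 * rng.standard_normal(64) for _ in range(20)])
path = np.array([np.sin(6 * t + 0.8) + 0.1 * rng.standard_normal(64) for _ in range(20)])
F0 = np.array([haar_dwt(s) for s in normal])       # wavelet features
F1 = np.array([haar_dwt(s) for s in path])
w, thr = fisher_lda(F0, F1)
acc = (np.sum(F0 @ w < thr) + np.sum(F1 @ w > thr)) / 40
print(acc)
```

Real VEP classification would of course use clinically recorded waves and a validated feature selection, not this synthetic latency-shift model.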
Aridity and decomposition processes in complex landscapes
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) arises from slope orientation, through differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56–2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
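The single-pool negative exponential model mentioned above, m(t) = m₀·e^(−kt), can be fitted by log-linear regression on the fraction of litter mass remaining. The collection times below mirror the sampling design in the abstract, but the mass-loss data are synthetic:

```python
import numpy as np

def fit_k(t_months, mass_fraction):
    """Recover the decomposition rate k (per month) from the slope of
    a least-squares fit of ln(mass remaining) against time."""
    slope, _ = np.polyfit(t_months, np.log(mass_fraction), 1)
    return -slope

t = np.array([1, 2, 4, 7, 12], dtype=float)   # collection times (months)
true_k = 0.15
frac = np.exp(-true_k * t)                    # synthetic litter-bag data
print(round(fit_k(t, frac), 3))               # recovers 0.15
```

With field data the mass fractions would carry measurement noise, so the recovered k is an estimate rather than an exact recovery as in this noiseless example.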
Decomposition of forest products buried in landfills
International Nuclear Information System (INIS)
Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.
2013-01-01
Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
Decomposition of forest products buried in landfills
Energy Technology Data Exchange (ETDEWEB)
Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)
2013-11-15
Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
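The two summary quantities named above can be sketched as simple ratios. The lignin-normalized form of the HOD below is an assumption based on the common definition in landfill studies (lignin is treated as recalcitrant), and the example compositions are invented:

```python
def hod(ch_0, l_0, ch_t, l_t):
    """Holocellulose decomposition index (%): loss of cellulose +
    hemicellulose (C + H) relative to lignin (L), normalized by the
    fresh material. Assumed form: 100 * (1 - ((C+H)/L)_t / ((C+H)/L)_0)."""
    return 100.0 * (1.0 - (ch_t / l_t) / (ch_0 / l_0))

def csf(oc_remaining_g, dry_mass_g):
    """Carbon storage factor: g organic carbon stored per g dry material."""
    return oc_remaining_g / dry_mass_g

# invented example: fresh paper 80% C+H, 10% L; decomposed 20% C+H, 12% L
print(round(hod(80, 10, 20, 12), 1))   # large HOD = extensive decomposition
```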
Bertrand, G.; Comperat, M.; Lallemant, M.
1980-09-01
Copper sulfate pentahydrate dehydration into the trihydrate was investigated using monocrystalline platelets with (110) crystallographic orientation. Temperature and pressure conditions were selected so as to obtain elliptical trihydrate domains. The study deals with the evolution versus time of the elliptical domain dimensions, with the evolution versus water vapor pressure of the D/d ratio of the ellipse axes, and with the interface displacement rate along a given direction. The phenomena observed are not fundamentally different from those yielded by the overall kinetic study of the solid sample; their magnitude, however, is modulated depending on the displacement direction. The results are analyzed within the scope of our study of the endothermic decomposition of solids.
Directory of Open Access Journals (Sweden)
Sheng-Ping Yan
2014-01-01
We perform a comparison between the local fractional Adomian decomposition method and the local fractional function decomposition method applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods, both of which are very effective and straightforward for solving differential equations with local fractional derivatives.
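As a minimal illustration of the Adomian decomposition idea in its classical, integer-order form (not the local fractional variant treated in the paper), the sketch below solves y′ = y, y(0) = 1 by the recursion y₀ = 1, y_{n+1}(t) = ∫₀ᵗ yₙ(s) ds; the partial sums are polynomials that converge to eᵗ:

```python
import math

def integrate_poly(c):
    """Term-by-term integral from 0 to t of a polynomial with
    coefficients c[k] for t^k."""
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]

def adm_solve(n_terms):
    """Adomian decomposition for y' = y, y(0) = 1: accumulate the
    series components y_{n+1} = ∫ y_n starting from y_0 = 1."""
    term, total = [1.0], [1.0]
    for _ in range(n_terms - 1):
        term = integrate_poly(term)
        total = [a + b for a, b in
                 zip(total + [0.0] * (len(term) - len(total)), term)]
    return total                               # polynomial coefficients

def poly_eval(c, t):
    return sum(ck * t**k for k, ck in enumerate(c))

# 15 components already approximate e = y(1) to ~1e-12
print(abs(poly_eval(adm_solve(15), 1.0) - math.e) < 1e-8)  # True
```

For this linear problem the recursion reproduces the Taylor series of eᵗ; the strength of the method is that the same recursive structure extends to nonlinear terms via Adomian polynomials.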
Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent
Czech Academy of Sciences Publication Activity Database
Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.
2008-01-01
Roč. 14, č. 11 (2008), s. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords : climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
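A domain decomposition scheme of the kind SPDYN uses begins by binning atoms into spatial cells, which are then distributed over MPI ranks (each rank owning a block of cells plus halo neighbors). The sketch below shows only the serial cell-assignment step, with an invented box size and atom count; it is not GENESIS code:

```python
import numpy as np

def assign_cells(positions, box, n_cells):
    """Map each atom to an (i, j, k) cell index in a cubic periodic box
    of side `box`, divided into n_cells cells per dimension."""
    frac = (positions % box) / box                 # wrap into [0, box)
    return np.minimum((frac * n_cells).astype(int), n_cells - 1)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 50.0, size=(1000, 3))       # 1000 atoms, 50 Å box
cells = assign_cells(pos, 50.0, 4)                 # 4x4x4 = 64 domains
counts = np.bincount(np.ravel_multi_index(cells.T, (4, 4, 4)), minlength=64)
print(counts.sum())                                # every atom assigned
```

In a real MD engine each rank would then compute forces only for its own cells, exchanging halo particles with neighboring ranks each step.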
Lee, Jeong-Hyun; Lee, Seung-Hwan
2016-10-26
The hybrid supercapacitor using carbon/AlPO4 hybrid-coated H2Ti12O25 with activated carbon is fabricated as a cylindrical cell, and its electrochemical performance is investigated. The hybrid coating shows superior conductivity for electrons and Li ions, and it protects the active material from HF attack. Consequently, carbon/AlPO4 hybrid-coated H2Ti12O25 shows enhanced rate capability and long-term cycle life. The hybrid coating also inhibits the swelling phenomenon caused by gas generated by the decomposition reaction of the electrolyte. Therefore, the hybrid supercapacitor using carbon/AlPO4 hybrid-coated H2Ti12O25/activated carbon can be applied to energy storage systems that require long-term life.
Steganography based on pixel intensity value decomposition
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural-numbers-based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel values, and hence on stego quality, than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
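To make the idea of pixel intensity value decomposition concrete, here is a sketch of the Fibonacci (Zeckendorf) variant that the paper evaluates: each 8-bit pixel value is decomposed into 12 virtual bit-planes whose weighted sum reconstructs the value. (The authors' own 16-plane representation is not specified in the abstract, so it is not reproduced here.)

```python
# Fibonacci weights covering 0..255 give 12 virtual bit-planes.
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]

def to_fib_planes(value):
    """Greedy Zeckendorf decomposition of an 8-bit value: bits[i] is
    the coefficient of FIBS[i]; no two adjacent bits are both 1."""
    bits = [0] * len(FIBS)
    for i in range(len(FIBS) - 1, -1, -1):
        if FIBS[i] <= value:
            bits[i] = 1
            value -= FIBS[i]
    return bits

def from_fib_planes(bits):
    """Reconstruct the pixel value from its virtual bit-planes."""
    return sum(b * f for b, f in zip(bits, FIBS))

# every 8-bit value round-trips exactly
assert all(from_fib_planes(to_fib_planes(v)) == v for v in range(256))
print(to_fib_planes(200))   # 200 = 144 + 55 + 1
```

Embedding a secret bit then means flipping a coefficient in a chosen plane; lower-weight planes perturb the pixel value less, which is the capacity/quality trade-off the abstract discusses.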
Microbial Signatures of Cadaver Gravesoil During Decomposition.
Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T
2016-04-01
Genomic studies have estimated that there are approximately 10³–10⁶ bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimating time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3–303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results confirm ubiquitous Proteobacteria as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.
Thermal decomposition process of silver behenate
International Nuclear Information System (INIS)
Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang
2006-01-01
The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher-temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition, at 138 °C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition, at about 231 °C, a structural change took place: the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 °C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small-molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles.
Radiolytic decomposition of 4-bromodiphenyl ether
International Nuclear Information System (INIS)
Tang Liang; Xu Gang; Wu Wenjing; Shi Wenyan; Liu Ning; Bai Yulei; Wu Minghong
2010-01-01
Polybrominated diphenyl ethers (PBDEs), which are widespread in the environment, are mainly removed by photochemical and anaerobic microbial degradation. In this paper, the decomposition of 4-bromodiphenyl ether (BDE-3), a PBDE homologue, is investigated by electron beam irradiation of its ethanol/water solution (reduction system) and acetonitrile/water solution (oxidation system). The radiolytic products were determined by GC coupled with an electron capture detector, and the reaction rate constant of the solvated electron (e⁻sol) in the reduction system was measured at 2.7 × 10¹⁰ L·mol⁻¹·s⁻¹ by pulse radiolysis. The results show that the BDE-3 concentration strongly affects the decomposition ratio in alkaline solution, and that the reduction system has a higher BDE-3 decomposition rate than the oxidation system. This indicates that BDE-3 was reduced by effectively capturing e⁻sol in the radiolytic process. (authors)
Parallel processing for pitch splitting decomposition
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
Thermal plasma decomposition of fluorinated greenhouse gases
Energy Technology Data Exchange (ETDEWEB)
Choi, Soo Seok; Watanabe, Takayuki [Tokyo Institute of Technology, Yokohama (Japan); Park, Dong Wha [Inha University, Incheon (Korea, Republic of)
2012-02-15
Fluorinated compounds mainly used in the semiconductor industry are potent greenhouse gases. Recently, thermal plasma gas scrubbers have been gradually replacing conventional burn-wet type gas scrubbers, which are based on the combustion of fossil fuels, because high conversion efficiency and control of byproduct generation are achievable in chemically reactive, high-temperature thermal plasma. Chemical equilibrium composition at high temperature and numerical analysis of the complex thermal flow in the thermal plasma decomposition system are used to predict the process of thermal decomposition of fluorinated gas. In order to increase the economic feasibility of the thermal plasma decomposition process, increasing the thermal efficiency of the plasma torch and enhancing gas mixing between the thermal plasma jet and the waste gas are discussed. In addition, novel thermal plasma systems to be applied in thermal plasma gas treatment are introduced in the present paper.
Hydrogen peroxide decomposition kinetics in aquaculture water
DEFF Research Database (Denmark)
Arvin, Erik; Pedersen, Lars-Flemming
2015-01-01
Hydrogen peroxide (HP) is used in aquaculture systems where preventive or curative water treatments occasionally are required. Use of chemical agents can be challenging in recirculating aquaculture systems (RAS) due to extended water retention time and because the agents must not damage the fish reared or the nitrifying bacteria in the biofilters at concentrations required to eliminate pathogens. This calls for quantitative insight into the fate of the disinfectant residuals during water treatment. This paper presents a kinetic model that describes the HP decomposition in aquaculture water. The model assumes that the enzyme decay is controlled by an inactivation stoichiometry related to the HP decomposition. In order to make the model easily applicable, it is furthermore assumed that the COD is a proxy of the active biomass concentration of the water.
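The model structure described in the abstract can be sketched numerically; only the assumptions (HP removal proportional to HP and active biomass, enzyme inactivation tied to HP turnover) come from the abstract, while the rate constants below are invented for illustration, not the paper's calibrated values.

```python
def hp_decay(hp0, e0, k=0.05, sigma=0.02, dt=0.01, t_end=60.0):
    """Euler integration of a minimal HP-decomposition model.

    HP is removed at a rate proportional to both HP and active
    enzyme/biomass E, and E itself decays in proportion to the HP
    turned over (an inactivation stoichiometry sigma). Rate constants
    here are hypothetical placeholders.
    """
    hp, e, t = hp0, e0, 0.0
    while t < t_end:
        rate = k * e * hp          # HP decomposition rate
        hp -= dt * rate
        e -= dt * sigma * rate     # enzyme inactivation
        t += dt
    return hp, e

# Start with 15 mg/L HP and a normalized enzyme activity of 1.
hp_final, e_final = hp_decay(hp0=15.0, e0=1.0)
```

Both state variables decay monotonically, and the enzyme loss is bounded by sigma times the total HP decomposed, mirroring the stoichiometric coupling the abstract describes.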
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming
2013-01-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c+1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
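The relative-concavity idea can be sketched in a few lines: normalising a cut's concavity by the size of the feature it resolves favours small-but-important features over shallow dents on large components. The cut names and numbers below are illustrative, not taken from the paper.

```python
def rank_cuts(cuts):
    """Order candidate cuts by *relative* concavity.

    Each cut is (name, concavity, feature_size): `concavity` is the
    absolute concavity the cut would resolve and `feature_size` the
    extent of the feature it belongs to. Normalising by feature size
    favours a shallow notch on a tiny toe over a deeper dent on a
    large torso -- the intuition behind FACD's cut scoring.
    """
    return sorted(cuts, key=lambda c: c[1] / c[2], reverse=True)

cuts = [
    ("torso_dent", 0.30, 10.0),   # deep in absolute terms, huge feature
    ("toe_notch",  0.08, 0.5),    # shallow, but dominant for a tiny toe
]
best = rank_cuts(cuts)[0][0]
```

Under absolute concavity the torso dent would win (0.30 > 0.08); under relative concavity the toe notch wins (0.16 > 0.03).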
El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.
2018-03-01
Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
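The core embedding step, an SVD applied in a wavelet-transform domain, can be sketched as follows. This is a generic non-blind SVD watermark on a one-level Haar LL subband, not the paper's full homomorphic scheme; the frame, mark, and strength `alpha` are all illustrative.

```python
import numpy as np

def haar_ll(img):
    """LL subband of a one-level Haar DWT: 2x2 block averages."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def embed(ll, mark, alpha=0.05):
    """Additively embed `mark` into the singular values of the LL band."""
    u, s, vt = np.linalg.svd(ll)
    s_marked = s + alpha * mark
    return u @ np.diag(s_marked) @ vt, s

def extract(ll_marked, s_orig, alpha=0.05):
    """Recover the mark from the watermarked LL band (non-blind)."""
    s_marked = np.linalg.svd(ll_marked, compute_uv=False)
    return (s_marked - s_orig) / alpha

rng = np.random.default_rng(0)
frame = rng.random((8, 8))            # stand-in for a video frame
ll = haar_ll(frame)                   # 4x4 LL subband
mark = np.array([0.5, 0.25, 0.125, 0.0625])
ll_w, s_orig = embed(ll, mark)
recovered = extract(ll_w, s_orig)
```

Because singular values are perturbed rather than pixels, the visible distortion stays small for small `alpha`, which is the usual rationale for SVD-domain watermarking.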
Separable decompositions of bipartite mixed states
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.
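The correlation matrix central to the scheme can be illustrated for the simplest (product-state) case, where it factorises into the outer product of the two local Bloch vectors. This is a worked special case for orientation, not the paper's general decomposition algorithm.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def bloch(rho):
    """Bloch vector of a single-qubit density matrix."""
    return np.real([np.trace(rho @ s) for s in paulis])

def qubit(r):
    """Single-qubit density matrix with Bloch vector r."""
    return 0.5 * (np.eye(2) + sum(c * s for c, s in zip(r, paulis)))

def correlation_matrix(rho_ab):
    """T_ij = Tr(rho (sigma_i x sigma_j)) for a two-qubit state."""
    return np.real([[np.trace(rho_ab @ np.kron(si, sj))
                     for sj in paulis] for si in paulis])

# For a product state rho_A x rho_B the correlation matrix is the
# rank-1 outer product of the local Bloch vectors a and b.
a, b = [0.3, 0.0, 0.4], [0.0, 0.5, 0.1]
rho = np.kron(qubit(a), qubit(b))
T = correlation_matrix(rho)
```

A separable mixture of several such products gives a sum of rank-1 terms, which is the structure the decomposition scheme exploits.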
Two Notes on Discrimination and Decomposition
DEFF Research Database (Denmark)
Nielsen, Helena Skyt
1998-01-01
1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables. The contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
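The standard two-fold Oaxaca-Blinder decomposition that note 1 builds on can be sketched with synthetic data (all numbers invented). The reference-group sensitivity discussed above corresponds to the choice of which group's coefficients price the endowment gap.

```python
import numpy as np

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold Oaxaca-Blinder decomposition of a mean outcome gap.

    Fits OLS within each group (columns of X include a constant) and
    splits mean(y_a) - mean(y_b) into an 'explained' part (endowment
    differences priced at group-b coefficients) and an 'unexplained'
    part (coefficient differences evaluated at group-a means).
    """
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b
    unexplained = xbar_a @ (beta_a - beta_b)
    return explained, unexplained

rng = np.random.default_rng(1)
n = 200
X_a = np.column_stack([np.ones(n), rng.normal(12, 2, n)])  # schooling
X_b = np.column_stack([np.ones(n), rng.normal(10, 2, n)])
y_a = X_a @ [1.0, 0.08] + rng.normal(0, 0.1, n)            # log wages
y_b = X_b @ [0.8, 0.07] + rng.normal(0, 0.1, n)
explained, unexplained = oaxaca_blinder(X_a, y_a, X_b, y_b)
gap = y_a.mean() - y_b.mean()
```

Because each OLS fit includes a constant, the two parts sum exactly to the raw mean gap; switching the reference group changes the split, which is the non-robustness the note addresses for indicator variables.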
Gamma ray induced decomposition of lanthanide nitrates
International Nuclear Information System (INIS)
Joshi, N.G.; Garg, A.N.
1992-01-01
Gamma ray induced decomposition of the lanthanide nitrates, Ln(NO3)3·xH2O where Ln = La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Tm and Yb, has been studied at different absorbed doses up to 600 kGy. G(NO2-) values depend on the absorbed dose and the nature of the outer cation. It has been observed that those lanthanides which exhibit variable valency (Ce and Eu) show lower G-values. An attempt has been made to correlate thermal and radiolytic decomposition processes. (author). 20 refs., 3 figs., 1 tab
Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies
Energy Technology Data Exchange (ETDEWEB)
Barnes, M.J.
1998-12-07
The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentration of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single-effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB.
Multiresolution signal decomposition transforms, subbands, and wavelets
Akansu, Ali N; Haddad, Paul R
2001-01-01
The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course
Basis of the biological decomposition of xenobiotica
International Nuclear Information System (INIS)
Mueller, R. von
1993-01-01
The ability of micro-organisms to decompose different molecules and to use them as a source of carbon, nitrogen, sulphur or energy is the basis for all biological processes for cleaning up contaminated soil. Therefore, the knowledge of these decomposition processes is an important precondition for judging which contamination can be treated biologically at all and which materials can be decomposed biologically. The decomposition schemes of the most important harmful material classes (aliphatic, aromatic and chlorinated hydrocarbons) are introduced and the consequences which arise for the practical application in biological cleaning up of contaminated soils are discussed. (orig.) [de
An investigation on thermal decomposition of DNTF-CMDB propellants
Energy Technology Data Exchange (ETDEWEB)
Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)
2007-12-15
The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
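The Kissinger method mentioned above extracts kinetic parameters by fitting ln(beta/Tp^2) against 1/Tp, whose slope is -Ea/R. A sketch on synthetic data (peak temperatures fabricated from a known activation energy, not the propellant measurements):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(betas, peak_temps):
    """Activation energy from a Kissinger plot.

    Fits ln(beta / Tp^2) against 1/Tp; the slope is -Ea/R. Inputs are
    heating rates `betas` and DSC peak temperatures `peak_temps` (K).
    """
    x = 1.0 / np.asarray(peak_temps)
    y = np.log(np.asarray(betas) / np.asarray(peak_temps) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R

# Fabricate peak temperatures consistent with Ea = 150 kJ/mol so the
# fit can be checked against a known answer (C is an arbitrary offset).
Ea_true, C = 150e3, 30.0
Tp = np.array([480.0, 490.0, 500.0, 510.0])
betas = Tp ** 2 * np.exp(C - Ea_true / (R * Tp))
Ea_est = kissinger_ea(betas, Tp)
```

On real DSC data the peak temperatures at several heating rates replace the fabricated values, and the fit quality indicates how well a single-step kinetic model holds.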
Magnetite Dissolution Performance of HYBRID-II Decontamination Process
International Nuclear Information System (INIS)
Kim, Seonbyeong; Lee, Woosung; Won, Huijun; Moon, Jeikwon; Choi, Wangkyu
2014-01-01
In this study, we conducted the magnetite dissolution performance test of HYBRID-II (Hydrazine Based Reductive metal Ion Decontamination with sulfuric acid) as a part of decontamination process development. Decontamination performance of the HYBRID process was successfully tested, with acceptable decontamination factors (DF), in the previous study. While follow-up studies, such as the decomposition of the post-decontamination HYBRID solution and corrosion compatibility with the substrate metals of the target reactor coolant system, have continued, we also sought an alternate version of the HYBRID process suitable especially for decommissioning. Inspired by the relationship between the radius of the reacting ion and its reactivity, we replaced the nitrate ion in HYBRID with the bigger sulfate ion to accommodate the dissolution reaction and named the result the HYBRID-II process. As a preliminary step, we tested the magnetite dissolution performance of the developing HYBRID-II process and compared the results with those of the HYBRID process. The HYBRID process developed previously is known to have acceptable decontamination performance, but the relatively large volume of secondary waste induced by the anion exchange resin used to treat the nitrate ion is one of the problems in making the HYBRID process applicable. Therefore we devised the alternative HYBRID-II process using sulfuric acid and tested its dissolution of magnetite under numerous conditions. From the results shown in this study, we can conclude that the HYBRID-II process improves the decontamination performance and potentially reduces the volume of secondary waste. Rigorous tests with metal oxide coupons obtained from a reactor coolant system will follow to prove the robustness of the HYBRID-II process in the future
DEFF Research Database (Denmark)
Against the background of increasing qualification needs there is a growing awareness of the challenge to widen participation in processes of skill formation and competence development. At the same time, the issue of permeability between vocational education and training (VET) and general education has turned out to be a major focus of European education and training policies and certainly is a crucial principle underlying the European Qualifications Framework (EQF). In this context, «hybrid qualifications» (HQ) may be seen as an interesting approach to tackle these challenges as they serve «two...
Handschuh, Robert F. (Inventor); Roberts, Gary D. (Inventor)
2016-01-01
A hybrid gear consisting of metallic outer rim with gear teeth and metallic hub in combination with a composite lay up between the shaft interface (hub) and gear tooth rim is described. The composite lay-up lightens the gear member while having similar torque carrying capability and it attenuates the impact loading driven noise/vibration that is typical in gear systems. The gear has the same operational capability with respect to shaft speed, torque, and temperature as an all-metallic gear as used in aerospace gear design.
An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition
Mousavi, S. M.; Langston, C. A.
2016-12-01
Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening a time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross validation approach that does not rely on any prior knowledge about the noise level. Efficiency of thresholding has been improved by adding a pre-processing step based on higher order statistics and a post-processing step based on adaptive hard-thresholding. In doing so, both accuracy and speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can kill either white or colored noise and keep the signal, or kill the signal and keep the noise. Hence, it can be used in either normal denoising applications or in ambient noise studies. Application of the proposed method on synthetic and real seismic data shows the effectiveness of the method for denoising/designaling of local microseismic and ocean bottom seismic data. References: Mousavi, S.M., C. A. Langston., and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform. Geophysics. 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding. Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics., doi: http
Network topology descriptions in hybrid networks
Grosso, P.; Brown, A.; Cedeyn, A.; Dijkstra, F.; van der Ham, J.; Patil, A.; Primet, P.; Swany, M.; Zurawski, J.
2010-01-01
The NML-WG goal is to define a schema for describing topologies of hybrid networks. This schema is in first instance intended for: • lightpath provisioning applications to exchange topology information intra and inter domain; • reporting performance metrics. This document constitutes Deliverable 1
Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing
2017-12-26
The sound signal of a ship obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of the ship, so research into denoising algorithms and their applications is of great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and times of decomposition by VMD. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.
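The screening-and-reconstruction step of the scheme can be sketched as follows. The VMD step itself is omitted (the `imfs` argument stands in for its output), and the threshold value is illustrative, so this is only the CC-based mode selection, not the full 2VMD algorithm.

```python
import numpy as np

def cc_screen(signal, imfs, threshold=0.5):
    """One screening pass of CC-based denoising.

    Computes the Pearson correlation coefficient between each mode and
    the observed signal, drops modes below `threshold` (treated as
    noise), and reconstructs the signal from the remaining modes.
    """
    keep = [imf for imf in imfs
            if abs(np.corrcoef(signal, imf)[0, 1]) >= threshold]
    return np.sum(keep, axis=0)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1000, endpoint=False)
tone = np.sin(2 * np.pi * 5 * t)          # "signal" mode
noise = 0.3 * rng.standard_normal(1000)   # "noise" mode
observed = tone + noise
denoised = cc_screen(observed, [tone, noise])
```

Secondary denoising in the paper's sense would decompose the reconstruction again and repeat the same screening.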
Directory of Open Access Journals (Sweden)
Hong-Juan Li
2013-04-01
Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
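The decompose-model-recombine structure of such hybrid forecasters can be sketched with stand-ins: a moving average replaces the low-frequency EMD modes, the residual replaces the high-frequency modes, and a least-squares AR fit replaces SVR. All components here are simplified substitutes, not the paper's model.

```python
import numpy as np

def ar_forecast(x, p=2):
    """One-step-ahead forecast from an AR(p) fit with intercept."""
    X = np.column_stack([np.ones(len(x) - p)]
                        + [x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate([[1.0], x[-p:]]) @ coef

def decompose_forecast(series, window=24):
    """Forecast-by-components skeleton of the hybrid scheme.

    Splits the series into a smooth part (trailing moving average, a
    cheap stand-in for low-frequency modes) and a residual (stand-in
    for high-frequency modes); each part gets its own predictor and
    the component forecasts are summed.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(series, kernel, mode="valid")
    resid = series[window - 1:] - smooth
    return ar_forecast(smooth) + ar_forecast(resid)

t = np.arange(400)
load = 100 + 10 * np.sin(2 * np.pi * t / 24)   # synthetic daily cycle
pred = decompose_forecast(load)
```

The design point is that each component is simpler to model than the raw series; in the paper the per-component predictors are SVR and AR rather than the single AR fit used here.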
DEFF Research Database (Denmark)
Braüner, Torben
2011-01-01
Intuitionistic hybrid logic is hybrid modal logic over an intuitionistic logic basis instead of a classical logical basis. In this short paper we introduce intuitionistic hybrid logic and we give a survey of work in the area.
Decomposition of jellyfish carrion in situ
DEFF Research Database (Denmark)
Chelsky, Ariella; Pitt, Kylie A.; Ferguson, Angus J.P.
2016-01-01
Jellyfish often form blooms that persist for weeks to months before they collapse en masse, resulting in the sudden release of large amounts of organic matter to the environment. This study investigated the biogeochemical and ecological effects of the decomposition of jellyfish in a shallow coast...
Compactly supported frames for decomposition spaces
DEFF Research Database (Denmark)
Nielsen, Morten; Rasmussen, Kenneth Niemann
2012-01-01
In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...
Thermal Decomposition of Aluminium Chloride Hexahydrate
Czech Academy of Sciences Publication Activity Database
Hartman, Miloslav; Trnka, Otakar; Šolcová, Olga
2005-01-01
Roč. 44, č. 17 (2005), s. 6591-6598 ISSN 0888-5885 R&D Projects: GA ČR(CZ) GA203/02/0002 Institutional research plan: CEZ:AV0Z40720504 Keywords : aluminum chloride hexahydrate * thermal decomposition * reaction kinetics Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 1.504, year: 2005
Preparation, Structure Characterization and Thermal Decomposition ...
African Journals Online (AJOL)
NJD
Decomposition Process of the Dysprosium(III) m-Methylbenzoate ... A dinuclear complex [Dy(m-MBA)3phen]2·H2O was prepared by the reaction of DyCl3·6H2O, m-methylbenzoic acid and ... heating rate of 10 °C min-1 are illustrated in Fig. 4.
A decomposition of pairwise continuity via ideals
Directory of Open Access Journals (Sweden)
Mahes Wari
2016-02-01
In this paper, we introduce and study the notions of (i,j)-regular-ℐ-closed sets, (i,j)-Aℐ-sets, (i,j)-ℐ-locally closed sets, p-Aℐ-continuous functions and p-ℐ-LC-continuous functions in ideal bitopological spaces and investigate some of their properties. Also, a new decomposition of pairwise continuity is obtained using these sets.
Nested grids ILU-decomposition (NGILU)
Ploeg, A. van der; Botta, E.F.F.; Wubs, F.W.
1996-01-01
A preconditioning technique is described which shows, in many cases, grid-independent convergence. This technique only requires an ordering of the unknowns based on the different levels of multigrid, and an incomplete LU-decomposition based on a drop tolerance. The method is demonstrated on a
A Martingale Decomposition of Discrete Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard
We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...
Triboluminescence and associated decomposition of solid methanol
International Nuclear Information System (INIS)
Trout, G.J.; Moore, D.E.; Hawke, J.G.
1975-01-01
The decomposition is initiated by the cooling of solid methanol through the β → α transition at 157.8 K, producing the gases hydrogen, carbon monoxide, and methane. The passage through this lambda transition causes the breakup of large crystals of β-methanol into crystallites of α-methanol and is accompanied by light emission as well as decomposition. This triboluminescence is accompanied by, and apparently produced by, electrical discharges through methanol vapor in the vicinity of the solid. The potential differences needed to produce the electrical breakdown of the methanol vapor apparently arise from the disruption of the long hydrogen-bonded chains of methanol molecules present in crystalline methanol. Charge separation following crystal deformation is a characteristic of substances which exhibit gas discharge triboluminescence; solid methanol has been found to emit such luminescence when mechanically deformed in the absence of the β → α transition. The decomposition products are not produced directly by the breaking up of the solid methanol but from the vapor-phase methanol by the electrical discharges. That gas-phase decomposition does occur was confirmed by observing that the vapors of C2H5OH, CH3OD, and CD3OD decompose on being admitted to a vessel containing methanol undergoing the β → α phase transition. (U.S.)
On Orthogonal Decomposition of a Sobolev Space
Lakew, Dejenie A.
2016-01-01
The theme of this short article is to investigate an orthogonal decomposition of a Sobolev space and look at some properties of the inner product therein and the distance defined from the inner product. We also determine the dimension of the orthogonal difference space and show the expansion of spaces as their regularity increases.
TP89 - SIRZ Decomposition Spectral Estimation
Energy Technology Data Exchange (ETDEWEB)
Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.
Methodologies in forensic and decomposition microbiology
Culturable microorganisms represent only 0.1-1% of the total microbial diversity of the biosphere. This has severely restricted the ability of scientists to study the microbial biodiversity associated with the decomposition of ephemeral resources in the past. Innovations in technology are bringing...
Organic matter decomposition in simulated aquaculture ponds
Torres Beristain, B.
2005-01-01
Different kinds of organic and inorganic compounds (e.g. formulated food, manures, fertilizers) are added to aquaculture ponds to increase fish production. However, a large part of these inputs are not utilized by the fish and are decomposed inside the pond. The microbiological decomposition of the
Wood decomposition as influenced by invertebrates
Michael D. Ulyshen
2014-01-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial...
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...
Linear, Constant-rounds Bit-decomposition
DEFF Research Database (Denmark)
Reistad, Tord; Toft, Tomas
2010-01-01
When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ M...
Decomposition of oxalate precipitates by photochemical reaction
International Nuclear Information System (INIS)
Jae-Hyung Yoo; Eung-Ho Kim
1999-01-01
A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work was studied as a part of the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, some experimental work on the photochemical decomposition of oxalate was carried out to prove its feasibility as a step of the partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed in advance in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, which was chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found to be 0.003 mole/hr under the conditions of 0.5 M HNO3 and room temperature when a mercury lamp was used as the light source. (author)
Detailed Chemical Kinetic Modeling of Hydrazine Decomposition
Meagher, Nancy E.; Bates, Kami R.
2000-01-01
The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.
Decomposition approaches to integration without a measure
Czech Academy of Sciences Publication Activity Database
Greco, S.; Mesiar, Radko; Rindone, F.; Sipeky, L.
2016-01-01
Roč. 287, č. 1 (2016), s. 37-47 ISSN 0165-0114 Institutional support: RVO:67985556 Keywords : Choquet integral * Decision making * Decomposition integral Subject RIV: BA - General Mathematics Impact factor: 2.718, year: 2016 http://library.utia.cas.cz/separaty/2016/E/mesiar-0457408.pdf
Radiolytic decomposition of dioxins in liquid wastes
International Nuclear Information System (INIS)
Zhao Changli; Taguchi, M.; Hirota, K.; Takigami, M.; Kojima, T.
2006-01-01
Dioxins, including polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs), are among the most toxic persistent organic pollutants. These chemicals have widely contaminated the air, water, and soil. They accumulate in the living body through food chains, leading to a serious public health hazard. In the present study, the radiolytic decomposition of dioxins has been investigated in liquid wastes, including organic waste and waste-water. Dioxin-containing organic wastes are commonly generated in nonane or toluene. However, it was found that high radiation doses are required to completely decompose dioxins in these two solvents. The decomposition was more efficient in ethanol than in nonane or toluene. The addition of ethanol to toluene or nonane could achieve >90% decomposition of dioxins at a dose of 100 kGy. Thus, dioxin-containing organic wastes can be treated as regular organic wastes after the addition of ethanol and subsequent γ-ray irradiation. On the other hand, radiolytic decomposition of dioxins occurred more easily in pure water than in waste-water, because the reactive species are largely scavenged by the dominant organic materials in waste-water. Dechlorination was not a major reaction pathway for the radiolysis of dioxins in water. In addition, the radiolytic mechanism and dechlorination pathways in liquid wastes are also discussed. (authors)
Strongly étale difference algebras and Babbitt's decomposition
Tomašić, Ivan; Wibmer, Michael
2015-01-01
We introduce a class of strongly étale difference algebras, whose role in the study of difference equations is analogous to the role of étale algebras in the study of algebraic equations. We deduce an improved version of Babbitt's decomposition theorem and we present applications to difference algebraic groups and the compatibility problem.
Thermal decomposition of barium valerate in argon
DEFF Research Database (Denmark)
Torres, P.; Norby, Poul; Grivel, Jean-Claude
2015-01-01
The thermal decomposition of barium valerate (Ba(C4H9CO2)2/Ba-pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 degrees C and 280...
A framework for bootstrapping morphological decomposition
CSIR Research Space (South Africa)
Joubert, LJ
2004-11-01
Full Text Available The need for a bootstrapping approach to the morphological decomposition of words in agglutinative languages such as isiZulu is motivated, and the complexities of such an approach are described. The authors then introduce a generic framework which...
A Systolic Architecture for Singular Value Decomposition,
1983-01-01
Presented at the 1st International Colloquium on Vector and Parallel Computing in Scientific Applications, Paris, March 1983. Contract N00014-82-K-0703. [The remainder of the record is OCR residue; the recoverable fragments cite a private communication from Gene Golub and G. H. Golub and F. T. Luk, "Singular Value Decomposition".]
Direct observation of nanowire growth and decomposition
DEFF Research Database (Denmark)
Rackauskas, Simas; Shandakov, Sergey D; Jiang, Hua
2017-01-01
knowledge, so far this has been only postulated, but never observed at the atomic level. By means of in situ environmental transmission electron microscopy we monitored and examined the atomic-layer transformation under the conditions of crystal growth and decomposition, using CuO nanowires selected...
Nash-Williams’ cycle-decomposition theorem
DEFF Research Database (Denmark)
Thomassen, Carsten
2016-01-01
We give an elementary proof of the theorem of Nash-Williams that a graph has an edge-decomposition into cycles if and only if it does not contain an odd cut. We also prove that every bridgeless graph has a collection of cycles covering each edge at least once and at most 7 times. The two results...
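For finite graphs, the no-odd-cut condition reduces to every vertex having even degree (Veblen's theorem), and a decomposition can then be peeled off greedily. The sketch below illustrates only that finite special case; it is not Thomassen's proof, which handles the infinite setting:

```python
from collections import defaultdict

def cycle_decomposition(edges):
    """Edge-decomposition of a finite multigraph into cycles.

    A decomposition exists iff every vertex has even degree (Veblen's
    theorem, the finite case of the no-odd-cut condition).  Returns a list
    of cycles (as closed vertex lists) covering each edge exactly once,
    or None if some vertex has odd degree.
    """
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    if any(len(nb) % 2 for nb in adj.values()):
        return None

    used = [False] * len(edges)

    def peel_cycle(start):
        # Walk along unused edges until a vertex repeats; the closed part
        # of the walk is a cycle, whose edges are then marked as used.
        path, path_edges, pos = [start], [], {start: 0}
        v = start
        while True:
            w, i = next((w, i) for (w, i) in adj[v]
                        if not used[i] and i not in path_edges)
            if w in pos:
                for e in path_edges[pos[w]:] + [i]:
                    used[e] = True
                return path[pos[w]:] + [w]
            path.append(w)
            path_edges.append(i)
            pos[w] = len(path) - 1
            v = w

    cycles = []
    while not all(used):
        start = next(u for u, nb in adj.items()
                     if any(not used[i] for _, i in nb))
        cycles.append(peel_cycle(start))
    return cycles

# a square and a triangle sharing vertex 0: all degrees even
cycles = cycle_decomposition([(0, 1), (1, 2), (2, 3), (3, 0),
                              (0, 4), (4, 5), (5, 0)])
print(cycles)  # two cycles covering all 7 edges
```

The even-degree invariant is what guarantees the walk inside `peel_cycle` can always continue until it closes up.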
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
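The coordination loop can be illustrated with a deliberately small stand-in: two scalar subsystems with quadratic local costs coupled by one equality constraint. All names and numbers below are invented for illustration and are not taken from the chapter:

```python
def dual_decomposition(r, b, alpha=0.4, iters=200):
    """Price coordination of subsystems coupled by sum(u) == b.

    Each subsystem i locally minimizes (u_i - r_i)^2 + lam * u_i, which has
    the closed form u_i = r_i - lam / 2; the coordinator then updates the
    price lam by a subgradient step on the coupling-constraint residual.
    A scalar stand-in for the per-subsystem MPC problems in the chapter.
    """
    lam = 0.0
    for _ in range(iters):
        u = [ri - lam / 2 for ri in r]        # local, parallelizable solves
        lam += alpha * (sum(u) - b)           # central price update
    return u, lam

u, lam = dual_decomposition([3.0, 1.0], b=2.0)
print(u)  # ≈ [2.0, 0.0], the constrained optimum
```

Only the scalar price is exchanged between coordinator and subsystems, which is exactly what makes the scheme attractive for distributed MPC.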
The Slice Algorithm For Irreducible Decomposition of Monomial Ideals
DEFF Research Database (Denmark)
Roune, Bjarke Hammersholt
2009-01-01
Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...
High Performance Polar Decomposition on Distributed Memory Systems
Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E.
2016-01-01
The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former
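Both computational routes mentioned above fit in a few lines of NumPy. The Newton iteration below is a simpler classical stand-in for QDWH (it has the same fixed point but lacks QDWH's dynamic weighting and conditioning advantages) and assumes a square nonsingular A:

```python
import numpy as np

def polar_svd(A):
    # Polar decomposition A = U H via the SVD: A = W S V^T => U = W V^T, H = V S V^T
    W, s, Vt = np.linalg.svd(A)
    U = W @ Vt
    H = Vt.T @ (s[:, None] * Vt)
    return U, H

def polar_newton(A, tol=1e-12, maxit=100):
    # Newton iteration X <- (X + X^{-T}) / 2, converging to the orthogonal
    # polar factor of a nonsingular A; H is recovered as U^T A.
    X = A.copy()
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X).T)
        done = np.linalg.norm(Xn - X) < tol
        X = Xn
        if done:
            break
    return X, X.T @ A

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
U1, H1 = polar_svd(A)
U2, H2 = polar_newton(A)
print(np.allclose(U1, U2))  # True
```

The SVD route is robust for any A; iterative schemes like the one above (and QDWH in particular) are preferred at scale because they use only matrix multiplications and inversions that parallelize well.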
Thermal decomposition of γ-irradiated lead nitrate
International Nuclear Information System (INIS)
Nair, S.M.K.; Kumar, T.S.S.
1990-01-01
The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs
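The Avrami-Erofeev kinetics referred to above, alpha(t) = 1 - exp(-(k t)^n), are routinely fitted through the linearisation ln(-ln(1 - alpha)) = n ln t + n ln k. The sketch below does this on synthetic data; k and n are invented values, not results from the study:

```python
import numpy as np

# Avrami–Erofeev model: alpha(t) = 1 - exp(-(k t)^n).
# Plotting ln(-ln(1 - alpha)) against ln(t) gives a line with slope n and
# intercept n*ln(k), the standard fit in thermogravimetric kinetics work.
k_true, n_true = 0.05, 2.0            # synthetic "true" parameters
t = np.linspace(1, 60, 40)
alpha = 1 - np.exp(-(k_true * t) ** n_true)

slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1 - alpha)), 1)
n_fit = slope
k_fit = np.exp(intercept / slope)
print(n_fit, k_fit)
```

On real gas-evolution data the same fit is applied separately to the acceleratory and decay stages mentioned in the abstract.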
Esposito, A.; Polosa, A.D.
2016-01-01
We propose a new interpretation of the neutral and charged X, Z exotic hadron resonances. Hybridized tetraquarks are neither purely compact tetraquark states nor bound or loosely bound molecules. The latter would require a negative or zero binding energy, whose counterpart in h-tetraquarks is a positive quantity. The formation mechanism of this new class of hadrons is inspired by that of Feshbach metastable states in atomic physics. The recent claim of an exotic resonance in the Bs π± channel by the D0 collaboration and the negative result presented subsequently by the LHCb collaboration are understood in this scheme, together with a considerable portion of the available data on X, Z particles. Considerations on a state with the same quantum numbers as the X(5568) are also made.
Collective Identity Formation in Hybrid Organizations
DEFF Research Database (Denmark)
Boulongne, Romain; Boxenbaum, Eva
The present article examines the process of collective identity formation in the context of hybrid organizing. Empirically, we investigate hybrid organizing in a collaborative structure at the interface of two heterogeneous organizations in the domain of new renewable energies. We draw...... on the literature on knowledge sharing across organizational boundaries, particularly the notions of transfer, translation and transformation, to examine in real time how knowledge sharing in a hybrid setting contributes (or not) to the emergence of a new collective identity at the interface of two heterogeneous...... organizations. Our findings point to two factors that limit knowledge sharing, and hence new collective identity formation, in a hybrid space: 1) ambiguous or multiple organizational roles and 2) strong identities of the collaborating organizations. These findings contribute to illuminating the initial...
Decompositional equivalence: A fundamental symmetry underlying quantum theory
Fields, Chris
2014-01-01
Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?
Climate fails to predict wood decomposition at regional scales
Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King
2014-01-01
Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...
In situ XAS of the solvothermal decomposition of dithiocarbamate complexes
Islam, H.-U.; Roffey, A.; Hollingsworth, N.; Catlow, R.; Wolthers, M.; de Leeuw, N.H.; Bras, W.; Sankar, G.; Hogarth, G.
2012-01-01
An in situ XAS study of the solvothermal decomposition of iron and nickel dithiocarbamate complexes was performed in order to gain understanding of the decomposition mechanisms. This work has given insight into the steps involved in the decomposition, showing variation in reaction pathways between
An efficient and accurate decomposition of the Fermi operator.
Ceriotti, Michele; Kühne, Thomas D; Parrinello, Michele
2008-07-14
We present a method to compute the Fermi function of the Hamiltonian for a system of independent fermions based on an exact decomposition of the grand-canonical potential. This scheme does not rely on the localization of the orbitals and is insensitive to ill-conditioned Hamiltonians. It lends itself naturally to linear scaling as soon as the sparsity of the system's density matrix is exploited. By using a combination of polynomial expansion and Newton-like iterative techniques, an arbitrarily large number of terms can be employed in the expansion, overcoming some of the difficulties encountered in previous papers. Moreover, this hybrid approach allows us to obtain a very favorable scaling of the computational cost with increasing inverse temperature, which makes the method competitive with other Fermi operator expansion techniques. After performing an in-depth theoretical analysis of computational cost and accuracy, we test our approach on the density functional theory Hamiltonian for the metallic phase of the LiAl alloy.
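For orientation, the object being expanded is simply a matrix function of the Hamiltonian. A dense O(N^3) reference evaluation — the baseline that expansion techniques aim to beat, not the paper's linear-scaling scheme — can be written as:

```python
import numpy as np

def fermi_operator(H, mu, beta):
    """Dense reference evaluation of the Fermi operator
    f(H) = 1 / (exp(beta * (H - mu)) + 1).

    Diagonalizes H, applies the Fermi function to the eigenvalues, and
    transforms back.  O(N^3) and exact; expansion methods approximate this
    without full diagonalization to reach linear scaling.
    """
    eps, V = np.linalg.eigh(H)
    occ = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)
    return (V * occ) @ V.T

# toy tight-binding chain of 8 sites (illustrative, not the LiAl system)
N = 8
H = -np.eye(N, k=1) - np.eye(N, k=-1)
P = fermi_operator(H, mu=0.0, beta=50.0)
print(np.linalg.norm(P @ P - P))  # near zero: P is almost idempotent at low T
```

At low temperature the resulting density matrix approaches a projector, which is why its trace counts the occupied states; the inverse-temperature scaling of the expansion length is exactly the cost issue the abstract addresses.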
Advanced Oxidation: Oxalate Decomposition Testing With Ozone
International Nuclear Information System (INIS)
Ketusky, E.; Subramanian, K.
2012-01-01
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing
Mondal, Ashok; Bhattacharya, P S; Saha, Goutam
2011-01-01
During the recording of lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with it. This obscures the features of the lung sound signals and creates confusion about any pathological states of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals like heart sound, environmental noise etc. and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both against the native PT and against the PT combined with a separate denoising process.
Imperiale, Alexandre; Chatillon, Sylvain; Darmon, Michel; Leymarie, Nicolas; Demaldent, Edouard
2018-04-01
The high-frequency models gathered in the CIVA software allow fast computations and provide satisfactory quantitative predictions in a wide range of situations. However, the domain of validity of these models is limited, since they do not accurately predict the ultrasound response in configurations involving subwavelength complex phenomena. In addition, when modelling the inspection of backwall-breaking defects, an important challenge remains to capture the propagation of the creeping waves that are generated at the critical angle. Hybrid models combining numerical and asymptotic methods have already been shown to be an effective strategy to overcome these limitations in 2D [1]. However, 3D simulations remain a crucial issue for industrial applications because of the computational cost of the numerical solver. A dedicated three-dimensional high-order finite element model combined with a domain decomposition method has been recently proposed to tackle 3D limitations [2]. In this communication, we will focus on the specific case of planar backwall-breaking defects, with an adapted coupling strategy in order to efficiently model the propagation of creeping waves. Numerical and experimental validations will be proposed on various configurations.
Continuity controlled Hybrid Automata
Bergstra, J.A.; Middelburg, C.A.
We investigate the connections between the process algebra for hybrid systems of Bergstra and Middelburg and the formalism of hybrid automata of Henzinger et al. We give interpretations of hybrid automata in the process algebra for hybrid systems and compare them with the standard interpretation
Continuity Controlled Hybrid Automata
Bergstra, J.A.; Middelburg, C.A.
2004-01-01
We investigate the connections between the process algebra for hybrid systems of Bergstra and Middelburg and the formalism of hybrid automata of Henzinger et al. We give interpretations of hybrid automata in the process algebra for hybrid systems and compare them with the standard interpretation of
Continuity controlled hybrid automata
Bergstra, J.A.; Middelburg, C.A.
2006-01-01
We investigate the connections between the process algebra for hybrid systems of Bergstra and Middelburg and the formalism of hybrid automata of Henzinger et al. We give interpretations of hybrid automata in the process algebra for hybrid systems and compare them with the standard interpretation of
Self-decomposition of radiochemicals. Principles, control, observations and effects
International Nuclear Information System (INIS)
Evans, E.A.
1976-01-01
The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)
Pitfalls in VAR based return decompositions: A clarification
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...
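The mechanics at issue can be made concrete. With a VAR(1), z_{t+1} = A z_t + u_{t+1}, whose first state variable is the log return, discount-rate news is e1' rho*A (I - rho*A)^{-1} u, and the "cash-flow news" component is conventionally backed out as a residual — precisely the term whose labelling the abstract disputes. All numbers below are illustrative only:

```python
import numpy as np

# Campbell–Shiller style return-news decomposition from a VAR(1).
rho = 0.96                           # log-linearization discount constant
A = np.array([[0.1, 0.5],            # illustrative VAR companion matrix
              [0.0, 0.9]])
u = np.array([0.02, -0.01])          # one-period VAR innovation
e1 = np.array([1.0, 0.0])            # selector for the return equation

M = rho * A @ np.linalg.inv(np.eye(2) - rho * A)
dr_news = e1 @ M @ u                 # revision in expected future returns
unexpected_r = e1 @ u                # return innovation
cf_news = unexpected_r + dr_news     # residual "cash-flow" component
print(cf_news - dr_news)             # ≈ 0.02, the unexpected return
```

Because `cf_news` is defined as a residual, its economic label depends entirely on which variables the VAR includes — the point underlying the clarification above.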
Thermal decomposition of irradiated casein molecules
International Nuclear Information System (INIS)
Ali, M.A.; Elsayed, A.A.
1998-01-01
Non-isothermal studies were carried out using the derivatograph, where thermogravimetry (TG) and differential thermogravimetry (DTG) measurements were used to obtain the activation energies of the first and second reactions of casein (glyco-phospho-protein) decomposition before and after exposure to 1 Gy γ-rays and up to 40 x 10⁴ μGy fast neutrons. 252Cf was used as the source of fast neutrons, associated with γ-rays. A 137Cs source was used as a pure γ-source. The activation energies of the first and second reactions of casein decomposition were found to be smaller at 400 μGy than at lower and higher fast neutron doses. However, no change in activation energies was observed after γ-irradiation. It is concluded from the present study that destruction of casein molecules by low-level fast neutron doses may lead to changes in the shelf storage period of milk
Investigation into kinetics of decomposition of nitrates
International Nuclear Information System (INIS)
Belov, B.A.; Gorozhankin, Eh.V.; Efremov, V.N.; Sal'nikova, N.S.; Suris, A.L.
1985-01-01
Using the method of thermogravimetry, the decomposition of nitrates, in particular Cd(NO3)2·4H2O, La(NO3)2·6H2O, Sr(NO3)2, ZrO(NO3)2·2H2O and Y(NO3)3·6H2O, is studied in the 20-1000 deg C range. It is shown that gaseous pyrolysis products remaining in the material greatly hamper the heat transfer required for the decomposition, which reduces the reaction order. The effective activation energy of the process is in satisfactory agreement with the characteristic temperature of the last endotherm. Kinetic parameters are calculated by the minimization method using a computer
Mobility Modelling through Trajectory Decomposition and Prediction
Faghihi, Farbod
2017-01-01
The ubiquity of mobile devices with positioning sensors makes it possible to derive a user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors or due to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking a user's movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...
Thermal decomposition kinetics of ammonium uranyl carbonate
International Nuclear Information System (INIS)
Kim, E.H.; Park, J.J.; Park, J.H.; Chang, I.S.; Choi, C.S.; Kim, S.D.
1994-01-01
The thermal decomposition kinetics of AUC [ammonium uranyl carbonate, (NH4)4UO2(CO3)3] in an isothermal thermogravimetric (TG) reactor under N2 atmosphere has been determined. The kinetic data can be represented by the two-dimensional nucleation and growth model. The reaction rate increases and the activation energy decreases with increasing particle size and precipitation time, an effect which appears for particle sizes larger than 30 μm owing to mechano-chemical phenomena. (orig.)
Radiation decomposition of technetium-99m radiopharmaceuticals
International Nuclear Information System (INIS)
Billinghurst, M.W.; Rempel, S.; Westendorf, B.A.
1979-01-01
Technetium-99m radiopharmaceuticals are shown to be subject to autoradiation-induced decomposition, which results in increasing abundance of pertechnetate in the preparation. This autodecomposition is catalyzed by the presence of oxygen, although the removal of oxygen does not prevent its occurrence. The initial appearance of pertechnetate in the radiopharmaceutical is shown to be a function of the amount of radioactivity, the quantity of stannous ion used, and the ratio of 99mTc to total technetium in the preparation
Information decomposition method to analyze symbolical sequences
International Nuclear Information System (INIS)
Korotkov, E.V.; Korotkova, M.A.; Kudryashov, N.A.
2003-01-01
The information decomposition (ID) method for analyzing symbolical sequences is presented. This method allows us to reveal latent periodicity in any symbolical sequence. The ID method is shown to have advantages over the Fourier transformation, the wavelet transform and the dynamic programming method for revealing latent periodicity. Examples of latent periods for poetic texts, DNA sequences and amino acid sequences are presented. The possible origin of latent periodicity in different symbolical sequences is discussed
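A crude stand-in for the ID idea is to score each candidate period by the mutual information between a position's phase (index mod p) and the symbol found there, with a small-sample penalty so that multiples of the true period do not win by default. This is only an illustration of the phase/symbol statistic, not the ID method's actual significance measure:

```python
import math
from collections import Counter

def latent_period(seq, max_p=20):
    """Estimate a latent period of a symbolic sequence.

    Scores each candidate period p by the mutual information between
    phase (index mod p) and symbol, minus a Miller–Madow style penalty
    proportional to the number of table cells; a strong latent period
    gives a high penalized score even for inexact repeats.
    """
    n = len(seq)
    sym = Counter(seq)
    best_p, best_score = None, -math.inf
    for p in range(2, max_p + 1):
        joint = Counter((i % p, s) for i, s in enumerate(seq))
        phase = Counter(i % p for i in range(n))
        mi = sum(c / n * math.log((c / n) / ((phase[ph] / n) * (sym[s] / n)))
                 for (ph, s), c in joint.items())
        score = mi - (p - 1) * (len(sym) - 1) / (2 * n)
        if score > best_score:
            best_p, best_score = p, score
    return best_p

print(latent_period("abcabcabcabcabcabcabcabc"))  # 3
```

Without the penalty term, any multiple of the true period would score at least as high, since refining the phase partition can never decrease mutual information.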
Decomposition of monolithic web application to microservices
Zaymus, Mikulas
2017-01-01
Solteq Oyj has an internal Wellbeing project for massage reservations. The task of this thesis was to transform the monolithic architecture of this application to microservices. The thesis starts with a detailed comparison between microservices and monolithic application. It points out the benefits and disadvantages microservice architecture can bring to the project. Next, it describes the theory and possible strategies that can be used in the process of decomposition of an existing monoli...
Numerical CP Decomposition of Some Difficult Tensors
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Phan, A. H.; Cichocki, A.
2017-01-01
Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords : Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf
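For contrast with specialised solvers for difficult tensors, the plain alternating least squares (ALS) baseline for canonical polyadic decomposition of a 3-way tensor is short to write down. This is the textbook algorithm, not the record's method, which targets exactly the tensors where ALS struggles:

```python
import numpy as np

def unfold(T, mode):
    # mode-n matricization with numpy's C-order column convention
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # column-wise Kronecker product, shape (I*J, R)
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    """Canonical polyadic decomposition T ≈ sum_r A[:,r] ∘ B[:,r] ∘ C[:,r]
    by alternating least squares: each factor is the exact minimizer given
    the other two, via the Khatri-Rao normal equations."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# rank-1 sanity check on a synthetic tensor
a, b, c = [1.0, 2.0], [1.0, -1.0, 0.5], [2.0, 1.0]
T = np.einsum('i,j,k->ijk', a, b, c)
A, B, C = cp_als(T, rank=1)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(That - T))  # essentially zero
```

On bottleneck or swamp-prone tensors this loop stalls, which is the motivation for the damped Levenberg-Marquardt approach named in the keywords.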
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...
Nonconformity problem in 3D Grid decomposition
Czech Academy of Sciences Publication Activity Database
Kolcun, Alexej
2002-01-01
Roč. 10, č. 1 (2002), s. 249-253 ISSN 1213-6972. [International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002/10./. Plzeň, 04.02.2002-08.02.2002] R&D Projects: GA ČR GA105/99/1229; GA ČR GA105/01/1242 Institutional research plan: CEZ:AV0Z3086906 Keywords : structured mesh * decomposition * nonconformity Subject RIV: BA - General Mathematics
Randomized interpolative decomposition of separated representations
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
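The reduction to matrix interpolative decompositions can be illustrated with the deterministic pivoted-QR construction below; the paper additionally randomizes by projecting the tensor's terms first, a step omitted here:

```python
import numpy as np
from scipy.linalg import qr

def interp_decomp(A, k):
    """Rank-k interpolative decomposition A ≈ A[:, cols] @ P.

    cols indexes k "skeleton" columns of A, and the k-by-n coefficient
    matrix P carries an identity on those columns.  Built from a
    column-pivoted QR: with A[:, piv] = Q R, the non-skeleton columns are
    expressed through T = R11^{-1} R12.
    """
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    P = np.empty((k, A.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 6))  # rank 3
cols, P = interp_decomp(A, 3)
print(np.linalg.norm(A[:, cols] @ P - A))  # essentially zero for rank 3
```

Because the skeleton consists of actual columns of A, the same construction applied to a matrix of tensor-term evaluations selects actual terms of the CTD, which is what makes the ID suitable for separation-rank reduction.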
Decomposition and reduction of AUC in hydrogen
International Nuclear Information System (INIS)
Ge Qingren; Kang Shifang; Zhou Meng
1987-01-01
AUC (ammonium uranyl carbonate) conversion processes have been adopted extensively in the nuclear fuel cycle. The kinetics of these processes, however, has not yet been reported in detail in the published literature. In the present work, the decomposition kinetics of AUC in hydrogen has been determined by a non-isothermal method. DSC curves are solved by computer using the Ge Qingren method. The results show that the kinetics obeys the Avrami-Erofeev equation within 90% conversion. The apparent activation energy and pre-exponent are found to be 113.0 kJ/mol and 7.11 x 10¹¹ s⁻¹, respectively. The reduction kinetics of the AUC decomposition product in hydrogen in the range 450-600 deg C has been determined by the isothermal thermogravimetric method. The results show that a good linear relationship can be obtained from the plot of conversion vs time, and that the apparent activation energy is found to be 113.9 kJ/mol. The effects of particle size and partial pressure of hydrogen are examined in the reduction of the AUC decomposition product. The reduction mechanism and the structure of the particle are discussed according to the kinetics behaviour and SEM (scanning electron microscope) photographs
Decomposition of oxalate precipitates by photochemical reaction
International Nuclear Information System (INIS)
Yoo, J.H.; Kim, E.H.
1998-01-01
A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved in dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, the photochemical decomposition mechanism of oxalates in the presence of nitric acid was elucidated experimentally. The decomposition of oxalates was shown to be dominated by the reaction with the hydroxyl radical generated from nitric acid, rather than with the nitrite ion also formed from the nitrate ion. The decomposition rate of neodymium oxalate, chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was found to be 0.003 M/hr in 0.5 M HNO3 at room temperature when a mercury lamp was used as the light source. (author)
Tensor gauge condition and tensor field decomposition
Zhu, Ben-Chao; Chen, Xiang-Song
2015-10-01
We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.
Thermal decomposition and reaction of confined explosives
International Nuclear Information System (INIS)
Catalano, E.; McGuire, R.; Lee, E.; Wrenn, E.; Ornellas, D.; Walton, J.
1976-01-01
Some new experiments designed to accurately determine the time interval required to produce a reactive event in confined explosives subjected to temperatures that cause decomposition are described. Geometry and boundary conditions were both well defined, so that these experiments on the rapid thermal decomposition of HE are amenable to predictive modelling. Experiments have been carried out on TNT, TATB and on two plastic-bonded HMX-based high explosives, LX-04 and LX-10. When the results of these experiments are plotted as the logarithm of the time to explosion versus 1/T (an Arrhenius plot), the curves produced are remarkably linear. This contradicts the results obtained by an iterative solution of the Laplace equation for a system with a first-order-rate heat source; such calculations produce plots that display considerable curvature. The experiments have also shown that the time to explosion is strongly influenced by the void volume in the containment vessel. The experimental results are compared with calculations based on the heat flow equations coupled with first-order models of chemical decomposition; the comparisons demonstrate the need for a more realistic reaction model
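The linearity of the Arrhenius plot mentioned above can be checked numerically: if the time to explosion follows t = t0 exp(E/(RT)), a straight-line fit of log t against 1/T recovers E/R as the slope. The values of t0 and E below are hypothetical, chosen only to illustrate the fitting procedure:

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
E_eff = 150e3        # assumed effective activation energy, J/mol
t0 = 1e-9            # assumed pre-factor, s

T = np.linspace(450, 650, 9) + 273.15        # temperatures, K
t_explosion = t0 * np.exp(E_eff / (R * T))   # idealized Arrhenius times

# Linear fit of log(t) against 1/T recovers E_eff / R as the slope.
slope, intercept = np.polyfit(1.0 / T, np.log(t_explosion), 1)
print(f"recovered E = {slope * R / 1e3:.1f} kJ/mol")
```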
Gas hydrates forming and decomposition conditions analysis
Directory of Open Access Journals (Sweden)
А. М. Павленко
2017-07-01
Full Text Available The concept of gas hydrates has been defined; their brief description has been given; factors that affect the formation and decomposition of the hydrates have been reported; and their distribution, structure and the thermodynamic conditions determining the disposition of gas hydrates to form in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs from pipelines have been analyzed, and the necessity of their further study has been proved. In addition to their negative impact on the process of gas extraction, the properties of hydrates make it possible to outline the following possible fields of their industrial use: obtaining ultrahigh pressures in confined spaces at hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate in a given mode; obtaining cold due to heat absorption at hydrate decomposition; elimination of an open gas fountain by means of hydrate plugs in the bore hole of the gushing gasser; seawater desalination, based on the ability of hydrates to bind only water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high-temperature fog and clouds by means of hydrates; injection of water-hydrate emulsion into the productive strata to raise the oil recovery factor; obtaining cold in gas processing to cool the gas, etc.
Ahlberg, Johan; Jansson, Anton
2016-01-01
Hybrid securities do not constitute a new phenomenon in the Swedish capital markets. Most commonly, hybrids issued by Swedish real estate companies in recent years are preference shares. Corporate hybrid bonds on the other hand may be considered as somewhat of a new-born child in the family of hybrid instruments. These do, as all other hybrid securities, share some equity-like and some debt-like characteristics. Nevertheless, since 2013 the interest for the instrument has grown rapidly and ha...
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
Directory of Open Access Journals (Sweden)
Hui Lu
2014-01-01
Full Text Available Test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high-quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then, ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases: the initial population, and the crossover and mutation operators. To identify a good approach for hybridizing MOEA/D with chaos and to demonstrate the effectiveness of the improved IES, several experiments are performed. The Pareto fronts and the statistical results demonstrate that different chaotic maps in different phases have different effects for solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of a chaotic map and that of the problem is an essential factor for the application of chaotic maps. In addition, comparison experiments between CMOEA/D and variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
Breaking Dense Structures: Proving Stability of Densely Structured Hybrid Systems
Directory of Open Access Journals (Sweden)
Eike Möhlmann
2015-06-01
Full Text Available Abstraction and refinement are widely used in software development. Such techniques are valuable because they make it possible to handle even more complex systems. One key point is the ability to decompose a large system into subsystems, analyze those subsystems and deduce properties of the larger system. As cyber-physical systems tend to become more and more complex, such techniques become more appealing. In 2009, Oehlerking and Theel presented a (de)composition technique for hybrid systems. This technique is graph-based and constructs a Lyapunov function for hybrid systems having a complex discrete state space. The technique consists of (1) decomposing the underlying graph of the hybrid system into subgraphs, (2) computing multiple local Lyapunov functions for the subgraphs, and finally (3) composing the local Lyapunov functions into a piecewise Lyapunov function. A Lyapunov function can serve multiple purposes, e.g., it certifies stability or termination of a system or allows the construction of invariant sets, which in turn may be used to certify safety and security. In this paper, we propose an improvement to the decomposition technique which relaxes the graph structure before the decomposition is applied. Our relaxation significantly reduces the connectivity of the graph by exploiting super-dense switching. The relaxation makes the decomposition technique more efficient and, at the same time, applicable to a wider range of graph structures.
A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting
International Nuclear Information System (INIS)
Tang, Ling; Yu, Lean; Wang, Shuai; Li, Jianping; Wang, Shouyang
2012-01-01
Highlights: ► A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. ► The hybrid ensemble method is useful to predict time series with high volatility. ► The ensemble method can be used for both one-step and multi-step ahead forecasting. - Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of “decomposition and ensemble”. This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool to predict complex time series with high volatility and irregularity.
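The "decomposition and ensemble" principle above can be sketched in a few lines. To stay self-contained, the sketch substitutes a crude moving-average trend/residual split for EEMD and ridge regression on lagged values for LSSVR; the toy series, lag count and regularization strength are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series: smooth trend plus irregular component (stand-in for consumption data).
n = 200
t = np.arange(n)
series = 0.05 * t + np.sin(0.3 * t) + 0.2 * rng.standard_normal(n)

def moving_average(x, w):
    """Causal moving average, padded so the output has the input's length."""
    pad = np.r_[np.full(w - 1, x[0]), x]
    return np.convolve(pad, np.ones(w) / w, mode='valid')

# "Decompose": trend + residual (a crude stand-in for EEMD's IMFs).
trend = moving_average(series, 10)
residual = series - trend

def fit_predict_next(component, lags=5):
    """Ridge regression on lagged values (stand-in for LSSVR)."""
    X = np.column_stack([component[i:len(component) - lags + i] for i in range(lags)])
    y = component[lags:]
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(lags), X.T @ y)
    return component[-lags:] @ w

# "Ensemble": sum the one-step forecasts of the components.
forecast = fit_predict_next(trend) + fit_predict_next(residual)
print(f"one-step forecast: {forecast:.3f}, last value: {series[-1]:.3f}")
```

The design point is that each component is easier to predict than the raw series; EEMD and LSSVR refine both halves of this split.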
International Nuclear Information System (INIS)
Heckel, J.
2002-01-01
Full text: In the last 10 years significant innovations in EDXRF, e.g. total reflection XRF or polarized beam XRF, were utilized in different industrial applications. The goal of these developments was the reduction of background within the spectra. Excellent detection limits and sensitivities demonstrate the success of these new techniques. Nevertheless, further improvements are possible by using Si drift detectors. These detectors allow the processing of input count rates up to 10^6 cps, compared to 10^5 cps for Si(Li) detectors. New excitation optics are necessary to produce such count rates. One possibility is the use of doubly curved crystals between tube and sample. These crystals enable the reflection of the primary beam within the given solid angle (0.4π) of an end-window tube onto the sample. Using such brightness optics, excellent sensitivities, mainly for light elements, are achievable. The combination of a BRAGG crystal as a wavelength-dispersive component and a solid state detector as an energy-dispersive component creates a new technique: hybrid XRF. Copyright (2002) Australian X-ray Analytical Association Inc.
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of the three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realizes the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.
Hybrid mimics and hybrid vigor in Arabidopsis
Wang, Li; Greaves, Ian K.; Groszmann, Michael; Wu, Li Min; Dennis, Elizabeth S.; Peacock, W. James
2015-01-01
F1 hybrids can outperform their parents in yield and vegetative biomass, features of hybrid vigor that form the basis of the hybrid seed industry. The yield advantage of the F1 is lost in the F2 and subsequent generations. In Arabidopsis, from F2 plants that have a F1-like phenotype, we have by recurrent selection produced pure breeding F5/F6 lines, hybrid mimics, in which the characteristics of the F1 hybrid are stabilized. These hybrid mimic lines, like the F1 hybrid, have larger leaves than the parent plant, and the leaves have increased photosynthetic cell numbers, and in some lines, increased size of cells, suggesting an increased supply of photosynthate. A comparison of the differentially expressed genes in the F1 hybrid with those of eight hybrid mimic lines identified metabolic pathways altered in both; these pathways include down-regulation of defense response pathways and altered abiotic response pathways. F6 hybrid mimic lines are mostly homozygous at each locus in the genome and yet retain the large F1-like phenotype. Many alleles in the F6 plants, when they are homozygous, have expression levels different to the level in the parent. We consider this altered expression to be a consequence of transregulation of genes from one parent by genes from the other parent. Transregulation could also arise from epigenetic modifications in the F1. The pure breeding hybrid mimics have been valuable in probing the mechanisms of hybrid vigor and may also prove to be useful hybrid vigor equivalents in agriculture. PMID:26283378
Bergshoeff, Eric A.; Kleinschmidt, Axel; Riccioni, Fabio
2012-01-01
We classify the half-supersymmetric "domain walls," i.e., branes of codimension one, in toroidally compactified IIA/IIB string theory and show to which gauged supergravity theory each of these domain walls belong. We use as input the requirement of supersymmetric Wess-Zumino terms, the properties of
Effect of sulfation on the surface activity of CaO for N{sub 2}O decomposition
Energy Technology Data Exchange (ETDEWEB)
Wu, Lingnan, E-mail: wulingnan@126.com [School of Energy, Power and Mechanical Engineering, North China Electric Power University, 102206 Beijing (China); National Engineering Laboratory for Biomass Power Generation Equipment, North China Electric Power University, 102206 Beijing (China); Hu, Xiaoying, E-mail: huxy@ncepu.edu.cn [National Engineering Laboratory for Biomass Power Generation Equipment, North China Electric Power University, 102206 Beijing (China); Qin, Wu, E-mail: qinwugx@126.com [National Engineering Laboratory for Biomass Power Generation Equipment, North China Electric Power University, 102206 Beijing (China); Dong, Changqing, E-mail: cqdong1@163.com [National Engineering Laboratory for Biomass Power Generation Equipment, North China Electric Power University, 102206 Beijing (China); Yang, Yongping, E-mail: yypncepu@163.com [School of Energy, Power and Mechanical Engineering, North China Electric Power University, 102206 Beijing (China)
2015-12-01
Graphical abstract: - Highlights: • Sulfation of CaO (1 0 0) surface greatly deactivates its surface activity for N{sub 2}O decomposition. • An increase of sulfation degree leads to a decrease of CaO surface activity for N{sub 2}O decomposition. • Sulfation from CaSO{sub 3} into CaSO{sub 4} is the crucial step for deactivating the surface activity for N{sub 2}O decomposition. • The electronic interaction CaO (1 0 0)/CaSO{sub 4} (0 0 1) interface is limited to the bottom layer of CaSO{sub 4} (0 0 1) and the top layer of CaO (1 0 0). • CaSO{sub 4} (0 0 1) and (0 1 0) surfaces show negligible catalytic ability for N{sub 2}O decomposition. - Abstract: Limestone addition to circulating fluidized bed boilers for sulfur removal affects nitrous oxide (N{sub 2}O) emission at the same time, but mechanism of how sulfation process influences the surface activity of CaO for N{sub 2}O decomposition remains unclear. In this paper, we investigated the effect of sulfation on the surface properties and catalytic activity of CaO for N{sub 2}O decomposition using density functional theory calculations. Sulfation of CaO (1 0 0) surface by the adsorption of a single gaseous SO{sub 2} or SO{sub 3} molecule forms stable local CaSO{sub 3} or CaSO{sub 4} on the CaO (1 0 0) surface with strong hybridization between the S atom of SO{sub x} and the surface O anion. The formed local CaSO{sub 3} increases the barrier energy of N{sub 2}O decomposition from 0.989 eV (on the CaO (1 0 0) surface) to 1.340 eV, and further sulfation into local CaSO{sub 4} remarkably increases the barrier energy to 2.967 eV. Sulfation from CaSO{sub 3} into CaSO{sub 4} is therefore the crucial step for deactivating the surface activity for N{sub 2}O decomposition. Completely sulfated CaSO{sub 4} (0 0 1) and (0 1 0) surfaces further validate the negligible catalytic ability of CaSO{sub 4} for N{sub 2}O decomposition.
Energy Technology Data Exchange (ETDEWEB)
Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)
2016-11-01
The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO){sub 6}) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO){sub 6} complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO){sub 6} and W(CO){sub 6} are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO){sub 6} is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
A hybrid approach to fault diagnosis of roller bearings under variable speed conditions
Wang, Yanxue; Yang, Lin; Xiang, Jiawei; Yang, Jianwei; He, Shuilong
2017-12-01
Rolling element bearings are one of the main elements in rotating machines, whose failure may lead to a fatal breakdown and significant economic losses. Conventional vibration-based diagnostic methods rely on the stationarity assumption, and thus are not applicable to the diagnosis of bearings working under varying speeds. This constraint significantly limits the industrial application of bearing diagnostics. A hybrid approach to the fault diagnosis of roller bearings under variable speed conditions is proposed in this work, based on computed order tracking (COT) and a variational mode decomposition (VMD)-based time-frequency representation (VTFR). COT is utilized to resample the non-stationary vibration signal in the angular domain, while VMD is used to decompose the resampled signal into a number of band-limited intrinsic mode functions (BLIMFs). A VTFR is then constructed based on the estimated instantaneous frequency and instantaneous amplitude of each BLIMF. Moreover, the Gini index and the time-frequency kurtosis are proposed to quantitatively measure the sparsity and the concentration of the time-frequency representation, respectively. The effectiveness of the VTFR for extracting nonlinear components has been verified on a bat signal. Results of this numerical simulation also show that the sparsity and concentration of the VTFR are better than those of the short-time Fourier transform, continuous wavelet transform, Hilbert-Huang transform and Wigner-Ville distribution techniques. Several experimental results have further demonstrated that the proposed method can reliably detect bearing faults under variable speed conditions.
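The COT step, resampling a time-sampled vibration signal onto a uniform shaft-angle grid, can be sketched as follows. The synthetic run-up signal, sampling rate, speed profile and 64 samples per revolution are assumptions for illustration; in the angular domain the speed-varying component collapses to a fixed order:

```python
import numpy as np

fs = 10_000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# Shaft speeding up linearly: instantaneous frequency 20 -> 40 Hz.
f_inst = 20 + 20 * t
phase = 2 * np.pi * np.cumsum(f_inst) / fs      # shaft angle, rad

# Vibration locked to the 3rd shaft order, non-stationary in time.
signal = np.sin(3 * phase)

# Order tracking: resample from uniform time to uniform shaft angle.
n_rev = phase[-1] / (2 * np.pi)
samples_per_rev = 64
angle_uniform = np.linspace(0, phase[-1], int(n_rev * samples_per_rev))
resampled = np.interp(angle_uniform, phase, signal)

# In the angular domain the 3rd order is a pure tone at 3 cycles/rev.
spec = np.abs(np.fft.rfft(resampled))
orders = np.fft.rfftfreq(len(resampled), d=1.0 / samples_per_rev)
print(f"dominant order: {orders[spec.argmax()]:.2f}")
```

VMD would then be applied to `resampled` rather than to the raw time-domain signal.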
International Nuclear Information System (INIS)
Anderson, D.V.; Shumaker, D.E.
1993-01-01
From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this step in parallel. The authors' code sorts and reorders the particles in spatial order. This allows them to greatly reduce the memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. In this hybrid simulation the electrons are modeled as a fluid, and the field equations solved are obtained from the electron momentum equation together with the pre-Maxwell equations (displacement current neglected). Either zero or finite electron mass can be used in the electron model. The resulting field equations are solved with an iterative explicit procedure which is thus trivial to parallelize. Likewise, the field interpolations and the particle pushing are simple to parallelize. The deposition, sorting, and reordering phases are less simple, and it is for these that the authors present detailed algorithms. They have now successfully tested the parallel version of HOPS in serial mode, and it is being readied for parallel execution on the Cray C-90. They will then port HOPS to a massively parallel computer in the next year
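The sort-then-deposit idea can be shown in a compact serial sketch. NumPy stands in for the vectorized deposition here; the 1-D grid and unit particle weights are illustrative choices, not the HOPS implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_cells = 100_000, 64
x = rng.uniform(0.0, 1.0, n_particles)          # particle positions in [0, 1)
cell = (x * n_cells).astype(np.int64)           # owning cell index

# Sort particles by cell so memory access during deposition is contiguous.
order = np.argsort(cell, kind='stable')
x_sorted, cell_sorted = x[order], cell[order]

# Deposition: accumulate particle weights into their cells.
weights = np.ones(n_particles)
density = np.zeros(n_cells)
np.add.at(density, cell_sorted, weights)        # many-to-one scatter-add

print(f"total deposited weight: {density.sum():.0f}")
```

After sorting, particles belonging to one cell (and hence one subdomain) are contiguous in memory, which is what makes direct indexing and domain decomposition straightforward.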
Shaqura, Mohammad; Claudel, Christian
2015-01-01
, low power autopilots in real-time. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip
DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS
GIRAULT, V.; PENCHEVA, G.; WHEELER, M. F.; WILDEY, T.
2011-01-01
by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where
Antonietti, P. F.; Ayuso Dios, Blanca; Bertoluzza, S.; Pennacchio, M.
2014-01-01
We propose and study an iterative substructuring method for an h-p Nitsche-type discretization, following the original approach introduced in Bramble et al. Math. Comp. 47(175):103–134, (1986) for conforming methods. We prove quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioner. Numerical experiments assess the performance of the preconditioner and verify the theory. © 2014, Springer-Verlag Italia.
Moussawi, Ali; Lubineau, Gilles; Xu, Jiangping; Pan, Bing
2015-01-01
Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential
Domain Decomposition for Computing Extremely Low Frequency Induced Current in the Human Body
Perrussel , Ronan; Voyer , Damien; Nicolas , Laurent; Scorretti , Riccardo; Burais , Noël
2011-01-01
International audience; Computation of electromagnetic fields in high resolution computational phantoms requires solving large linear systems. We present an application of Schwarz preconditioners with Krylov subspace methods for computing extremely low frequency induced fields in a phantom issued from the Visible Human.
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.; Zhang, L.
2002-01-01
As a part of a research project co-founded by the European Community, a series of 15 damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case. A dense array of instruments allowed the identification...
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.; Cantieni, R.
2001-01-01
A series of 15 progressive damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case with a relatively large number of sensors. Changes in frequencies, damping ratios and MAC values were determined...
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Zhang, Lingmi
2007-01-01
As a part of a research project co-founded by the European Community, a series of 15 damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case. A dense array of instruments allowed the identification...
Energy Technology Data Exchange (ETDEWEB)
Stathopoulos, A.; Fischer, C.F. [Vanderbilt Univ., Nashville, TN (United States); Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem, Ax = {lambda}x, is central to many scientific applications. Among many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) are the most widely used methods. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual ((M {minus} {lambda}I){sup {minus}1}(A {minus} {lambda}I)x) is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.
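A bare-bones Generalized Davidson iteration for the smallest eigenpair, with a diagonal (Jacobi) preconditioner playing the role of (M - λI)^-1 applied to the residual, might look as follows; the diagonally dominant test matrix and the tolerances are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Test matrix: diagonally dominant symmetric, so a diagonal preconditioner works.
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2

def davidson_smallest(A, tol=1e-8, max_iter=50):
    """Generalized Davidson iteration for the smallest eigenpair of symmetric A."""
    n = A.shape[0]
    V = np.zeros((n, 0))
    v = np.eye(n, 1).ravel()                     # initial guess
    diag = np.diag(A)
    for _ in range(max_iter):
        # Orthogonalize the new direction against the search space.
        v -= V @ (V.T @ v)
        v /= np.linalg.norm(v)
        V = np.column_stack([V, v])
        # Rayleigh-Ritz procedure on the subspace.
        H = V.T @ A @ V
        theta, s = np.linalg.eigh(H)
        lam, y = theta[0], V @ s[:, 0]
        r = A @ y - lam * y                      # residual
        if np.linalg.norm(r) < tol:
            return lam, y
        # Diagonal (Jacobi) preconditioner applied to the residual.
        v = r / (diag - lam + 1e-12)
    return lam, y

lam, y = davidson_smallest(A)
print(f"smallest eigenvalue ~ {lam:.6f}")
```

Each step is more expensive than a Lanczos step, but the preconditioned expansion direction typically cuts the iteration count sharply, which is the trade-off the abstract describes.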
Robust regularized singular value decomposition with application to mortality data
Zhang, Lingsong
2013-09-01
We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem, where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iteratively reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application. © Institute of Mathematical Statistics, 2013.
Near-field terahertz imaging of ferroelectric domains in barium titanate
Czech Academy of Sciences Publication Activity Database
Berta, Milan; Kadlec, Filip
2010-01-01
Roč. 83, 10-11 (2010), 985-993 ISSN 0141-1594 R&D Projects: GA MŠk LC512 Institutional research plan: CEZ:AV0Z10100520 Keywords : singular value decomposition * domain structure imaging * near-field terahertz microscopy * subwavelength resolution Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.006, year: 2010
Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition
Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato
2018-05-01
We propose a method to decompose the normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, frequency uncertainty arises, and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate the physical properties of the normal modes.
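Although the sparsity-promoting variant adds a mode-selection step, the core DMD computation it builds on can be sketched briefly. The synthetic two-oscillator field and the truncation rank below are assumptions for illustration; for data generated exactly by damped exponentials, the DMD eigenvalues recover the generating frequencies and damping rates:

```python
import numpy as np

# Synthetic "CP-like" field: two damped traveling waves sampled on a grid.
dt = 0.01
t = np.arange(0, 5, dt)
x_grid = np.linspace(0, 1, 50)[:, None]
data = (np.exp(-0.5 * t) * np.sin(np.pi * x_grid - 2 * np.pi * 3 * t)
        + np.exp(-0.2 * t) * np.sin(2 * np.pi * x_grid - 2 * np.pi * 7 * t))

# Exact DMD: fit a linear map X2 = A X1 between successive snapshots.
X1, X2 = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                            # truncation rank (2 modes x conjugate pairs)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = Ur.conj().T @ X2 @ Vr / sr              # reduced operator
eigvals, W = np.linalg.eig(Atilde)

# Continuous-time frequencies and damping rates from the DMD eigenvalues.
omega = np.log(eigvals) / dt
freqs = np.abs(omega.imag) / (2 * np.pi)
print("recovered frequencies (Hz):", np.sort(np.unique(np.round(freqs, 2))))
```

Unlike a Fourier basis, each DMD eigenvalue carries a damping rate (`omega.real`) along with its frequency, which is why the approach suits damped-oscillator models.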
Decomposition of the swirling flow field downstream of Francis turbine runner
International Nuclear Information System (INIS)
Rudolf, P; Štefan, D
2012-01-01
Practical application of proper orthogonal decomposition (POD) is presented. The spatio-temporal behaviour of the coherent vortical structures in the draft tube of a hydraulic turbine is studied for two part-load operating points. POD makes it possible to identify the eigenmodes which compose the flow field and to rank the modes according to their energy. The swirling flow fields are decomposed, which provides information about their streamwise and crosswise development and about the energy transfer among modes. The presented methodology also assigns frequencies to the particular modes, which helps to link the spectral properties of the flow with concrete mode shapes. Thus POD offers a complementary view to current time-domain simulations or measurements.
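For snapshot data, the POD energy ranking described above reduces to an SVD of the mean-subtracted snapshot matrix, with modal energies proportional to the squared singular values. A self-contained sketch on synthetic data (the two-mode field and noise level are assumptions, not the turbine dataset):

```python
import numpy as np

rng = np.random.default_rng(4)

# Snapshot matrix: rows = spatial points, columns = time snapshots.
n_x, n_t = 200, 80
x = np.linspace(0, 1, n_x)[:, None]
t = np.linspace(0, 10, n_t)
snapshots = (3.0 * np.sin(np.pi * x) * np.cos(2 * t)        # energetic mode
             + 0.5 * np.sin(2 * np.pi * x) * np.cos(5 * t)  # weaker mode
             + 0.05 * rng.standard_normal((n_x, n_t)))      # noise

# POD via SVD of the mean-subtracted snapshot matrix.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vh = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Modal energies (squared singular values) rank the modes.
energy = s**2 / np.sum(s**2)
print("energy fraction of first 3 modes:", np.round(energy[:3], 3))
```

The columns of `U` are the spatial eigenmodes and the rows of `Vh` their temporal coefficients, from which mode frequencies can be read off.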
Ohmichi, Yuya
2017-07-01
In this letter, we propose a simple and efficient framework for dynamic mode decomposition (DMD) and mode selection on large datasets. The proposed framework explicitly introduces a preconditioning step, based on incremental proper orthogonal decomposition (POD), into the DMD and mode-selection algorithms. With this preconditioning, DMD and mode selection can be performed with low memory consumption and can therefore be applied to large datasets. Additionally, we propose a simple mode-selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
Tomar, Kiledar S.; Kumar, Shashi; Tolpekin, Valentyn A.; Joshi, Sushil K.
2016-05-01
Forests act as carbon sinks and thereby help maintain the atmospheric carbon cycle; deforestation unbalances this cycle and contributes to climate change. Estimation of forest biophysical parameters such as biomass is therefore a necessity. PolSAR can discriminate the shares of the scattering mechanisms (surface, double-bounce and volume scattering) within a single SAR resolution cell. Studies have shown that volume scattering, which arises mainly from randomly oriented vegetation structures, is a significant parameter for forest biophysical characterization. This random orientation of forest structure shifts the polarization orientation angle (POA) of the polarization ellipse, which disturbs the radar signature and leads to overestimation of volume scattering and underestimation of double-bounce scattering after decomposition of fully polarimetric (PolSAR) data. Hybrid polarimetry has the advantage of zero POA shift, owing to the rotational symmetry that follows from circular transmission of the electromagnetic waves. The prime objective of this study was to compare the potential of hybrid PolSAR and fully PolSAR data for above-ground biomass (AGB) estimation using the Extended Water Cloud Model, with validation against field biomass. The study site was Barkot Forest, Uttarakhand, India. To obtain the decomposition components, m-alpha and Yamaguchi decomposition modelling were applied to the hybrid and fully PolSAR data, respectively, and RGB composite images were generated for both decomposition techniques. The contributions of all scattering mechanisms were extracted for each plot. The R² values between modelled AGB and field biomass for hybrid and fully PolSAR data were 0.5127 and 0.4625, respectively, and the corresponding RMSE values were 63.156 t ha⁻¹ and 73.424 t ha⁻¹. On the basis of RMSE and R², this study suggests hybrid PolSAR decomposition modelling to retrieve scattering
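The reported accuracy metrics are the ordinary coefficient of determination and root-mean-square error between field-measured and modelled AGB. For reference, a minimal sketch of both (the sample values used in any check are illustrative, not the Barkot data):

```python
import numpy as np

def r2_rmse(field, modelled):
    """R^2 and RMSE between field-measured and modelled AGB (t/ha).

    Standard definitions: R^2 = 1 - SS_res/SS_tot, RMSE = sqrt(mean residual^2).
    """
    resid = field - modelled
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((field - field.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid**2))
    return r2, rmse
```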
Thermodynamic anomaly in magnesium hydroxide decomposition
International Nuclear Information System (INIS)
Reis, T.A.
1983-08-01
The origin of the discrepancy in equilibrium water vapor pressure measurements for the reaction Mg(OH)₂(s) = MgO(s) + H₂O(g), when determined by Knudsen effusion and by static manometry at the same temperature, was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that, on extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approximately 10⁻⁴ of the true thermodynamic equilibrium vapor pressure previously established by Giauque and Archibald using statistical-mechanical calculations of the entropy of water vapor. This large difference in vapor pressures suggests the possible formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mol. It is shown here that the experimental results are qualitatively independent of the type of Mg(OH)₂ used as starting material, which confirms the inferences of Kay and Gregory; most forms of Mg(OH)₂ are therefore considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that only the equilibrium NaCl-type MgO forms during the reaction, and no different phases result from samples prepared in Knudsen cells. Surface-area data indicate that the MgO molar surface area remains constant throughout the reaction at low decomposition temperatures, and no significant annealing occurs below 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution-calorimetric measurements indicate no inherent higher energy content in the MgO produced in Knudsen cells. The Knudsen-cell vapor pressure discrepancy may therefore reflect the formation of a transient metastable MgO or Mg(OH)₂-MgO solid solution during continuous thermal decomposition in Knudsen cells
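The quoted numbers are mutually consistent: a product phase metastable by about 48 kJ/mol depresses the apparent equilibrium vapor pressure by the Boltzmann factor exp(-ΔG/RT). At a representative decomposition temperature (600 K is chosen here purely for illustration; the abstract does not state one) this factor is of the same order as the observed ~10⁻⁴ ratio:

```python
import math

R = 8.314    # gas constant, J/(mol K)
dG = 48e3    # J/mol, metastability inferred by Kay and Gregory
T = 600.0    # K, an assumed representative Knudsen-cell temperature

# A product phase higher in free energy by dG lowers the apparent
# equilibrium water vapor pressure by exp(-dG/RT).
ratio = math.exp(-dG / (R * T))
print(f"p_apparent / p_equilibrium ~ {ratio:.1e}")   # on the order of 10^-5 to 10^-4
```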
Efficient Morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural trade-off between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, including forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the simulation meshes. We also propose the use of spatial tau-maps in addition to the original temporal tau-maps; these provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine-simulation data sets.
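Combinatorially, Morse decomposition via tau-maps amounts to a strongly-connected-components computation on a directed graph whose nodes are mesh cells and whose edges record which cells a cell's image under the tau-map overlaps. A stdlib-only sketch under that abstraction (building the graph from an actual vector field, the hard part of the paper, is omitted):

```python
from collections import defaultdict

def morse_sets(adj):
    """Candidate Morse sets from a cell-transition graph.

    `adj` maps each mesh cell to the cells that its image under the
    tau-map overlaps (a combinatorial outer approximation of the flow).
    The recurrent dynamics is captured by the strongly connected
    components (Kosaraju's algorithm, iterative); components that are
    nontrivial, or trivial with a self-loop, are kept as Morse sets.
    The condensation of the graph over these sets gives the MCG.
    """
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    # Pass 1: DFS on the original graph, recording finish order.
    order, seen = [], set()
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj.get(s, ())))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(adj.get(nxt, ()))))
    # Pass 2: DFS on the reversed graph in decreasing finish order.
    radj = defaultdict(list)
    for u, vs in adj.items():
        for v in vs:
            radj[v].append(u)
    comps, seen = [], set()
    for s in reversed(order):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            node = stack.pop()
            comp.append(node)
            for nxt in radj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        comps.append(comp)
    # Keep recurrent components: size > 1, or a single cell with a self-loop.
    return [set(c) for c in comps
            if len(c) > 1 or c[0] in adj.get(c[0], ())]
```

A 2-cycle of cells then plays the role of a discretized periodic orbit, and a self-looping cell that of a fixed point, while purely transient cells are discarded.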
Methyl Iodide Decomposition at BWR Conditions
International Nuclear Information System (INIS)
Pop, Mike; Bell, Merl
2012-09-01
Based on favourable results from short-term testing of methanol addition at an operating BWR plant, AREVA has performed numerous studies in support of the engineering and plant safety evaluations required prior to extended injection of methanol. The current paper presents data from a study intended to provide further understanding of methyl iodide decomposition as it affects the assessment of methyl iodide formation during methanol application at BWR plants. It describes the results of decomposition testing under UV-C light at laboratory conditions and their effect on the methyl iodide production evaluation. The study of methyl iodide formation and decomposition as affected by methanol addition is one phase of a larger AREVA effort to provide a generic plant safety evaluation prior to long-term methanol injection at an operating BWR; other testing phases have investigated the compatibility of methanol with fuel construction materials, plant structural materials, plant consumable materials (i.e., elastomers and coatings), and ion-exchange resins. Methyl iodide is known to be very unstable and is typically preserved with copper metal or other stabilizing materials when produced and stored. It is even more unstable when exposed to light, heat, radiation, and water. Additionally, it is known that methyl iodide decomposes radiolytically, and that this effect may be simulated using ultraviolet radiation (UV-C) [2]. In the tests described in this paper, a UV-C light source provides the activation energy for the formation of methyl iodide; this is similar to the effect expected from Cherenkov radiation present in a reactor core after shutdown. Based on this testing, it is concluded that injection of methanol at concentrations below 2.5 ppm in BWR applications to mitigate IGSCC of internals is inconsequential to the accident conditions postulated in the FSAR as they relate to methyl iodide formation
Task Decomposition Module For Telerobot Trajectory Generation
Wavering, Albert J.; Lumia, Ron
1988-10-01
A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific, state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.
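The sensory-interactive case described above can be sketched as a per-cycle replanning loop: each control cycle re-reads the latest target estimate and takes a bounded step toward it, rather than committing to a plan computed a priori. All names here are illustrative, not the FTS interfaces:

```python
import math

def replanned_path(get_target, start, step, cycles):
    """Sensory-interactive trajectory generation by per-cycle replanning.

    `get_target(k)` stands in for a vision-system update of a possibly
    moving target; each cycle the manipulator moves at most `step` toward
    the most recent estimate. Returns the executed 2-D path.
    """
    x, y = start
    path = [(x, y)]
    for k in range(cycles):
        tx, ty = get_target(k)          # latest sensor update
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist <= step:                # close enough: snap to target
            x, y = tx, ty
        else:                           # bounded step along the line of sight
            x += step * dx / dist
            y += step * dy / dist
        path.append((x, y))
    return path
```

The alternative described in the abstract, a state-dependent plan, would replace the loop body with a precomputed feedback law evaluated at the current state.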
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Decomposition kinetics of aminoborane in aqueous solutions
International Nuclear Information System (INIS)
Shvets, I.B.; Erusalimchik, I.G.
1984-01-01
The kinetics of aminoborane hydrolysis were studied by the method of galvanostatic polarization curves on a platinum electrode in buffer solutions at pH 3, 5, and 7. The supposition that aminoborane hydrolysis is first order in aminoborane is confirmed. The rate constant of aminoborane decomposition is k = 2.5×10⁻⁵ s⁻¹ at pH 5 and k = 1.12×10⁻⁴ s⁻¹ at pH 3
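With first-order kinetics, the reported rate constants translate directly into remaining fractions and half-lives via c(t)/c₀ = exp(-kt) and t½ = ln 2 / k; a small sketch using the abstract's values:

```python
import math

def first_order_remaining(k, t):
    """Fraction of aminoborane remaining after time t (s) under first-order decay."""
    return math.exp(-k * t)

k_pH5 = 2.5e-5   # s^-1, rate constant at pH 5 (from the abstract)
k_pH3 = 1.12e-4  # s^-1, rate constant at pH 3

# Half-life t_1/2 = ln 2 / k
half_life_pH5 = math.log(2) / k_pH5   # about 2.8e4 s (roughly 7.7 h)
half_life_pH3 = math.log(2) / k_pH3   # about 6.2e3 s (roughly 1.7 h)
```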
Thermal decomposition of uranyl sulphate hydrate
International Nuclear Information System (INIS)
Sato, T.; Ozawa, F.; Ikoma, S.
1980-01-01
The thermal decomposition of uranyl sulphate hydrate (UO₂SO₄·3H₂O) has been investigated by thermogravimetry, differential thermal analysis, X-ray diffraction and infrared spectrophotometry. As a result, it is concluded that uranyl sulphate hydrate decomposes thermally in the sequence UO₂SO₄·3H₂O → UO₂SO₄·xH₂O (2.5 ≥ x ≥ 2) → UO₂SO₄·2H₂O → UO₂SO₄·H₂O → UO₂SO₄ → α-UO₂SO₄ → β-UO₂SO₄ → U₃O₈. (author)
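The dehydration sequence can be cross-checked against the mass-loss plateaus thermogravimetry would show, computed from approximate molar masses (a sketch; masses are rounded textbook values, not from the paper):

```python
# Approximate atomic masses (g/mol)
M = {'U': 238.03, 'S': 32.06, 'O': 16.00, 'H': 1.008}

def molar_mass(u, s, o, h):
    """Molar mass of a U_u S_s O_o H_h formula unit."""
    return u * M['U'] + s * M['S'] + o * M['O'] + h * M['H']

trihydrate = molar_mass(1, 1, 9, 6)   # UO2SO4·3H2O (O: 2 + 4 + 3, H: 6)
anhydrous = molar_mass(1, 1, 6, 0)    # UO2SO4
u3o8_per_U = (3 * M['U'] + 8 * M['O']) / 3   # final residue per mole of U

water_loss = 1 - anhydrous / trihydrate      # dehydration: ~12.9 % of start mass
final_residue = u3o8_per_U / trihydrate      # residue after U3O8 step: ~66.8 %
```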