Domain decomposition method for solving elliptic problems in unbounded domains
International Nuclear Information System (INIS)
Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.
1991-01-01
Computational aspects of the box domain decomposition (DD) method for solving boundary value problems in an unbounded domain are discussed. A new variant of the DD method for elliptic problems in unbounded domains is suggested. It is based on a partitioning of the unbounded domain adapted to the given asymptotic decay of the unknown function at infinity. A comparison of computational expenditures is given for the boundary integral method and the suggested DD algorithm. 29 refs.; 2 figs.; 2 tabs
Domain decomposition methods for fluid dynamics
International Nuclear Information System (INIS)
Clerc, S.
1995-01-01
A domain decomposition method for steady-state, subsonic fluid dynamics calculations is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given to the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods are illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Domain decomposition methods and parallel computing
International Nuclear Information System (INIS)
Meurant, G.
1991-01-01
In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the conjugate gradient iterative algorithm chosen to solve it is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems encountered in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
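The interplay this abstract describes, conjugate gradient iterations accelerated by a domain decomposition preconditioner, can be illustrated on a toy problem. The sketch below is a generic textbook-style illustration, not Meurant's code: the 1D Poisson matrix tridiag(-1, 2, -1), the overlapping block layout, and the sizes are all our assumptions.

```python
# Minimal sketch: CG for A = tridiag(-1, 2, -1), preconditioned by a
# one-level additive Schwarz method built from overlapping index blocks.
# (Illustrative toy, not the paper's parallel implementation.)

def apply_A(x):
    """y = A x for A = tridiag(-1, 2, -1)."""
    n = len(x)
    return [2.0 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def tridiag_solve(n, rhs):
    """Solve tridiag(-1, 2, -1) x = rhs by the Thomas algorithm."""
    c = [0.0] * n
    d = [0.0] * n
    c[0] = -0.5
    d[0] = rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + c[i - 1]          # pivot after elimination
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    x = [0.0] * n
    x[n - 1] = d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def schwarz_precond(r, blocks):
    """z = sum_k R_k^T A_k^{-1} R_k r: exact local solves, summed."""
    z = [0.0] * len(r)
    for lo, hi in blocks:            # half-open index ranges [lo, hi)
        loc = tridiag_solve(hi - lo, r[lo:hi])
        for k in range(hi - lo):
            z[lo + k] += loc[k]
    return z

def pcg(b, blocks, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient with the Schwarz preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = schwarz_precond(r, blocks)
    p = z[:]
    rz = sum(a * c for a, c in zip(r, z))
    for it in range(1, maxit + 1):
        Ap = apply_A(p)
        alpha = rz / sum(a * c for a, c in zip(p, Ap))
        x = [a + alpha * c for a, c in zip(x, p)]
        r = [a - alpha * c for a, c in zip(r, Ap)]
        if max(abs(v) for v in r) < tol:
            return x, it
        z = schwarz_precond(r, blocks)
        rz_new = sum(a * c for a, c in zip(r, z))
        p = [a + (rz_new / rz) * c for a, c in zip(z, p)]
        rz = rz_new
    return x, maxit

n = 80
blocks = [(0, 30), (20, 60), (50, 80)]   # three overlapping subdomains
x, iters = pcg([1.0] * n, blocks)
residual = max(abs(bi - ai) for bi, ai in zip([1.0] * n, apply_A(x)))
```

Each preconditioner application solves the local block problems independently, which is what maps onto parallel hardware; a production code would distribute the blocks over processors and add a coarse space for scalability.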
Domain decomposition method for solving the neutron diffusion equation
International Nuclear Information System (INIS)
Coulomb, F.
1989-03-01
The aim of this work is to study methods for solving the neutron diffusion equation; we are interested in methods based on a classical finite element discretization and well suited for use on parallel computers. Domain decomposition methods seem to answer this need. This study deals with a decomposition of the domain. A theoretical study is carried out for Lagrange finite elements and some examples are given; in the case of mixed dual finite elements, the study is based on examples.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane
2010-01-01
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint.
Domain decomposition methods and deflated Krylov subspace iterations
Nabben, R.; Vuik, C.
2006-01-01
The balancing Neumann-Neumann (BNN) and the additive coarse grid correction (BPS) preconditioner are fast and successful preconditioners within domain decomposition methods for solving partial differential equations. For certain elliptic problems these preconditioners lead to condition numbers which
A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Mei-qun Jiang; Pei-liang Dai
2006-01-01
A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximations in R^N (N=2,3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented on parallel machines with only simple local communications.
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Domain decomposition methods for the neutron diffusion problem
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2010-01-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)
Domain decomposition methods for core calculations using the MINOS solver
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2007-01-01
Cell-by-cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and their efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
Domain decomposition methods for solving an image problem
Energy Technology Data Exchange (ETDEWEB)
Tsui, W.K.; Tong, C.S. (Hong Kong Baptist College, Hong Kong)
1994-12-31
The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential operators, which makes them particularly useful for solving elliptic partial differential equations. In this paper, the authors apply the so-called covering preconditioner, which is based on the information of the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image which has been degraded by a known convolution process and additive Gaussian noise.
Europlexus: a domain decomposition method in explicit dynamics
International Nuclear Information System (INIS)
Faucher, V.; Hariddh, Bung; Combescure, A.
2003-01-01
Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the 'parallel subspace correction' or 'additive Schwarz' method; other simple technical tools include 'local-global' and 'global-local' techniques, the former for constructing a subspace preconditioner based on a preconditioner on the whole space, and the latter for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the 'substructuring method', and the other, based on local Neumann problems, is related to the 'Neumann-Neumann method' and the 'balancing method'. All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
Neutron transport solver parallelization using a Domain Decomposition method
International Nuclear Information System (INIS)
Van Criekingen, S.; Nataf, F.; Have, P.
2008-01-01
A domain decomposition (DD) method is investigated for the parallel solution of the second-order even-parity form of the time-independent Boltzmann transport equation. The spatial discretization is performed using finite elements, and the angular discretization using spherical harmonic expansions (P_N method). The main idea developed here is due to P.L. Lions. It consists in having sub-domains exchange not only interface point flux values, but also interface flux 'derivative' values. (The word 'derivative' is used here with quotes because, in the case considered, it in fact refers to the Ω·∇ operator, with Ω the angular variable vector and ∇ the spatial gradient operator.) A parameter α is introduced as a proportionality coefficient between point flux and 'derivative' values. This parameter can be tuned - so far heuristically - to optimize the method. (authors)
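The 'derivative'-exchange idea with a tunable parameter α can be made concrete in a much simpler setting than P_N transport. The toy sketch below is our illustration, not the authors' solver: the model problem -u'' = 1 on (0,1), the first-order one-sided interface differences, and the value α = 2 are all assumptions. It runs a Lions-style Robin-Robin iteration between two non-overlapping subdomains that exchange α·u + du/dn data at the interface.

```python
# Toy Robin-Robin (Lions-type) iteration for -u'' = 1 on (0,1),
# u(0) = u(1) = 0, split at x = 0.5 into two non-overlapping subdomains.
# One-sided interface differences: a first-order simplification.

N = 64                      # grid: nodes 0..N, h = 1/N, interface at node m
h = 1.0 / N
m = N // 2
alpha = 2.0                 # Robin parameter (near-optimal here by 1D analysis)

def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting (A, b modified)."""
    n = len(b)
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

def solve_left(g1):
    """Unknowns at nodes 1..m; Robin row (du/dn + alpha*u = g1) at node m."""
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for j in range(m - 1):               # interior rows, nodes 1..m-1
        A[j][j] = 2.0
        if j > 0:
            A[j][j - 1] = -1.0
        A[j][j + 1] = -1.0
        b[j] = h * h
    A[m - 1][m - 1] = 1.0 / h + alpha    # one-sided Robin row at node m
    A[m - 1][m - 2] = -1.0 / h
    b[m - 1] = g1
    return gauss_solve(A, b)

def solve_right(g2):
    """Unknowns at nodes m..N-1; Robin row (-du/dn + alpha*u = g2) at node m."""
    sz = N - m
    A = [[0.0] * sz for _ in range(sz)]
    b = [0.0] * sz
    A[0][0] = 1.0 / h + alpha
    A[0][1] = -1.0 / h
    b[0] = g2
    for j in range(1, sz):               # interior rows, nodes m+1..N-1
        A[j][j] = 2.0
        A[j][j - 1] = -1.0
        if j < sz - 1:
            A[j][j + 1] = -1.0
        b[j] = h * h
    return gauss_solve(A, b)

u2 = [0.0] * (N - m)
for _ in range(200):
    # Robin data computed from the neighbour's previous iterate
    g1 = (u2[1] - u2[0]) / h + alpha * u2[0]
    u1 = solve_left(g1)
    g2 = -(u1[m - 1] - u1[m - 2]) / h + alpha * u1[m - 1]
    u2 = solve_right(g2)

mismatch = abs(u1[m - 1] - u2[0])        # the two interface traces
interface_value = u2[0]                  # continuous exact value is 0.125
```

In this 1D model the continuous analysis suggests α = 2 is near-optimal, so the two subdomain traces agree to high precision after a handful of iterations; tuning α is exactly the heuristic optimization the abstract mentions. The first-order interface differences leave an O(h) consistency error at the interface.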
B-spline Collocation with Domain Decomposition Method
International Nuclear Information System (INIS)
Hidayat, M I P; Parman, S; Ariwahjoedi, B
2013-01-01
A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which simply reduces to a Bezier approximation of any degree p with C^0 continuity, requires B-spline bases of high order to achieve high accuracy. The need for high-order B-spline bases in the global method is more prominent in domains of large dimension, and the increased number of collocation points may also lead to ill-conditioning. In this study, an overlapping domain decomposition of multiplicative Schwarz type is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate its influence on the B-spline basis orders needed for a given accuracy. It is shown that the combined method produces higher accuracy with B-spline bases of much lower order than needed by the original global method. Hence, the approximation stability of the B-spline collocation method is also increased.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated, and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
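The Lloyd step that drives the CVT part of the method is simple to sketch. The toy version below is our illustration, not the paper's CVP algorithm: the unit square, the Monte Carlo samples standing in for computational elements, and the counts are all assumptions. It alternates nearest-generator assignment with centroid updates and records the monotonically non-increasing CVT energy.

```python
# Toy Lloyd iteration for a Centroidal Voronoi Tessellation of the unit
# square, with a fixed Monte Carlo sample cloud standing in for the
# computational elements to be partitioned. (Illustrative sketch only.)
import random

def cvt_energy(gens, samples):
    """CVT energy: sum of squared distances to the nearest generator."""
    e = 0.0
    for sx, sy in samples:
        e += min((sx - gx) ** 2 + (sy - gy) ** 2 for gx, gy in gens)
    return e

def lloyd_step(gens, samples):
    """Assign samples to nearest generator, then move generators to centroids."""
    acc = [[0.0, 0.0, 0] for _ in gens]
    for sx, sy in samples:
        k = min(range(len(gens)),
                key=lambda j: (sx - gens[j][0]) ** 2 + (sy - gens[j][1]) ** 2)
        acc[k][0] += sx
        acc[k][1] += sy
        acc[k][2] += 1
    # Keep a generator in place if its Voronoi cell captured no samples
    return [(a[0] / a[2], a[1] / a[2]) if a[2] else g
            for a, g in zip(acc, gens)]

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(2000)]
gens = [(random.random(), random.random()) for _ in range(8)]

energies = [cvt_energy(gens, samples)]
for _ in range(20):
    gens = lloyd_step(gens, samples)
    energies.append(cvt_energy(gens, samples))
```

With a fixed sample set, both half-steps of Lloyd's algorithm are non-increasing in the energy, which is the monotonicity property the abstract relies on; the paper's contribution is to couple this with a particle dynamics that also balances subdomain load.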
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d'Etudes des Reacteurs]; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)]
2005-07-01
This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh for describing a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
Parallel algorithms for nuclear reactor analysis via domain decomposition method
International Nuclear Information System (INIS)
Kim, Yong Hee
1995-02-01
In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides a convergence analysis and improvement of the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case, and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions, and that the theoretically best one is the mixed boundary conditions (MM conditions). It is also shown that there may exist a significant discrepancy between the continuous model analysis and the discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced and a convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and that any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
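The relaxation of pseudo-boundary conditions studied in the thesis can be imitated on a small 1D model. The sketch below is our toy, not the thesis code: the grid, the overlap, the Dirichlet-Dirichlet transmission, and the θ values are all assumptions. An alternating Schwarz iteration for -u'' = 1 applies a relaxation factor θ to the interface values; since the unrelaxed convergence rate is positive here, under-relaxation (θ = 0.5) is expected to slow convergence, consistent with the thesis' conclusion.

```python
# Toy alternating (multiplicative) Schwarz for -u'' = 1 on (0,1),
# u(0) = u(1) = 0, two overlapping subdomains, with relaxation factor
# theta applied to the Dirichlet pseudo-boundary values.
# theta = 1 recovers the plain Schwarz sweep.

N = 100
h = 1.0 / N

def local_solve(lo, hi, uL, uR):
    """Solve (-u[i-1] + 2u[i] - u[i+1]) = h^2 on nodes lo..hi with
    Dirichlet values uL at node lo-1, uR at node hi+1 (Thomas algorithm)."""
    n = hi - lo + 1
    rhs = [h * h] * n
    rhs[0] += uL
    rhs[-1] += uR
    c = [0.0] * n
    d = [0.0] * n
    c[0] = -0.5
    d[0] = rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def schwarz(theta, tol=1e-13, maxsweep=5000):
    a, b = 40, 60       # Omega1 = nodes 1..59, Omega2 = nodes 41..99
    g1 = 0.0            # pseudo-boundary value for Omega1 at node b
    g2 = 0.0            # pseudo-boundary value for Omega2 at node a
    for sweep in range(1, maxsweep + 1):
        u1 = local_solve(1, b - 1, 0.0, g1)
        g2_new = theta * u1[a - 1] + (1.0 - theta) * g2       # u1 at node a
        u2 = local_solve(a + 1, N - 1, g2_new, 0.0)
        g1_new = theta * u2[b - (a + 1)] + (1.0 - theta) * g1 # u2 at node b
        done = abs(g1_new - g1) + abs(g2_new - g2) < tol
        g1, g2 = g1_new, g2_new
        if done:
            break
    # Glue a global iterate; central differences reproduce the exact
    # solution u(x) = x(1-x)/2 with no truncation error here.
    err = 0.0
    for i in range(1, N):
        x = i * h
        ui = u1[i - 1] if i < 50 else u2[i - (a + 1)]
        err = max(err, abs(ui - 0.5 * x * (1.0 - x)))
    return sweep, err

sweeps_full, err_full = schwarz(1.0)
sweeps_half, err_half = schwarz(0.5)   # under-relaxed: slower in this regime
```

Because the right-hand side is constant, the discrete solution equals x(1-x)/2 exactly, so the glued iterate can be checked against it directly; counting sweeps for θ = 1 versus θ = 0.5 illustrates the deceleration predicted for a positive unrelaxed rate.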
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.
International Nuclear Information System (INIS)
Guerin, P.
2007-12-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and of the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
International Nuclear Information System (INIS)
Girardi, E.; Ruggieri, J.M.
2003-01-01
The aim of this paper is to present the latest developments made on a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, handled by two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented, with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinate Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation. The resulting algorithm is then provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
International Nuclear Information System (INIS)
Haeberlein, F.
2011-01-01
Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods, which have proved to be high-performing algorithms in many fields of application, is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well suited for use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two-species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method to the presented multi-species reactive transport system. (author)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
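The quasi-Newton ingredient can be illustrated independently of the domain decomposition machinery. The sketch below is a generic Broyden iteration on a hypothetical 2x2 nonlinear system, not the authors' elastic-plastic solver: it forms one finite-difference Jacobian and then relies on rank-one updates, avoiding the repeated Jacobian assembly and factorization that drives the cost of a Newton-Raphson double loop.

```python
# Toy Broyden (quasi-Newton) iteration on a small nonlinear system.
# The system F is a hypothetical stand-in for a discretized residual,
# chosen only so the example is self-contained.

def F(x):
    """Hypothetical residual with a regular root at (0, 3)."""
    return [x[0] + x[1] - 3.0, x[0] ** 2 + x[1] ** 2 - 9.0]

def solve2(B, r):
    """Solve the 2x2 system B d = r by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(r[0] * B[1][1] - r[1] * B[0][1]) / det,
            (B[0][0] * r[1] - B[1][0] * r[0]) / det]

def broyden(x, maxit=50, tol=1e-10):
    """Broyden's method: one finite-difference Jacobian, then rank-one
    updates B += (y - B s) s^T / (s.s) instead of re-forming B."""
    eps = 1e-6
    f = F(x)
    B = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):                       # initial FD Jacobian (once)
        xp = x[:]
        xp[j] += eps
        fp = F(xp)
        for i in range(2):
            B[i][j] = (fp[i] - f[i]) / eps
    for _ in range(maxit):
        if max(abs(v) for v in f) < tol:
            break
        d = solve2(B, [-f[0], -f[1]])        # quasi-Newton step
        xn = [x[0] + d[0], x[1] + d[1]]
        fn = F(xn)
        y = [fn[0] - f[0], fn[1] - f[1]]
        Bs = [B[0][0] * d[0] + B[0][1] * d[1],
              B[1][0] * d[0] + B[1][1] * d[1]]
        ss = d[0] * d[0] + d[1] * d[1]
        for i in range(2):                   # Broyden rank-one update
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * d[j] / ss
        x, f = xn, fn
    return x, max(abs(v) for v in f)

root, res = broyden([0.1, 2.9])
```

In a domain decomposition setting the same idea is applied to the interface or global residual, so each quasi-Newton step reuses the preconditioned solver instead of nesting a full linear solve inside every Newton iteration.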
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains, but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and the uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov-type algorithm (BiCGSTAB). We discretize the equations with a finite volume scheme and perform extensive numerical tests to compare the different methods. (author)
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator by subdomain node groups and then block-eliminate the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
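The block elimination described in the abstract can be sketched on a toy problem. The following is a generic illustration (not the paper's wavelet-based treatment): a 1D Poisson matrix is ordered into two subdomain node groups plus one interface node, and the interior unknowns are block-eliminated to expose the Schur complement on the interface.

```python
import numpy as np

def laplacian_1d(n):
    """Standard second-order finite-difference Laplacian on n interior nodes."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 7                      # interior nodes; node 3 is the single interface node
A = laplacian_1d(n)
sub = [0, 1, 2, 4, 5, 6]   # subdomain node groups (two blocks of three nodes)
itf = [3]                  # interface node

A_II = A[np.ix_(sub, sub)]  # subdomain-subdomain block (block diagonal here)
A_IG = A[np.ix_(sub, itf)]  # coupling blocks
A_GI = A[np.ix_(itf, sub)]
A_GG = A[np.ix_(itf, itf)]

# Schur complement: a dense matrix acting only on the interface unknowns
S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)

# Solving the interface problem, then back-substituting into the subdomains,
# reproduces the monolithic solution of A u = f.
f = np.ones(n)
u_itf = np.linalg.solve(S, f[itf] - A_GI @ np.linalg.solve(A_II, f[sub]))
u_sub = np.linalg.solve(A_II, f[sub] - A_IG @ u_itf)
u = np.empty(n)
u[sub] = u_sub
u[itf] = u_itf
assert np.allclose(A @ u, f)
```

With more subdomains the same `np.ix_` bookkeeping applies; `A_II` stays block diagonal, which is what makes the subdomain solves independent.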
An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods
International Nuclear Information System (INIS)
Zhang Hongbo; Wu Hongchun; Cao Liangzhi
2011-01-01
Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strong scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate AutoMOC, a 2D generalized geometry characteristics solver. In this technique, a linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is shown to be reliable in theory and simple to implement. Furthermore, since it introduces no restriction on the geometry treatment, it is suitable for accelerating an arbitrary geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES; the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are treated adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains on demand conveniently. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering problems.
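The reformulation the abstract describes, replacing a fixed-point inner iteration x ← Mx + q by the linear system (I - M)x = q handed to a Krylov solver, can be sketched generically. The matrix below is a stand-in contraction, not an actual MOC sweep operator, and the GMRES routine is a minimal textbook implementation (Arnoldi plus a small least-squares solve):

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """Minimal unrestarted GMRES with x0 = 0 (illustrative, not optimized)."""
    n = len(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):                # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        # minimize || beta e1 - H y || over the Krylov subspace
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(A @ x - b) < tol * beta:
            return x
    return x

# Stand-in for the sweep: fixed point x <- M x + q, i.e. (I - M) x = q
rng = np.random.default_rng(0)
n = 60
M = rng.standard_normal((n, n))
M *= 0.9 / np.linalg.norm(M, 2)               # make the "sweep" a contraction
q = rng.standard_normal(n)

x = gmres(np.eye(n) - M, q)
assert np.linalg.norm((np.eye(n) - M) @ x - q) < 1e-8
```

The point of the reformulation is that GMRES converges in far fewer matrix applications than the plain fixed-point iteration when the contraction factor is close to one.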
A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).
Chen, Yuehua; Jin, Guoyong; Liu, Zhigang
2017-05-01
This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit the resources of future massively parallel supercomputers are also discussed.
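A two-dimensional domain decomposition of the kind the abstract adopts can be sketched as a block partition of a lat-lon grid over a logical processor mesh (the grid and mesh sizes below are illustrative, not those of the IAP model):

```python
def block_range(n, p, rank):
    """Contiguous 1-D block of n points for one of p ranks, spreading the remainder."""
    base, rem = divmod(n, p)
    lo = rank * base + min(rank, rem)
    return lo, lo + base + (1 if rank < rem else 0)

def partition_2d(nlat, nlon, prows, pcols):
    """Assign each (row, col) processor its rectangular tile of the grid."""
    tiles = {}
    for r in range(prows):
        for c in range(pcols):
            i0, i1 = block_range(nlat, prows, r)
            j0, j1 = block_range(nlon, pcols, c)
            tiles[(r, c)] = (i0, i1, j0, j1)
    return tiles

tiles = partition_2d(100, 144, 4, 8)   # hypothetical 100 x 144 grid on 32 ranks
# The tiles cover the grid exactly once
assert sum((i1 - i0) * (j1 - j0) for i0, i1, j0, j1 in tiles.values()) == 100 * 144
```

In an actual implementation each rank would also hold halo (ghost) rows and columns exchanged with its four neighbors every time step.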
International Nuclear Information System (INIS)
Fischer, J.W.; Azmy, Y.Y.
2003-01-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes (native MPI, the bucket algorithm, and the distributed bucket algorithm) are included in the validation exercise, which is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component, which typically increases with P in a manner highly dependent on the global reduction algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
International Nuclear Information System (INIS)
Berthe, P.M.
2013-01-01
In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection-diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi-domain scheme is equivalent to the mono-domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi-domain scheme is shown. The latter is solved by an OSWR algorithm, the convergence of which is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test-cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author)
Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner
International Nuclear Information System (INIS)
Subber, Waad; Sarkar, Abhijit
2012-01-01
For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
Two-phase flow steam generator simulations on parallel computers using domain decomposition method
International Nuclear Information System (INIS)
Belliard, M.
2003-01-01
Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
A non overlapping parallel domain decomposition method applied to the simplified transport equations
International Nuclear Information System (INIS)
Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.
2009-01-01
A reactivity computation requires computing the largest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires little development effort, as the inner multigroup solver can be re-used without modification, and it allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained with a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
Domain decomposition method for dynamic faulting under slip-dependent friction
International Nuclear Information System (INIS)
Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie
2004-01-01
The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we obtain an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. The finite element form of the variational inequality is solved by a Schwarz method, assuming that the inner nodes of the domain lie in one subdomain and the nodes on the faults lie in the other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)
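The Schwarz iteration used above can be illustrated on an unconstrained model problem. This is a hedged sketch of the classical alternating Schwarz method for 1D Poisson with two overlapping subdomains, not the authors' variational-inequality solver for faults:

```python
import numpy as np

# Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, by alternating Schwarz:
# each sweep solves one subdomain exactly, taking Dirichlet data at its
# inner boundary from the latest iterate on the other subdomain.
n = 41                        # grid points including the boundaries
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)

left = slice(1, 26)           # interior nodes of subdomain 1
right = slice(15, 40)         # interior nodes of subdomain 2 (overlap 15..25)

def subdomain_solve(u, idx):
    """Exact solve on one subdomain with Dirichlet data from the current iterate."""
    i0, i1 = idx.start, idx.stop
    m = i1 - i0
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f[i0:i1].copy()
    rhs[0] += u[i0 - 1] / h**2
    rhs[-1] += u[i1] / h**2
    u[i0:i1] = np.linalg.solve(A, rhs)

for _ in range(30):           # geometric convergence, rate set by the overlap
    subdomain_solve(u, left)
    subdomain_solve(u, right)

x = np.linspace(0, 1, n)
exact = 0.5 * x * (1 - x)     # the FD scheme is exact for this quadratic
assert np.max(np.abs(u - exact)) < 1e-8
```

The convergence rate improves with the overlap width, which is the classical trade-off between iteration count and per-iteration work.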
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS, with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and of the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
Energy Technology Data Exchange (ETDEWEB)
Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and of the mesh step size.
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of the subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
International Nuclear Information System (INIS)
Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which
Energy Technology Data Exchange (ETDEWEB)
El-Sayed, A.M.A. [Faculty of Science University of Alexandria (Egypt)]. E-mail: amasyed@hotmail.com; Gaber, M. [Faculty of Education Al-Arish, Suez Canal University (Egypt)]. E-mail: mghf408@hotmail.com
2006-11-20
The Adomian decomposition method has been successfully used to find explicit and numerical solutions of time fractional partial differential equations. Different examples of special interest, with fractional time and space derivatives of order α, 0 < α ≤ 1, are considered and solved by means of the Adomian decomposition method. The behaviour of the Adomian solutions and the effects of different values of α are shown graphically for some examples.
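For a linear problem the Adomian recursion reduces to repeated integration of the previous term. A minimal sketch (an ordinary, non-fractional example, not one of the paper's fractional PDEs): for u'(t) = u(t), u(0) = 1, taking u_0 = u(0) and u_{n+1}(t) = \int_0^t u_n(s) ds reproduces, term by term, the Taylor series of exp(t).

```python
import math

def adomian_terms(t, n_terms):
    """Decomposition terms u_n(t) for u' = u, u(0) = 1; here u_n = t^n / n!."""
    terms = [1.0]                         # u_0 = initial condition
    for n in range(1, n_terms):
        terms.append(terms[-1] * t / n)   # integrating t^(n-1)/(n-1)! gives t^n/n!
    return terms

t = 0.5
approx = sum(adomian_terms(t, 15))        # partial sum of the decomposition series
assert abs(approx - math.exp(t)) < 1e-12
```

For nonlinear terms the same recursion uses Adomian polynomials in place of the plain integrand, which is where the method's real machinery lies.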
A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Bo-Ao Xu
2016-01-01
This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and to speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of calculation. With this approach, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2010-01-01
Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com
Multilevel domain decomposition for electronic structure calculations
International Nuclear Information System (INIS)
Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.
2007-01-01
We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure
Spatial domain decomposition for neutron transport problems
International Nuclear Information System (INIS)
Yavuz, M.; Larsen, E.W.
1989-01-01
A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)
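The convergence behavior that motivates pairing SI with DSA can be seen already in a 0-D (infinite-medium) model, where source iteration is the scalar fixed point φ ← cφ + q with scattering ratio c (a generic sketch, not the discrete ordinates setting of the paper):

```python
def source_iteration(c, q=1.0, tol=1e-10):
    """Fixed-point iteration phi <- c*phi + q; converges to q/(1-c) at rate c."""
    phi, its = 0.0, 0
    while abs((c * phi + q) - phi) > tol:
        phi = c * phi + q
        its += 1
    return phi, its

phi_lo, it_lo = source_iteration(0.5)     # weak scattering: fast convergence
phi_hi, it_hi = source_iteration(0.99)    # near-critical scattering: slow

assert abs(phi_lo - 2.0) < 1e-8           # fixed point q/(1-c) = 2 for c = 0.5
assert it_hi > it_lo                      # iteration count blows up as c -> 1
```

The geometric rate c is why DSA-type acceleration matters for diffusive problems, and why the abstract checks that spatial decomposition does not degrade these rates until subdomains become optically thin.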
International Nuclear Information System (INIS)
Chiba, Gou; Tsuji, Masashi; Shimazu, Yoichiro
2001-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM), developed to solve a two-dimensional neutron diffusion equation, has been modified to deal with three-dimensional problems. In the HDD-BEM, the domain is decomposed into homogeneous regions. The boundary conditions on the common inner boundaries between decomposed regions and the neutron multiplication factor are initially assumed. With these assumptions, the neutron diffusion equations defined in the decomposed homogeneous regions can be solved respectively by applying the boundary element method. This part corresponds to the 'lower level' calculations. At the 'higher level' calculations, the assumed values, the inner boundary conditions and the neutron multiplication factor, are modified so as to satisfy the continuity conditions for the neutron flux and the neutron currents on the inner boundaries. These procedures of the lower and higher levels are executed alternately and iteratively until the continuity conditions are satisfied within a convergence tolerance. With the hierarchical domain decomposition, it is possible to deal with problems comprising a large number of regions, something that has been difficult with the conventional BEM. In this paper, it is shown that a three-dimensional problem even with 722 regions can be solved with fine accuracy and an acceptable computation time. (author)
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
International Nuclear Information System (INIS)
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-01-01
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization, owing to the global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations and the finite speed of light to limit the communication of data to guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Energy Technology Data Exchange (ETDEWEB)
Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
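The inexact-Newton idea can be sketched on a tiny nonlinear system: each Newton step J dx = -F is solved only approximately by an inner stationary iteration (plain Jacobi here, standing in for the overlapping Schwarz preconditioner; the system and tolerances are illustrative):

```python
import numpy as np

def F(x):
    """Toy nonlinear system with solution (1, 2)."""
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):
    """Analytic Jacobian of F."""
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

x = np.array([1.0, 1.0])
for _ in range(50):                       # outer (inexact) Newton iterations
    A = J(x)
    D = np.diag(A)                        # Jacobi splitting of the Jacobian
    r = -F(x)
    dx = np.zeros(2)
    for _ in range(20):                   # inner inexact solve of J dx = -F
        dx = (r - (A @ dx - D * dx)) / D
    x = x + dx

assert np.allclose(F(x), 0.0, atol=1e-6)
```

In the Schwarz setting the inner solve is the (additive or multiplicative) subdomain sweep; the analysis cited above is about how loose that inner solve can be while preserving Newton's mesh-independent convergence.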
Domain decomposition multigrid for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-01-01
A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.
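A minimal sketch of the two-level idea, with exact local solves on overlapping blocks plus a Galerkin coarse operator A_c = R0^T A R0 built from hat functions at the subdomain vertices (all sizes and the coarse space are assumptions for illustration, not the paper's construction):

```python
import numpy as np

n = 63
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Four overlapping subdomains of 19 points each, starting every 16 points.
blocks = [np.arange(s, min(s + 19, n)) for s in (0, 16, 32, 48)]

# Coarse space: hat functions centered at the interior subdomain vertices.
verts = np.array([16.0, 32.0, 48.0])
nodes = np.arange(1, n + 1, dtype=float)            # node index = x_i / h
R0 = np.maximum(0.0, 1.0 - np.abs(nodes[:, None] - verts[None, :]) / 16.0)

def make_prec(with_coarse):
    Minv = np.zeros((n, n))
    for idx in blocks:                              # exact local solves
        E = np.eye(n)[:, idx]
        Minv += E @ np.linalg.inv(A[np.ix_(idx, idx)]) @ E.T
    if with_coarse:                                 # Galerkin coarse correction
        Ac = R0.T @ A @ R0
        Minv += R0 @ np.linalg.inv(Ac) @ R0.T
    return Minv

def cond(Minv):
    ev = np.sort(np.real(np.linalg.eigvals(Minv @ A)))
    return ev[-1] / ev[0]

print(cond(make_prec(True)) < cond(make_prec(False)))
```

The coarse correction removes the slowly varying error components that purely local solves handle poorly, so the preconditioned condition number drops.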
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems involving large-scale data have recently been gaining attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
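The inner Split Bregman strategy mentioned above can be sketched on plain 1D TV denoising (this is the generic Goldstein-Osher iteration, not the paper's BOS-SB domain decomposition algorithm; the parameters mu and lam are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, lam = 200, 8.0, 4.0
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)    # piecewise-constant signal
f = truth + 0.1 * rng.standard_normal(n)             # noisy observation

# Split Bregman for min_u mu/2 ||u - f||^2 + ||Du||_1.
D = np.diff(np.eye(n), axis=0)                       # forward difference matrix
Q = mu * np.eye(n) + lam * D.T @ D                   # SPD system for the u-update
d = np.zeros(n - 1)
b = np.zeros(n - 1)
u = f.copy()
for _ in range(100):
    u = np.linalg.solve(Q, mu * f + lam * D.T @ (d - b))
    t = D @ u + b
    d = np.sign(t) * np.maximum(np.abs(t) - 1.0 / lam, 0.0)  # soft shrinkage
    b = t - d                                        # Bregman variable update

print(np.linalg.norm(u - truth) < np.linalg.norm(f - truth))  # → True
```

The splitting turns the non-smooth TV term into a cheap pointwise shrinkage, leaving only a well-conditioned linear solve per iteration.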
Energy Technology Data Exchange (ETDEWEB)
Girardi, E
2004-12-15
A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together for a whole-core calculation: a variational nodal method, a discrete ordinate nodal method and a method of characteristics. These new developments allow the use of independent spatial and angular expansions, and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a flexibility of modeling which is not available in today's codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method brings higher accuracy along with an important reduction of the computing time.
International Nuclear Information System (INIS)
Ogino, Masao
2016-01-01
Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, the iterative methods for linear finite element equations suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research addresses this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve convergence of the balancing preconditioner for multiple materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
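A forward block Gauss-Seidel sweep on a block tridiagonal system looks as follows. The small, diagonally dominant random blocks are an assumption made so that plain GS converges here, unlike the optimal-control systems above, where GS serves only as a Krylov preconditioner:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 8, 2                          # N diagonal blocks of size m x m
Dg = [5.0 * np.eye(m) + 0.1 * rng.standard_normal((m, m)) for _ in range(N)]
Lo = [0.3 * rng.standard_normal((m, m)) for _ in range(N - 1)]  # sub-diagonal
Up = [0.3 * rng.standard_normal((m, m)) for _ in range(N - 1)]  # super-diagonal

# Assemble the full block tridiagonal matrix once, for reference.
A = np.zeros((N * m, N * m))
for i in range(N):
    A[i*m:(i+1)*m, i*m:(i+1)*m] = Dg[i]
    if i > 0:
        A[i*m:(i+1)*m, (i-1)*m:i*m] = Lo[i-1]
        A[(i-1)*m:i*m, i*m:(i+1)*m] = Up[i-1]
b = rng.standard_normal(N * m)

x = np.zeros(N * m)
for _ in range(80):                  # block GS iterations
    for i in range(N):               # forward sweep, block by block
        r = b[i*m:(i+1)*m].copy()
        if i > 0:
            r -= Lo[i-1] @ x[(i-1)*m:i*m]    # already-updated neighbor
        if i < N - 1:
            r -= Up[i] @ x[(i+1)*m:(i+2)*m]  # old value from previous sweep
        x[i*m:(i+1)*m] = np.linalg.solve(Dg[i], r)

print(np.linalg.norm(A @ x - b) < 1e-10)  # → True
```

In the time-domain decomposition setting, each diagonal block solve corresponds to a subinterval optimal control problem, and one forward sweep is exactly the preconditioner application.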
Energy Technology Data Exchange (ETDEWEB)
Stathopoulos, A.; Fischer, C.F. [Vanderbilt Univ., Nashville, TN (United States); Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem, Ax = λx, is central to many scientific applications. Among the many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) methods are the most widely used. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute, but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual, (M − λI)⁻¹(A − λI)x, is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.
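A textbook Lanczos sketch (with full reorthogonalization for numerical safety; the random symmetric test matrix and the step count are assumptions) shows how the extreme Ritz values converge after relatively few iterations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 100, 40
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                   # symmetric test matrix

q = rng.standard_normal(n)
Q = [q / np.linalg.norm(q)]
alpha, beta = [], []
for j in range(steps):
    w = A @ Q[-1]
    if j > 0:
        w -= beta[-1] * Q[-2]       # three-term recurrence
    alpha.append(Q[-1] @ w)
    w -= alpha[-1] * Q[-1]
    Qm = np.array(Q)
    w -= Qm.T @ (Qm @ w)            # full reorthogonalization
    beta.append(np.linalg.norm(w))
    Q.append(w / beta[-1])

# Ritz values are eigenvalues of the projected tridiagonal matrix.
T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
ritz = np.linalg.eigvalsh(T)
exact = np.linalg.eigvalsh(A)
print(abs(ritz[-1] - exact[-1]) < 1e-6)   # extreme Ritz values converge first
```

The GD method replaces the fixed three-term recurrence with a preconditioned residual expansion of the search space, trading a cheaper iteration for faster and more robust convergence.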
Antonietti, P. F.; Ayuso Dios, Blanca; Bertoluzza, S.; Pennacchio, M.
2014-01-01
We propose and study an iterative substructuring method for an h-p Nitsche-type discretization, following the original approach introduced in Bramble et al. Math. Comp. 47(175):103–134, (1986) for conforming methods. We prove quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioner. Numerical experiments assess the performance of the preconditioner and verify the theory. © 2014, Springer-Verlag Italia.
Antonietti, P. F.
2014-05-13
We propose and study an iterative substructuring method for an h-p Nitsche-type discretization, following the original approach introduced in Bramble et al. Math. Comp. 47(175):103–134, (1986) for conforming methods. We prove quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioner. Numerical experiments assess the performance of the preconditioner and verify the theory. © 2014, Springer-Verlag Italia.
Directory of Open Access Journals (Sweden)
Daniel Marcsa
2015-01-01
The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, the external circuit and the rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed concerning their computational efficiency and the number of iterations required with the different preconditioners. Simulation results for a rotating machine are also presented.
Schultz, A.
2010-12-01
We describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov space sparse solvers as currently applied to the whole domain.
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
Domain decomposition and multilevel integration for fermions
International Nuclear Information System (INIS)
Ce, Marco; Giusti, Leonardo; Schaefer, Stefan
2016-01-01
The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximated domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.
Directory of Open Access Journals (Sweden)
Jesús García
2012-01-01
The application of a 3D domain decomposition finite-element and spherical mode expansion method to the design of planar ESPAR (electronically steerable passive array radiator) antennas made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure was first validated by analyzing an array of monopoles on a ground plane, and then applied to synthesize a prescribed radiation pattern by optimizing the reactive loads connected to the feeding ports of the array of circular patches by means of a genetic algorithm.
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into sub-domains that are dealt with separately, and rules are established to ensure that the combined results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced.
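The tally data partition can be sketched with a simple owner map from cell index to process; the names, the block distribution and the serial "reduction" below are illustrative stand-ins for RMC's internals and MPI, not its actual API:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_procs, n_scores = 1000, 4, 20000
size = n_cells // n_procs          # cells owned by each process

# Simulated scoring events: (cell, value) pairs from particle histories.
cells = rng.integers(0, n_cells, n_scores)
vals = rng.random(n_scores)

# Serial reference tally.
serial = np.zeros(n_cells)
np.add.at(serial, cells, vals)

# Decomposed tallies: each "process" stores only its own slice.
local = [np.zeros(size) for _ in range(n_procs)]
for c, v in zip(cells, vals):
    p = c // size                  # owner map: contiguous block distribution
    local[p][c - p * size] += v    # score lands on the owning process only
combined = np.concatenate(local)   # stands in for an MPI gather at the end

print(np.allclose(combined, serial))  # → True
```

Because each process stores only 1/n_procs of the tally mesh, the dominant memory cost scales down with the number of processes, at the price of routing each score to its owner.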
International Nuclear Information System (INIS)
Odry, Nans
2016-01-01
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core calculations have no choice but to rely on simplifying assumptions (assembly-scale then core-scale steps, etc.). The PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacities (HPC), nowadays one can solve 3D core-sized problems using both high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations with deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k_eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performance of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
International Nuclear Information System (INIS)
Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors
International Nuclear Information System (INIS)
Umegaki, Kikuo; Miki, Kazuyoshi
1990-01-01
A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. The modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in semicircularly curved, 90° elbow and T-shaped branched pipes. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10-1/20 of that with a scalar processor. (author)
Malagón-Romero, A.; Luque, A.
2018-04-01
At high pressure, electric discharges typically grow as thin, elongated filaments. In a numerical simulation this large aspect ratio should ideally translate into a narrow, cylindrical computational domain that envelops the discharge as closely as possible. However, the development of the discharge is driven by electrostatic interactions and, if the computational domain is not wide enough, the boundary conditions imposed on the electrostatic potential at the external boundary have a strong effect on the discharge. Most numerical codes circumvent this problem either by using a wide computational domain or by calculating the boundary conditions by integrating the Green's function of an infinite domain. Here we describe an accurate and efficient method to impose free boundary conditions in the radial direction for an elongated electric discharge. To facilitate the use of our method we provide a sample implementation. Finally, we apply the method to solve Poisson's equation in cylindrical coordinates with free boundary conditions in both radial and longitudinal directions. This case is of particular interest for the initial stages of discharges in long gaps or natural discharges in the atmosphere, where it is not practical to extend the simulation volume to be bounded by two electrodes.
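The effect of free boundary conditions can be illustrated in 1D, where the free-space Green's function of φ'' = −ρ is G(x) = −|x|/2, so the potential is obtained by convolving the source with G rather than by imposing artificial values on a finite box (a generic illustration of the Green's-function route mentioned above, not the paper's cylindrical-coordinates method):

```python
import numpy as np

n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
rho = np.exp(-x**2)                       # localized source

# Direct quadrature of phi(x) = integral of -|x - x'|/2 * rho(x') dx'.
phi = -0.5 * np.abs(x[:, None] - x[None, :]) @ rho * dx

# Interior check: the discrete second derivative reproduces -rho.
lap = (phi[:-2] - 2 * phi[1:-1] + phi[2:]) / dx**2
print(np.max(np.abs(lap + rho[1:-1])) < 1e-8)  # → True
```

On a uniform grid the kernel −|x|/2 is piecewise linear, so its discrete second difference is exactly a discrete delta and the identity holds to rounding error; no boundary values ever need to be prescribed.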
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce good accuracy, but which is too computationally expensive for direct methods (e.g. finite differences). This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of the efficiency and accuracy of these methods.
Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems
Directory of Open Access Journals (Sweden)
Pierre Jolivet
2014-01-01
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, for instance in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it is shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
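A one-level overlapping additive Schwarz preconditioner inside conjugate gradients can be sketched in a few lines (a small dense illustration with arbitrary block sizes; production implementations use sparse local solves, a coarse level and distributed memory):

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)

# Overlapping blocks of 60 points, stride 50 (overlap of 10 points).
blocks = [np.arange(s, min(s + 60, n)) for s in range(0, n, 50)]
loc = [np.linalg.inv(A[np.ix_(i, i)]) for i in blocks]

def schwarz(r):
    """One-level additive Schwarz: sum of exact local solves."""
    z = np.zeros_like(r)
    for i, Ainv in zip(blocks, loc):
        z[i] += Ainv @ r[i]
    return z

def pcg(prec, tol=1e-10, maxit=500):
    x, r = np.zeros(n), b.copy()
    z = prec(r); p = z.copy(); rz = r @ z; it = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and it < maxit:
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = prec(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
        it += 1
    return it

its_schwarz = pcg(schwarz)
its_plain = pcg(lambda r: r)       # unpreconditioned CG for comparison
print(its_schwarz < its_plain)     # → True
```

Each preconditioner application is embarrassingly parallel across blocks; scalability in the number of subdomains is what the coarse (multilevel) component discussed in the paper then provides.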
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
International Nuclear Information System (INIS)
Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel
2011-01-01
A new development of the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinate neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with Coarse-Mesh Finite Differences (CMFD). The latter is used for accelerating the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor will be presented to analyze the performance of the method. The results show that the application of CMFD to DD can be a good candidate for large 3D full-core parallel applications. (author)
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
A TFETI domain decomposition solver for elastoplastic problems
Czech Academy of Sciences Publication Activity Database
Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.
2014-01-01
Roč. 231, č. 1 (2014), s. 634-653 ISSN 0096-3003 Institutional support: RVO:68145535 Keywords : elastoplasticity * Total FETI domain decomposition method * Finite element method * Semismooth Newton method Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014 http://ac.els-cdn.com/S0096300314000253/1-s2.0-S0096300314000253-main.pdf?_tid=33a29cf4-996a-11e3-8c5a-00000aacb360&acdnat=1392816896_4584697dc26cf934dcf590c63f0dbab7
Non-linear scalable TFETI domain decomposition based contact algorithm
Czech Academy of Sciences Publication Activity Database
Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.
2010-01-01
Roč. 10, č. 1 (2010), s. 1-10 ISSN 1757-8981. [World Congress on Computational Mechanics/9./. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf
International Nuclear Information System (INIS)
Tsuji, Masashi; Chiba, Gou
2000-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish high parallel efficiency on distributed memory message passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply as the number of processors increased. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to the decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computations of NDEs on distributed memory message passing parallel computers. (author)
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan; Kolmbauer, Michael; Langer, Ulrich
2010-01-01
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal of Scientific and Statistical Computing, Vol. 13, 1992, pp. ... AD-A256 575, NASA Contractor Report 189709, ICASE Report No. 92-44: Domain Decomposition: A Bridge between Nature and Parallel Computers. ... effectively implemented on distributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algorithm), a 103,201-unknown 2D elliptic
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...
Simulation of two-phase flows by domain decomposition
International Nuclear Information System (INIS)
Dao, T.H.
2013-01-01
This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. Firstly, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to the Newton method used to solve the nonlinear systems, our schemes are conservative. Unfortunately, the resolution of the nonlinear systems is very expensive, so it is essential to use an efficient algorithm to solve them. For large matrices, we often use iterative methods whose convergence depends on the spectrum. We studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU preconditioner, our strategy significantly reduced the number of GMRES iterations for the local systems and the computation time. We also show some satisfactory results for low-Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using the GMRES iterative solver rather than the Richardson method for the interface system also provides better performance compared to other methods. We can also decompose the computational domain into any number of sub-domains. Moreover, the Scaling strategy for the interface system improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with classical distributed computing, we have shown that our method is more robust and efficient. (author) [fr
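The effect of a "Scaling" step of this kind can be reproduced with a generic Ruiz-style two-sided equilibration (a minimal sketch on a synthetic matrix; this is a standard equilibration technique used for illustration, not necessarily the thesis' exact algorithm):

```python
import numpy as np

# Ruiz-style equilibration: repeatedly divide each row and column by the
# square root of its largest absolute entry. The test matrix is synthetic:
# a well-conditioned core wrapped in severe row/column mis-scaling.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned core
S = np.diag(10.0 ** rng.uniform(-4, 4, n))        # bad row/column scaling
B = S @ A @ S                                     # badly scaled system matrix

Bs = B.copy()
for _ in range(10):
    r = np.sqrt(np.abs(Bs).max(axis=1))           # row max magnitudes
    c = np.sqrt(np.abs(Bs).max(axis=0))           # column max magnitudes
    Bs = Bs / np.outer(r, c)                      # two-sided rescaling

print(np.linalg.cond(B), np.linalg.cond(Bs))      # condition number drops
```

A better-conditioned matrix typically translates into fewer Krylov (e.g. GMRES) iterations, which is the motivation for such a preprocessing step.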
International Nuclear Information System (INIS)
Tang Shaojie; Tang Xiangyang
2012-01-01
Purposes: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of “salt-and-pepper” noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the size of the overlap between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM: (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
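The classical SAM that this work generalizes is easy to reproduce on a 1D model problem (a minimal sketch; the grid size, subdomain split, and iteration count are illustrative assumptions, and only the Dirichlet-exchange variant is shown):

```python
import numpy as np

def solve_dirichlet(f, h, ua, ub):
    # Solve -u'' = f on one subdomain with Dirichlet end values ua, ub,
    # using second-order central finite differences (dense tridiagonal).
    n = len(f)                                    # number of interior nodes
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    b = h**2 * f
    b[0] += ua
    b[-1] += ub
    return np.linalg.solve(A, b)

def schwarz_alternating(n=101, overlap=10, iters=50):
    # Classical alternating Schwarz for -u'' = 1 on [0,1], u(0)=u(1)=0.
    # Two overlapping subdomains share `overlap` grid cells in the middle;
    # each solve takes its interface value from the other subdomain.
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)
    f = np.ones(n)
    mid = n // 2
    lo, hi = mid - overlap // 2, mid + overlap // 2
    for _ in range(iters):
        # Subdomain 1: nodes 0..hi, right Dirichlet data from subdomain 2
        u[1:hi] = solve_dirichlet(f[1:hi], h, 0.0, u[hi])
        # Subdomain 2: nodes lo..n-1, left Dirichlet data from subdomain 1
        u[lo + 1:n - 1] = solve_dirichlet(f[lo + 1:n - 1], h, u[lo], 0.0)
    return x, u

x, u = schwarz_alternating()
print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))    # max nodal error
```

Shrinking `overlap` slows the contraction markedly, which is exactly the sensitivity the abstract describes; the Robin-type interface condition ∂u/∂n + Λu is designed to remove that dependence.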
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
Zheng, Xiang; Yang, Chao; Cai, Xiaochuan; Keyes, David E.
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440×10^6 particles on 65,536 MPI tasks.
Parallel finite elements with domain decomposition and its pre-processing
International Nuclear Information System (INIS)
Yoshida, A.; Yagawa, G.; Hamada, S.
1993-01-01
This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such parallel computing environments, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)
Moussawi, Ali
2015-02-24
Summary: The post-treatment of 3D displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Energy Technology Data Exchange (ETDEWEB)
Salque, B
1998-07-01
This work deals with the radiosity equation, which describes the transport of light energy through a diffuse medium; solving it enables us to simulate the presence of light sources. The radiosity equation is an integral equation which admits a unique solution in realistic cases. The various solution methods are reviewed. The radiosity equation cannot be formulated as the integral form of a classical partial differential equation, but this work shows that the technique of domain decomposition can be successfully applied to it, provided the approach is guided by physical considerations. The method provides a system of independent equations, valid for each sub-domain, whose main parameter is luminance. Some numerical examples give an idea of the convergence of the algorithm. The method is applied to the optimization of the shape of a light reflector.
Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate treatment of the singularities requires the use of adapted meshes. After having shown the limits of classical meshes, we present the prospects offered by hybrid and flexible meshes. Next, we consider the possibilities of generalizing the numerical schemes traditionally used in reservoir simulation and identify two suitable approaches: mixed finite elements and U-finite volumes. The investigated phenomena being also characterized by different time scales, special treatments in terms of time discretization on various parts of the domain are required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows greater flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)
International Nuclear Information System (INIS)
Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D.; Mostacci, Domiziano
2011-01-01
This paper deals with finding accurate solutions for photon transport problems in highly heterogeneous media quickly, efficiently, and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performance is evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations that are relevant for remote sensing applications.
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Directory of Open Access Journals (Sweden)
Cécile Germain‐Renaud
1999-01-01
Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data‐parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data‐parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.
International Nuclear Information System (INIS)
Lenain, Roland
2015-01-01
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions to properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from color-sets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis intends to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on the performances. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization on a specific architecture) in order to achieve best estimate modeling with resources ranging from personal computers to the compute clusters available for engineers' analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi one. Each sub-domain is considered as a multi-group fixed-source problem with volume-sources (fission) and surface-sources (interface flux between the sub-domains). The multi-group problem is solved in each sub-domain and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to the one of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way an ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared memory parallelism, optimizes the resources by avoiding redundant copies of the data shared between the sub-domains. Distributed memory architectures are made available by a hybrid parallel method that combines both paradigms of shared memory parallelism and distributed memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
Information decomposition method to analyze symbolical sequences
International Nuclear Information System (INIS)
Korotkov, E.V.; Korotkova, M.A.; Kudryashov, N.A.
2003-01-01
The information decomposition (ID) method to analyze symbolical sequences is presented. This method allows us to reveal a latent periodicity of any symbolical sequence. The ID method is shown to have advantages in comparison with application of the Fourier transformation, the wavelet transform and the dynamic programming method to look for latent periodicity. Examples of the latent periods for poetic texts, DNA sequences and amino acids are presented. Possible origin of a latent periodicity for different symbolical sequences is discussed
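The idea of revealing latent periodicity with an information measure can be illustrated by a toy mutual-information scan over trial periods (a simplified stand-in for the ID method; the noisy period-3 sequence and the plug-in estimator below are our own illustrative assumptions):

```python
import numpy as np

def period_mi_scan(seq, max_period=10):
    # Mutual information I(symbol; position mod p) for each trial period p.
    # A latent period shows up as a sharp maximum over non-multiple periods.
    seq = np.asarray(list(seq))
    n = seq.size
    symbols = np.unique(seq)
    scores = {}
    for p in range(2, max_period + 1):
        r = np.arange(n) % p
        mi = 0.0
        for k in range(p):
            pk = np.mean(r == k)                      # P(position mod p = k)
            for s in symbols:
                ps = np.mean(seq == s)                # P(symbol = s)
                pks = np.mean((r == k) & (seq == s))  # joint probability
                if pks > 0:
                    mi += pks * np.log(pks / (pk * ps))
        scores[p] = mi
    return scores

# Planted period-3 pattern with 10% random substitutions
rng = np.random.default_rng(0)
seq = list("ABC" * 200)
for i in rng.choice(len(seq), size=60, replace=False):
    seq[i] = rng.choice(list("ABC"))
scores = period_mi_scan("".join(seq))
print(scores[3], scores[2], scores[4])   # MI at the planted period stands out
```

Multiples of the true period (6, 9, ...) score comparably high, so a practical scan reports the smallest period at the maximum.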
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Full Text Available Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
International Nuclear Information System (INIS)
Azmy, Y.Y.
1997-01-01
The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128 node Paragon XP/S 5 computer and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation process of the scalar flux from the angular flux and the convergence test; (3) a distributed memory version of the Bucket algorithm that pushes the spatial domain decomposition one step farther by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays. Otherwise, the third scheme becomes necessary at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.
Energy Technology Data Exchange (ETDEWEB)
Saas, L.
2004-05-01
This thesis deals with sedimentary basin modeling, whose goal is to predict, through geological time, the location and quantity of hydrocarbons present in the ground. Due to the natural and evolutionary decomposition of the sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate the flows of water and hydrocarbons in the ground. Conservation laws are used to model the flows and form coupled partial differential equations which must be discretized by a finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and prove that the associated global discretized problem is well posed. Then we give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (constant scheme and linear scheme). Then we present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example Robin conditions). Finally, we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified and analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
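The need to communicate particles between spatial domains can be illustrated with a toy 1D random-walk version of a domain-decomposed sweep (our own simplified sketch, not the RMC algorithm; the exchange here is synchronous, whereas the paper's algorithm is asynchronous):

```python
import random

def run_dd_random_walk(n_particles=500, seed=1):
    # Two "processors" own subdomains [0, 0.5) and [0.5, 1]. Particles that
    # step across the interface are placed in a per-domain outbox and handed
    # over in a communication phase, so tracking stays valid across domains.
    rng = random.Random(seed)
    domains = [[], [0.5] * n_particles]      # positions owned by each domain
    absorbed = 0
    while any(domains):
        outbox = [[], []]
        for d, particles in enumerate(domains):
            still_local = []
            for x in particles:
                x += rng.uniform(-0.1, 0.1)  # one transport "step"
                if x <= 0.0 or x >= 1.0:
                    absorbed += 1            # left the global domain
                elif (x < 0.5) != (d == 0):
                    outbox[1 - d].append(x)  # crossed the interface
                else:
                    still_local.append(x)
            domains[d] = still_local
        for d in (0, 1):                     # communication phase
            domains[d].extend(outbox[d])
    return absorbed

print(run_dd_random_walk())                  # every particle is accounted for
```

Conservation of the particle count across the exchanges is the basic correctness property such a communication scheme must preserve.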
An operational modal analysis method in frequency and spatial domain
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
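The core decomposition step, a singular value decomposition of the cross power spectral density matrix at every frequency line, can be sketched on synthetic two-channel data (the signal frequencies, amplitudes, and naive peak-picking below are illustrative assumptions; a real OMA workflow also extracts mode shapes and damping):

```python
import numpy as np
from scipy.signal import csd

# Hypothetical two-channel response with two well-separated "modes"
fs = 256.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
m1 = np.sin(2 * np.pi * 10.0 * t)             # mode at 10 Hz
m2 = np.sin(2 * np.pi * 25.0 * t + 0.7)       # mode at 25 Hz
y = np.vstack([1.0 * m1 + 0.3 * m2,
               0.5 * m1 - 0.8 * m2])
y += 0.1 * rng.standard_normal(y.shape)       # measurement noise

# Cross-PSD matrix G(f): one 2x2 Hermitian matrix per frequency line
f, _ = csd(y[0], y[0], fs=fs, nperseg=1024)
G = np.empty((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=1024)

# SVD at every line: the first singular value peaks at the modes, and the
# corresponding singular vector approximates the mode shape there.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
i1 = int(np.argmax(s1))
masked = s1.copy()
masked[max(0, i1 - 12):i1 + 13] = 0.0         # suppress lines near first peak
i2 = int(np.argmax(masked))
print(sorted([f[i1], f[i2]]))                 # near the two modal frequencies
```

The singular-value spectra play the role of the CMIF here; FSDD then refines the estimates by curve fitting an enhanced PSD, which is omitted in this sketch.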
Investigating hydrogel dosimeter decomposition by chemical methods
International Nuclear Information System (INIS)
Jordan, Kevin
2015-01-01
The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
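The red-black idea, coloring the spatial grid like a checkerboard so that all nodes of one color can be updated simultaneously, can be sketched on a model Poisson problem (a serial sketch of the parallel update pattern; the transport-sweep specifics of the paper are not reproduced, and the grid size and iteration count are illustrative):

```python
import numpy as np

def red_black_gauss_seidel(n=33, iters=2000):
    # -Δu = 1 on the unit square, u = 0 on the boundary. Every red node
    # (i+j even) depends only on black neighbors and vice versa, so each
    # color sweep is an independent, parallelizable batch of updates.
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))
    f = np.ones((n, n))
    I, J = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    interior = (I > 0) & (I < n - 1) & (J > 0) & (J < n - 1)
    for _ in range(iters):
        for color in (0, 1):
            mask = interior & ((I + J) % 2 == color)
            nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = 0.25 * (nbrs + h * h * f)[mask]
    return u

u = red_black_gauss_seidel()
print(u[16, 16])   # center value; the continuous solution gives ≈ 0.0737
```

Because the two colors decouple, each half-sweep can be distributed over processors with only a halo exchange in between, which is the property exploited by the spatial parallel algorithms above.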
Directory of Open Access Journals (Sweden)
Sheng-Ping Yan
2014-01-01
Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan; Efendiev, Yalchin
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
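The spectral-gap mechanism, a few asymptotically small eigenvalues whose count matches the number of weakly connected high-conductivity regions, can be mimicked on a toy weighted graph (a loose analogue using a normalized graph Laplacian, not the paper's coarse-block flow eigenproblem; the path-graph construction is our own assumption):

```python
import numpy as np

def normalized_laplacian_eigs(W):
    # Symmetric normalized graph Laplacian: I - D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    return np.sort(np.linalg.eigvalsh(L))

def path_weights(n, cluster_size, strong=1e6, weak=1.0):
    # Path graph of n nodes: consecutive edges get `strong` weight inside
    # clusters of size `cluster_size` (high-conductivity regions) and
    # `weak` weight between clusters (low-conductivity background).
    W = np.zeros((n, n))
    for i in range(n - 1):
        w = weak if (i + 1) % cluster_size == 0 else strong
        W[i, i + 1] = W[i + 1, i] = w
    return W

# Two high-conductivity clusters -> two asymptotically small eigenvalues,
# then a large spectral gap before the O(1) eigenvalues.
eigs = normalized_laplacian_eigs(path_weights(10, 5))
print(eigs[:4])
```

Spanning the coarse space with the eigenvectors below the gap, as the paper does with its small eigenvalues, is what makes the preconditioner robust to the contrast.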
Domain decomposition techniques for boundary elements application to fluid flow
Brebbia, C A; Skerget, L
2007-01-01
The sub-domain techniques in the BEM are nowadays finding their place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. We see their main application in conjunction with the classical single-domain BEM approach: part of the domain is solved with the classical single-domain BEM, and part with a domain approach, the BEM sub-domain technique. In the past this has usually been done by coupling the BEM with the FEM; however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single-domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
A handbook of decomposition methods in analytical chemistry
International Nuclear Information System (INIS)
Bok, R.
1984-01-01
Decomposition methods for metals, alloys, fluxes, slags, calcine, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials are described from the viewpoint of sample preparation for analysis. The methods are systematized according to the decomposition principle: thermal (with the use of electricity or irradiation) and dissolution, with and without chemical reactions. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references.
Domain decomposition parallel computing for transient two-phase flow of nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed Sparse Row (CSR) format is adopted, one of the standard ways to represent a sparse asymmetric matrix: only the non-zero values and their positions (row and column) are stored. By performing verification on the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse system, 320000 meshes for the mid-size system, and 2560000 meshes for the fine system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: diagonal and incomplete LU. To enhance parallel performance, hybrid OpenMP/MPI parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
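The CSR storage scheme mentioned in the abstract is easy to illustrate: only the non-zero values and their positions are kept, with a row-pointer array marking where each row starts. The matrix and names below are illustrative, not taken from CUPID.

```python
import numpy as np

# Compressed Sparse Row (CSR): store only non-zero values plus positions.
A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

def to_csr(A):
    # data: non-zero values; indices: their columns; indptr: row offsets.
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0.0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

def csr_matvec(data, indices, indptr, x):
    # Sparse matrix-vector product touching only stored entries.
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

data, indices, indptr = to_csr(A)
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, indices, indptr, x))  # matches A @ x
```

Production codes would of course use an optimized library (e.g. a sparse BLAS) rather than Python loops; the point is only the data layout.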
Modal Identification from Ambient Responses using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, ie. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
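The core of the frequency domain decomposition technique described in the entry above is a singular value decomposition of the output spectral density matrix at each frequency line: modal frequencies appear as peaks of the first singular value, and the corresponding singular vector estimates the mode shape. A minimal synthetic sketch (all signal parameters are invented for illustration):

```python
import numpy as np

# Two sensors observing one mode at f0 = 10 Hz with shape [1.0, 0.6].
rng = np.random.default_rng(0)
fs, f0 = 256.0, 10.0
t = np.arange(16 * 1024) / fs
mode = np.sin(2 * np.pi * f0 * t)
y = np.vstack([1.0 * mode, 0.6 * mode]) + 0.1 * rng.standard_normal((2, t.size))

# Averaged cross-spectral density matrix via segment FFTs (Welch-style).
nseg, nfft = 16, 1024
freqs = np.fft.rfftfreq(nfft, 1 / fs)
G = np.zeros((freqs.size, 2, 2), dtype=complex)
for s in range(nseg):
    Y = np.fft.rfft(y[:, s * nfft:(s + 1) * nfft], axis=1)
    G += np.einsum('if,jf->fij', Y, Y.conj())   # G[f] = Y(f) Y(f)^H summed

# SVD at each frequency line; peak of the first singular value = mode.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
               for k in range(freqs.size)])
k_peak = int(np.argmax(s1))
f_est = freqs[k_peak]
shape = np.linalg.svd(G[k_peak])[0][:, 0]
ratio = abs(shape[1] / shape[0])
print(f_est, ratio)  # close to 10.0 Hz and 0.6
```

Windowing, peak picking over multiple modes, and damping estimation (enhanced FDD) are refinements on top of this basic SVD-per-frequency step.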
Solution of the porous media equation by Adomian's decomposition method
International Nuclear Information System (INIS)
Pamuk, Serdal
2005-01-01
Particular exact solutions of the porous media equation, which frequently arises in nonlinear problems of heat and mass transfer and in biological systems, are obtained using Adomian's decomposition method. Numerical comparison of the particular solutions obtained by the decomposition method indicates very good agreement between the numerical solutions and the particular exact solutions in terms of efficiency and accuracy.
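Adomian's decomposition method expands the solution as a series u = Σ u_n and the nonlinearity as Adomian polynomials A_n, with each u_{n+1} obtained by integrating A_n. The toy example below applies it to u' = u², u(0) = 1 (exact solution 1/(1-t)), not to the porous media equation itself; for the quadratic nonlinearity the Adomian polynomials reduce to the Cauchy product A_n = Σ_{i+j=n} u_i u_j.

```python
# Adomian decomposition for u' = u^2, u(0) = 1 (exact solution 1/(1-t)).
# Polynomials in t are stored as coefficient lists [c0, c1, ...].
def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pint(a):  # integral from 0 to t
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def padd(a, b):
    out = [0.0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

N = 6
u = [[1.0]]                       # u_0 from the initial condition
for n in range(N - 1):
    # Adomian polynomial for the quadratic nonlinearity:
    # A_n = sum_{i+j=n} u_i u_j
    A_n = [0.0]
    for i in range(n + 1):
        A_n = padd(A_n, pmul(u[i], u[n - i]))
    u.append(pint(A_n))           # u_{n+1} = integral of A_n

series = [0.0]
for uk in u:
    series = padd(series, uk)
print(series)  # coefficients of 1 + t + t^2 + ...  (Taylor series of 1/(1-t))
```

The partial sums reproduce the Taylor expansion of the exact solution term by term, which is the sense in which the method delivers "particular exact solutions".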
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance, in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Energy Technology Data Exchange (ETDEWEB)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
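The Jacobian-free Newton-Krylov coupling at the heart of this approach is generic: Jacobian-vector products are approximated by finite differences of the residual, so the Krylov solver (here a minimal GMRES) needs only residual evaluations. The toy nonlinear diffusion system below is a stand-in for the paper's coupled stochastic model; all names, sizes and tolerances are illustrative.

```python
import numpy as np

def gmres(Av, b, m):
    # Minimal unrestarted GMRES: Arnoldi process + small least-squares solve.
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        w = Av(Q[:, j])
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * beta:      # Krylov space exhausted
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y

def jfnk(F, u0, newton_iters=10, m=30, eps=1e-7):
    # Newton outer loop; J(u) v is never formed, only approximated by a
    # finite difference of the residual F (the "Jacobian-free" part).
    u = u0.copy()
    for _ in range(newton_iters):
        r = F(u)
        if np.linalg.norm(r) < 1e-10:
            break
        Jv = lambda v: (F(u + eps * v) - r) / eps
        u = u + gmres(Jv, -r, m)
    return u

# Toy stand-in problem: 1D nonlinear diffusion  u'' - u^3 + 1 = 0.
n = 32
h = 1.0 / (n + 1)
A = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
b = np.ones(n)
F = lambda u: A @ u - u**3 + b

u = jfnk(F, np.zeros(n))
print(np.max(np.abs(F(u))))  # residual is tiny after a few Newton steps
```

The "explicit coupling" the abstract compares against corresponds to stopping after a single Newton iteration instead of iterating to convergence.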
International Nuclear Information System (INIS)
Zerr, R.J.; Azmy, Y.Y.
2010-01-01
A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
Exterior domain problems and decomposition of tensor fields in weighted Sobolev spaces
Schwarz, Günter
1996-01-01
The Hodge decomposition is a useful tool for tensor analysis on compact manifolds with boundary. This paper aims at generalising the decomposition to exterior domains G ⊂ ℝⁿ. Let L²_a Ω^k(G) be the space of weighted square integrable differential forms with weight function (1 + |x|²)^a, let d_a be the weighted perturbation of the exterior derivative and δ_a its adjoint. Then L²_a Ω^k(G) splits into the orthogonal sum of the subspaces of the d_a-exact forms with vanishi...
Multigrid and multilevel domain decomposition for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Chan, T.; Smith, B.
1994-12-31
Multigrid has proven itself to be a very versatile method for the iterative solution of linear and nonlinear systems of equations arising from the discretization of PDEs. In some applications, however, no natural multilevel structure of grids is available, and these must be generated as part of the solution procedure. In this presentation the authors will consider the problem of generating a multigrid algorithm when only a fine, unstructured grid is given. Their techniques generate a sequence of coarser grids by first forming an approximate maximal independent set of the vertices and then applying a Cavendish type algorithm to form the coarser triangulation. Numerical tests indicate that convergence using this approach can be as fast as standard multigrid on a structured mesh, at least in two dimensions.
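The coarsening step described above starts from a maximal independent set (MIS) of the fine-grid vertices: no two selected vertices are adjacent, and no further vertex can be added. A greedy sweep suffices; the graph below is illustrative.

```python
# Greedy maximal independent set on a graph given as an adjacency dict.
# The selected vertices would become the coarse-grid vertices.
def maximal_independent_set(adj):
    selected, excluded = set(), set()
    for v in sorted(adj):            # deterministic sweep order
        if v not in excluded:
            selected.add(v)
            excluded.update(adj[v])  # neighbors can no longer be chosen
    return selected

# 4-vertex path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
mis = maximal_independent_set(adj)
print(sorted(mis))  # [0, 2] -> the coarse-grid vertices
```

In the paper's setting the retained vertices are then re-triangulated (the Cavendish-type step) to produce the coarser mesh, and the process repeats to build the multilevel hierarchy.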
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Czech Academy of Sciences Publication Activity Database
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
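The exact elimination behind the 2 x 2 block partitioning described above is short to write down: with subdomain (I) and interface (G) unknowns separated, eliminating the subdomain block leaves the Schur complement S = A_GG - A_GI A_II⁻¹ A_IG on the interface. The dense random SPD example below is purely illustrative; the paper's contribution is the family of cheap approximations to S.

```python
import numpy as np

# Block elimination of subdomain (I) unknowns down to the interface (G).
rng = np.random.default_rng(1)
n_i, n_g = 6, 2
M = rng.standard_normal((n_i + n_g, n_i + n_g))
K = M @ M.T + (n_i + n_g) * np.eye(n_i + n_g)   # SPD test matrix

A_II, A_IG = K[:n_i, :n_i], K[:n_i, n_i:]
A_GI, A_GG = K[n_i:, :n_i], K[n_i:, n_i:]
S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)   # Schur complement

# Solve the interface system with S, then back-substitute for x_i.
b = np.ones(n_i + n_g)
x_g = np.linalg.solve(S, b[n_i:] - A_GI @ np.linalg.solve(A_II, b[:n_i]))
x_i = np.linalg.solve(A_II, b[:n_i] - A_IG @ x_g)
x = np.concatenate([x_i, x_g])
print(np.max(np.abs(K @ x - b)))   # agrees with the monolithic solve
```

In the preconditioning context, S (or an approximation of it) plays the role of the algebraic coarse space operator coupling the subdomains through the interface.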
Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition
International Nuclear Information System (INIS)
Clerc, S.
1998-01-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of the computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to computing low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady state problem. (author)
Efficient decomposition and linearization methods for the stochastic transportation problem
International Nuclear Information System (INIS)
Holmberg, K.
1993-01-01
The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
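Among the linearization techniques compared above, the Frank-Wolfe method is the simplest to sketch: each iteration minimizes the linearized objective over the feasible polytope (a linear program) and takes a convex-combination step toward that vertex. The toy problem below uses the probability simplex instead of a transportation polytope, with the standard 2/(k+2) step size; all parameters are illustrative.

```python
import numpy as np

def frank_wolfe(grad, x0, iters=2000):
    # Frank-Wolfe over the probability simplex: the linearized subproblem
    # min_v <grad, v> is solved by the vertex with the smallest gradient.
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        vertex = np.zeros_like(x)
        vertex[np.argmin(g)] = 1.0      # LP solution over the simplex
        gamma = 2.0 / (k + 2)           # standard diminishing step size
        x = (1 - gamma) * x + gamma * vertex
    return x

# Minimize f(x) = ||x - c||^2 over the simplex; c lies inside it, so the
# optimum is c itself.
c = np.array([0.2, 0.3, 0.5])
grad = lambda x: 2.0 * (x - c)
x = frank_wolfe(grad, np.ones(3) / 3)
print(x)  # close to [0.2, 0.3, 0.5]
```

For a transportation polytope the inner LP would be a (cheap) linear transportation problem rather than a simple argmin, which is exactly why Frank-Wolfe-type methods suit this problem class.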
Empirical projection-based basis-component decomposition method
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
International Nuclear Information System (INIS)
Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit
2017-01-01
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Energy Technology Data Exchange (ETDEWEB)
Clement, F.; Vodicka, A.; Weis, P. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Martin, V. [Institut National de Recherches Agronomiques (INRA), 92 - Chetenay Malabry (France); Di Cosmo, R. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Paris-7 Univ., 75 (France)
2003-07-01
We consider the application of a non-overlapping domain decomposition method with non-matching grids based on Robin interface conditions to the problem of flow surrounding an underground nuclear waste disposal. We show with a simple example how one can refine the mesh locally around the storage with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3l environment devoted to easing parallelism. In this way we test the code coupling and at the same time exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)
Method for improved decomposition of metal nitrate solutions
Haas, Paul A.; Stines, William B.
1983-10-11
A method for co-conversion of aqueous solutions of one or more heavy metal nitrates wherein thermal decomposition within a temperature range of about 300.degree. to 800.degree. C. is carried out in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal.
Development of decomposition method for chlorofluorocarbon (CFC) solvent by irradiation
International Nuclear Information System (INIS)
Shimokawa, Toshinari; Nakagawa, Seiko
1995-01-01
CFC is chemically and thermally stable and almost harmless to the human body; therefore, it has been used widely in various industries, in particular as a heat medium for air conditioners and a washing agent for semiconductors and printed circuit substrates. In 1974 it was pointed out that CFCs cause the breakdown of the ozone layer, and after the ozone hole was found it was decided to limit their use and to prohibit the production of specific CFCs. The development of a decomposition treatment technology for the CFCs now in use, one friendly to the global environment including mankind and the ozone layer, is strongly desired. Recently, the authors have examined a decomposition treatment method for specific CFC solvents by irradiation and obtained interesting results. CFC-113 was used for the experiment, and its chemical structure is shown. The experimental method is explained. As results, the effect of hydroxide ions, the decomposition products such as CFC-123, and the presumed mechanism of the chain dechlorination reaction of CFC-113 are reported. The irradiation decomposition method was compared with various other methods; its treatment cost is high. Future development is discussed. (K.I.)
Speckle imaging using the principal value decomposition method
International Nuclear Information System (INIS)
Sherman, J.W.
1978-01-01
Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an "optimal" filtering approach. This method is based on a nonlinear integral equation which is solved by principal value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures
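The regularizing idea behind a principal value decomposition solve of an ill-posed integral equation can be sketched with a truncated SVD: only the largest ("principal") singular values are inverted, so noise in the directions of the small singular values is discarded instead of amplified. The Gaussian blur operator, noise level and truncation rank below are all illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
i = np.arange(n)
H = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # blur operator

x_true = np.zeros(n)
x_true[15:25] = 1.0                              # object (a box)
y = H @ x_true + 1e-3 * rng.standard_normal(n)   # blurred, noisy image

U, s, Vt = np.linalg.svd(H)
def pvd_solve(k):
    # invert using only the k largest ("principal") singular values
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

x_reg = pvd_solve(12)   # truncated: noise in small-s directions discarded
x_all = pvd_solve(n)    # full inverse: noise amplified by 1/s_min
err_reg = np.linalg.norm(x_reg - x_true)
err_all = np.linalg.norm(x_all - x_true)
print(err_reg, err_all)  # regularized error is orders of magnitude smaller
```

Choosing the truncation rank is the "optimal filtering" question: too few principal values over-smooth, too many let the noise back in.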
International Nuclear Information System (INIS)
Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.
2009-01-01
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented
A novel method for EMG decomposition based on matched filters
Directory of Open Access Journals (Sweden)
Ailton Luiz Dias Siqueira Júnior
Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system's performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and a SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prostheses control and biofeedback systems.
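The matched-filter-plus-peak-detector front end of such a system can be sketched as correlating the signal with a MUAP template and thresholding the local maxima of the output. The template shape, sampling rate, noise level and threshold below are all invented for illustration and are not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 10_000                                  # assumed 10 kHz sampling rate
t = np.arange(30) / fs
template = np.sin(2 * np.pi * 300 * t) * np.hanning(30)   # toy MUAP shape

signal = 0.02 * rng.standard_normal(2000)    # background noise
true_onsets = [300, 900, 1500]
for k in true_onsets:                        # insert three firings
    signal[k:k + 30] += template

# Matched filtering = correlation of the signal with the template.
score = np.correlate(signal, template, mode='valid')
threshold = 0.5 * np.max(score)
peaks = [i for i in range(1, len(score) - 1)
         if score[i] > threshold
         and score[i] >= score[i - 1] and score[i] >= score[i + 1]]
print(peaks)  # local maxima of the matched-filter output, near the onsets
```

A full decomposer adds one filter per motor unit, a classifier to assign each detected peak, and a resolution step for superimposed MUAPs, as the abstract describes.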
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensure miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Trans. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
Energy Technology Data Exchange (ETDEWEB)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-12-01
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Domain decomposition for poroelasticity and elasticity with DG jumps and mortars
Girault, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are well suited to parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.
Adomian decomposition method for nonlinear Sturm-Liouville problems
Directory of Open Access Journals (Sweden)
Sennur Somali
2007-09-01
In this paper the Adomian decomposition method is applied to the nonlinear Sturm-Liouville problem −y″ + y(t)^p = λy(t), y(t) > 0, t ∈ I = (0, 1), y(0) = y(1) = 0, where p > 1 is a constant and λ > 0 is an eigenvalue parameter. The eigenvalues and the behavior of eigenfunctions of the problem are also demonstrated.
Application of multi-thread computing and domain decomposition to the 3-D neutronics FEM code CRONOS
International Nuclear Information System (INIS)
Ragusa, J.C.
2003-01-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully exploiting the SMP cluster's potential with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.
Application of multi-thread computing and domain decomposition to the 3-D neutronics FEM code CRONOS
Energy Technology Data Exchange (ETDEWEB)
Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)
2003-07-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully exploiting the SMP cluster's potential with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.
Decomposition of spectra in EPR dosimetry using the matrix method
International Nuclear Information System (INIS)
Sholom, S.V.; Chumak, V.V.
2003-01-01
The matrix method of EPR spectra decomposition is developed and adapted for routine application in retrospective EPR dosimetry with teeth. According to this method, the initial EPR spectra are decomposed (using methods of matrix algebra) into several reference components (reference matrices) that are specific for each material. The proposed procedure has been tested on tooth enamel. The reference spectra were the spectrum of an empty sample tube and three standard signals of enamel (two at g=2.0045, both for the native signal, and one at g⊥=2.0018, g∥=1.9973 for the dosimetric signal). Values of dosimetric signals obtained using this method have been compared with data obtained by manual manipulation of spectra, and good agreement was observed. This indicates that the proposed method is well suited for application in routine EPR dosimetry.
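In its simplest form, the decomposition into reference components described above is a linear least-squares fit of the measured spectrum against a matrix whose columns are the reference spectra. A hedged sketch with synthetic line shapes (the reference shapes, weights and noise level below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 500                 # channels in a recorded spectrum

# Synthetic reference components: a flat empty-tube background plus two enamel
# line shapes (Gaussian and derivative-like), standing in for the paper's
# reference matrices.
field = np.linspace(-1.0, 1.0, n_points)
refs = np.stack([
    np.ones(n_points),                             # background
    np.exp(-0.5 * (field / 0.1) ** 2),             # "native" signal shape
    -field * np.exp(-0.5 * (field / 0.2) ** 2),    # "dosimetric" signal shape
], axis=1)

true_weights = np.array([0.3, 1.2, 0.8])
spectrum = refs @ true_weights + 0.01 * rng.standard_normal(n_points)

# Decompose the measured spectrum into the reference components (least squares).
weights, *_ = np.linalg.lstsq(refs, spectrum, rcond=None)
```

The recovered `weights` play the role of the component amplitudes; in dosimetry the weight of the dosimetric component is the quantity of interest.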
Central limit theorems for large graphs: Method of quantum decomposition
International Nuclear Information System (INIS)
Hashimoto, Yukihiro; Hora, Akihito; Obata, Nobuaki
2003-01-01
A new method is proposed for investigating spectral distribution of the combinatorial Laplacian (adjacency matrix) of a large regular graph on the basis of quantum decomposition and quantum central limit theorem. General results are proved for Cayley graphs of discrete groups and for distance-regular graphs. The Coxeter groups and the Johnson graphs are discussed in detail by way of illustration. In particular, the limit distributions obtained from the Johnson graphs are characterized by the Meixner polynomials which form a one-parameter deformation of the Laguerre polynomials
International Nuclear Information System (INIS)
Kuo, V.
2016-01-01
Full text: The European Qualifications Framework categorizes learning objectives into three qualifiers, "knowledge", "skills" and "competences" (KSCs), to help improve comparability between different fields and disciplines. However, the management of KSCs remains a great challenge given their semantic fuzziness: similar texts may describe different concepts and different texts may describe similar concepts among different domains. This complicates the indexing, searching and matching of semantically similar KSCs within an information system intended to facilitate transfer and mobility of KSCs. We present a working example using a semantic inference method known as Latent Semantic Analysis (LSA), employing a matrix operation called Singular Value Decomposition (SVD), which has been shown to infer semantic associations within unstructured textual data comparable to those of human interpretations. In our example, a few natural language text passages representing KSCs in the nuclear sector are used to demonstrate the capabilities of the system. It can be shown that LSA is able to infer latent semantic associations between texts, and to cluster and match separate text passages semantically based on these associations. We propose this methodology for modelling existing natural language KSCs in the nuclear domain so they can be semantically queried, retrieved and filtered upon request. (author)
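The LSA/SVD machinery mentioned above can be sketched in a few lines. The tiny corpus, the raw count weighting and the choice of k = 2 latent dimensions below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Tiny corpus of KSC-like statements (hypothetical wording, for illustration).
docs = [
    "operate reactor safety systems",
    "monitor reactor core safety",
    "analyse financial budget reports",
    "prepare budget and financial plans",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix.
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[idx[w], j] += 1.0

# Truncated SVD: keep k latent semantic dimensions (the "LSA space").
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T     # documents in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Reactor-safety texts cluster together, apart from the budget texts.
```

Queries can be folded into the same latent space and matched by cosine similarity, which is the retrieval mechanism the abstract proposes for KSCs.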
Decomposition method for analysis of closed queuing networks
Directory of Open Access Journals (Sweden)
Yu. G. Nesterov
2014-01-01
This article deals with a method for estimating the average residence time in nodes of closed queuing networks with priorities and a wide range of conservative service disciplines. The method is based on a decomposition of the entire closed queuing network into a set of simple basic queuing systems, such as M|GI|m|N, for each node. The unknown average residence times in the network nodes are interrelated through a system of nonlinear equations. The existence of a solution of this system is proved. An iterative procedure based on the Newton-Kantorovich method is proposed for finding the solution of such a system; it provides fast convergence to the solution. At present, the applicability of the proposed method is limited by the known analytical solutions for simple basic queuing systems of the M|GI|m|N type.
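The core numerical step, solving a system of nonlinear equations by a Newton-Kantorovich-type iteration, can be sketched on a toy system. The two equations below are invented stand-ins for the coupled residence-time equations, not the article's queuing model:

```python
import numpy as np

def F(t):
    # Residuals of a toy 2-node fixed-point system t = g(t) (illustrative only).
    return np.array([
        t[0] - 1.0 - 0.2 * t[1] ** 2,
        t[1] - 0.5 - 0.1 * t[0],
    ])

def newton(F, t0, tol=1e-10, max_iter=50):
    """Newton iteration with a finite-difference Jacobian."""
    t = t0.astype(float)
    for _ in range(max_iter):
        r = F(t)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7
        J = np.empty((t.size, t.size))
        for j in range(t.size):
            tp = t.copy()
            tp[j] += eps
            J[:, j] = (F(tp) - r) / eps     # column j of the Jacobian
        t -= np.linalg.solve(J, r)          # Newton update
    return t

sol = newton(F, np.zeros(2))
```

Near the solution the iteration converges quadratically, which is the "fast convergence" the abstract refers to.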
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification task) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong-ground-motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
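A Chebyshev Type II bandpass stage of the kind described can be sketched with standard signal-processing tools. The filter order, stopband attenuation, band edges and the two-mode test signal below are illustrative assumptions, not the paper's tuned design:

```python
import numpy as np
from scipy import signal

fs = 100.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 20, 1 / fs)

# Synthetic two-mode "structural response": a 1.5 Hz mode plus a 6 Hz mode.
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 6.0 * t)

# Chebyshev Type II bandpass isolating the first mode: 40 dB stopband
# attenuation, stopband edges at 1 and 2 Hz.
sos = signal.cheby2(6, 40, [1.0, 2.0], btype="bandpass", fs=fs, output="sos")
y = signal.sosfiltfilt(sos, x)  # zero-phase filtering keeps modal phase intact
```

In an rFDD-style procedure one such filter is designed per identified mode, so each mode's frequency and damping can be estimated from a nearly mono-component signal.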
Iterative Refinement Methods for Time-Domain Equalizer Design
Directory of Open Access Journals (Sweden)
Evans Brian L
2006-01-01
Commonly used time domain equalizer (TEQ) design methods have recently been unified as an optimization problem involving an objective function in the form of a Rayleigh quotient. The direct generalized eigenvalue solution relies on matrix decompositions. To reduce implementation complexity, we propose an iterative refinement approach in which the TEQ length starts at two taps and increases by one tap at each iteration. Each iteration involves matrix-vector multiplications and vector additions with matrices and two-element vectors. At each iteration, the objective function either improves or the approach terminates. The iterative refinement approach provides a range of communication performance versus implementation complexity tradeoffs for any TEQ method that fits the Rayleigh quotient framework. We apply the proposed approach to three such TEQ design methods: maximum shortening signal-to-noise ratio, minimum intersymbol interference, and minimum delay spread.
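The Rayleigh-quotient formulation can be made concrete: the direct (non-iterative) solution the abstract contrasts against is the generalized eigenvector of the pair (A, B) with the largest eigenvalue. The matrices below are random symmetric positive definite stand-ins, not actual channel statistics:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 8                           # TEQ length in taps (illustrative)

# Random SPD matrices standing in for the signal/interference statistics that
# a real TEQ design would compute from the channel.
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)

# Direct solution: the generalized eigenvector of A w = lambda B w with the
# largest eigenvalue maximizes the Rayleigh quotient w^T A w / (w^T B w).
vals, vecs = eigh(A, B)
w_best = vecs[:, -1]

def rayleigh(w):
    return (w @ A @ w) / (w @ B @ w)
```

The paper's iterative refinement replaces this full eigen-decomposition with cheap tap-by-tap updates that monotonically improve the same quotient.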
Construct solitary solutions of discrete hybrid equation by Adomian Decomposition Method
International Nuclear Information System (INIS)
Wang Zhen; Zhang Hongqing
2009-01-01
In this paper, we apply the Adomian Decomposition Method to solve differential-difference equations. A typical example is used to illustrate the validity and the great potential of the Adomian Decomposition Method for solving differential-difference equations. A kink-shaped solitary solution and a bell-shaped solitary solution are presented. Comparisons are made between the results of the proposed method and exact solutions. The results show that the Adomian Decomposition Method is an attractive method for solving differential-difference equations.
International Nuclear Information System (INIS)
Lee, Yoon Hee; Cho, Nam Zin
2016-01-01
The code gives inaccurate results for nuclides important in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, we compare two methods, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM), in terms of accuracy and computing time in depletion calculations. In the two-block decomposition method, according to the magnitude of the effective decay constant, the system of Bateman equations is decomposed into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
Energy Technology Data Exchange (ETDEWEB)
Lee, Yoon Hee; Cho, Nam Zin [KAERI, Daejeon (Korea, Republic of)
2016-05-15
The code gives inaccurate results for nuclides important in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, we compare two methods, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM), in terms of accuracy and computing time in depletion calculations. In the two-block decomposition method, according to the magnitude of the effective decay constant, the system of Bateman equations is decomposed into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
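Both TBD and CRAM approximate the matrix exponential solution of the Bateman equations, n(t) = exp(At) n(0). That view can be sketched on a tiny decay chain (the decay constants below are invented for illustration; real burnup matrices are far larger and stiffer, which is exactly why methods like TBD and CRAM are needed):

```python
import numpy as np
from scipy.linalg import expm

# Three-nuclide decay chain A -> B -> C with hypothetical decay constants (1/s).
lam_a, lam_b = 1.0, 0.1
A = np.array([
    [-lam_a,    0.0, 0.0],
    [ lam_a, -lam_b, 0.0],
    [   0.0,  lam_b, 0.0],
])   # columns sum to zero, so total mass is conserved

n0 = np.array([1.0, 0.0, 0.0])     # initial concentrations
t = 5.0
n_t = expm(A * t) @ n0             # matrix-exponential depletion step

# Analytic Bateman solution for the first two chain members, for comparison:
n_a = np.exp(-lam_a * t)
n_b = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
```

CRAM replaces `expm` with a low-order rational approximation evaluated via partial fractions, which stays accurate even for the huge spread of decay constants in a full burnup chain.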
Zampini, Stefano; Tu, Xuemin
2017-01-01
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
Zampini, Stefano
2017-08-03
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Carstensen, Jens Michael
We want to test the applicability of kernel-based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel-based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'hauksbok', which contains Icelandic sagas.
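A Gaussian-kernel eigen-decomposition of the kind tested (the kernel PCA member of the trio) can be sketched as follows; the toy two-cluster data and the kernel width are illustrative assumptions, not the multispectral saga image:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated point clusters; the goal is only to exercise the kernel
# eigen-decomposition machinery, not to model the imaging application.
X = np.vstack([
    rng.standard_normal((50, 2)) * 0.1,          # cluster near the origin
    rng.standard_normal((50, 2)) * 0.1 + 5.0,    # cluster near (5, 5)
])

# Gaussian (RBF) kernel matrix.
sigma = 2.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))

# Double-center the kernel matrix, then eigen-decompose (kernel PCA).
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
vals, vecs = np.linalg.eigh(J @ K @ J)
pc1 = vecs[:, -1]          # scores along the leading kernel component
```

MAF and MNF follow the same pattern but solve a generalized eigenproblem that also involves a noise or spatial-difference covariance in kernel form.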
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin; Galvis, Juan; Lazarov, Raytcho; Willems, Joerg
2012-01-01
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications.
Income Inequality Decomposition, Russia 1992-2002: Method and Application
Directory of Open Access Journals (Sweden)
Wim Jansen
2013-11-01
Decomposition methods for income inequality measures, such as the Gini index and the members of the Generalised Entropy family, are widely applied. Most methods decompose income inequality into a between (explained) and a within (unexplained) part, according to two or more population subgroups or income sources. In this article, we use a regression analysis for a lognormal distribution of personal income, modelling both the mean and the variance, decomposing the variance as a measure of income inequality, and apply the method to survey data from Russia spanning the first decade of market transition (1992-2002). For the first years of the transition, only a small part of the income inequality could be explained. Thereafter, between 1996 and 1999, a larger part (up to 40%) could be explained, and 'winner' and 'loser' categories of the transition could be identified. At the upper end of the income distribution, the self-employed gained from the transition. The unemployed were among the losers.
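The between/within variance decomposition via regression can be sketched on synthetic data; the group labels, effect sizes and noise level below are invented for illustration, not estimates from the Russian survey:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic log-income with a group effect (e.g. employment category) plus
# lognormal noise, standing in for the article's model.
n = 2000
group = rng.integers(0, 3, n)        # 0 = employed, 1 = self-employed, 2 = unemployed
group_effect = np.array([0.0, 0.6, -0.8])[group]
log_income = 9.0 + group_effect + rng.standard_normal(n) * 0.5

# Regress log-income on group dummies; the variance of the fitted values is the
# between (explained) part, the residual variance the within (unexplained) part.
D = np.eye(3)[group]                 # dummy matrix, one column per group
beta, *_ = np.linalg.lstsq(D, log_income, rcond=None)
fitted = D @ beta
explained = fitted.var() / log_income.var()
```

`explained` is the share of log-income variance attributable to group membership, the quantity reported as "up to 40%" in the abstract.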
AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION
Directory of Open Access Journals (Sweden)
J. Fan
2018-04-01
Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in obtaining a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, the interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50000 and even larger scales in flat and mountainous areas.
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in obtaining a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, the interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50000 and even larger scales in flat and mountainous areas.
PZT Films Fabricated by Metal Organic Decomposition Method
Sobolev, Vladimir; Ishchuk, Valeriy
2014-03-01
High quality lead zirconate titanate films have been fabricated on different substrates by the metal organic decomposition method and their ferroelectric properties have been investigated. Main attention was paid to studies of the influence of a buffer layer with nominal composition Pb1.3(Zr0.5Ti0.5)O3 on the properties of Pb(Zr0.5Ti0.5)O3 films fabricated on polycrystalline titanium and platinum substrates. It is found that in the films on the Pt substrate (with or without the buffer layer) the dependencies of the remanent polarization and the coercive field on the number of switching cycles do not manifest fatigue up to 10⁹ cycles. The remanent polarization dependencies for films on the Ti substrate with the buffer layer containing an excess of PbO demonstrate a fundamentally new feature: the remanent polarization increases after 10⁸ switching cycles. The increase of remanent polarization is about 50% when the number of cycles approaches 10¹⁰, while the increase of the coercive field is small. A monotonic increase of dielectric losses has been observed in all cases.
Synthesis of magnetite nanoparticles obtained by the thermal decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fonseca, Renilma de Sousa Pinheiro; Sinfronio, Francisco Savio Mendes; Menezes, Alan Silva de; Sharma, Surender Kumar; Silva, Fernando Carvalho, E-mail: renilma.ufma@gmail.com [Universidade Federal do Maranhao (UFMA), Sao Luis, MA (Brazil); Moscoso-Londono, Oscar; Muraca, Diego; Knobel, Marcelo [Universidade Estadual de Campinas (UNICAMP), SP (Brazil)
2016-07-01
Full text: Magnetite nanoparticles have found numerous applications in biomedicine, such as magnetic separation, drug delivery, magnetic resonance imaging (MRI) and hyperthermia agents [1]. These features are related to their superparamagnetic behavior, low toxicity and facile functionalization [2]. Thus, this work aims to obtain oleylamine-coated magnetite nanoparticles by means of the thermal decomposition method at different temperatures and reaction times. All samples were characterized by FTIR, XRD and SQUID magnetometry. The infrared spectra showed two vibrational modes at 2920 and 2850 cm⁻¹, assigned to the asymmetrical and symmetrical stretching of the C-H groups of the oleic acid and oleylamine, respectively. The XRD patterns of the samples confirmed the formation of the magnetite phase (ICSD 36314) at all temperatures. The average size of the crystallites was determined by the Debye-Scherrer equation, with values in the range of 1.1-1.5 nm. Field-cooled and zero-field-cooled analyses demonstrate that the blocking temperature (T_B) is below room temperature in all cases, indicating that all magnetite nanoparticles are superparamagnetic at room temperature and ferrimagnetic at low temperature. (author)
Synthesis of magnetite nanoparticles obtained by the thermal decomposition method
International Nuclear Information System (INIS)
Fonseca, Renilma de Sousa Pinheiro; Sinfronio, Francisco Savio Mendes; Menezes, Alan Silva de; Sharma, Surender Kumar; Silva, Fernando Carvalho; Moscoso-Londono, Oscar; Muraca, Diego; Knobel, Marcelo
2016-01-01
Full text: Magnetite nanoparticles have found numerous applications in biomedicine, such as magnetic separation, drug delivery, magnetic resonance imaging (MRI) and hyperthermia agents [1]. These features are related to their superparamagnetic behavior, low toxicity and facile functionalization [2]. Thus, this work aims to obtain oleylamine-coated magnetite nanoparticles by means of the thermal decomposition method at different temperatures and reaction times. All samples were characterized by FTIR, XRD and SQUID magnetometry. The infrared spectra showed two vibrational modes at 2920 and 2850 cm⁻¹, assigned to the asymmetrical and symmetrical stretching of the C-H groups of the oleic acid and oleylamine, respectively. The XRD patterns of the samples confirmed the formation of the magnetite phase (ICSD 36314) at all temperatures. The average size of the crystallites was determined by the Debye-Scherrer equation, with values in the range of 1.1-1.5 nm. Field-cooled and zero-field-cooled analyses demonstrate that the blocking temperature (T_B) is below room temperature in all cases, indicating that all magnetite nanoparticles are superparamagnetic at room temperature and ferrimagnetic at low temperature. (author)
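For reference, the crystallite sizes quoted above follow from the Scherrer relation, with K ≈ 0.9 the usual shape factor, λ the X-ray wavelength, β the peak full width at half maximum in radians, and θ the Bragg angle:

```latex
D = \frac{K \, \lambda}{\beta \cos\theta}
```

Broader diffraction peaks (larger β) thus correspond to smaller crystallites, consistent with the very small 1.1-1.5 nm values reported.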
International Nuclear Information System (INIS)
Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted
2017-01-01
This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear-sky index K_c data, namely Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition (WD). From these different methods, we built 11 decomposition components and one residual signal presenting different time scales. We applied classical forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model, NN). The choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting with the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecast obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear-sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
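The decompose-then-forecast structure can be sketched with a deliberately crude two-scale decomposition (a trailing moving average instead of EMD/EEMD/wavelets) and a least-squares AR(2) forecaster per component; everything below is an illustrative stand-in for the paper's hybrid model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic clear-sky-index-like series: a smooth quasi-periodic component plus
# fast fluctuations (all numbers invented for illustration).
t = np.arange(400)
series = 0.7 + 0.2 * np.sin(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(t.size)

train = series[:-1]            # hold out the last sample as the target

# Crude two-scale decomposition: trailing moving-average "trend" + "detail".
w = 11
trend = np.convolve(train, np.ones(w) / w)[: len(train)]
detail = train - trend

def ar_forecast(x, p=2):
    """Fit AR(p) with intercept by least squares; return a one-step forecast."""
    Y = x[p:]
    Z = np.column_stack(
        [np.ones(len(Y))] + [x[p - 1 - k : len(x) - 1 - k] for k in range(p)]
    )
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return np.concatenate(([1.0], x[::-1][:p])) @ coef

# Hybrid forecast: predict each component separately, then recombine.
pred = ar_forecast(trend) + ar_forecast(detail)
```

The paper's gains come from finer decompositions (11 components) and from switching between AR and NN predictors per component, but the recombination step is the same additive one shown here.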
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Energy Technology Data Exchange (ETDEWEB)
Slattery, S. R.; Wilson, P. P. H. [Engineering Physics Department, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Evans, T. M. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830 (United States)
2013-07-01
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
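The Neumann-Ulam idea the analysis above builds on can be sketched for a tiny system x = Hx + b with spectral radius ρ(H) < 1: random walks sample terms of the Neumann series, and the weight decay per step, hence the average useful walk length, is governed by the spectrum of H, which is the link the paper formalizes. The matrix, transition probabilities, and sample counts below are illustrative assumptions:

```python
import numpy as np

# Forward Neumann-Ulam Monte Carlo solver for x = H x + b (rho(H) < 1).
rng = np.random.default_rng(1)
H = np.array([[0.1, 0.2, 0.0],
              [0.2, 0.1, 0.2],
              [0.0, 0.2, 0.1]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)
P = np.full((n, n), 1.0 / n)   # uniform transition probabilities for the walk
W = H / P                      # importance weights w_ij = h_ij / p_ij

def walk_estimate(i, n_steps=25):
    """One random-walk estimate of x_i from the truncated Neumann series."""
    est, w, state = b[i], 1.0, i
    for _ in range(n_steps):
        nxt = rng.integers(n)
        w *= W[state, nxt]     # weight decays at a rate set by the spectrum of H
        est += w * b[nxt]
        state = nxt
    return est

x_mc = np.array([np.mean([walk_estimate(i) for _ in range(5000)]) for i in range(n)])
print(np.round(x_mc, 2))       # close to np.linalg.solve(np.eye(n) - H, b)
```

In the domain decomposed setting, each processor owns a block of states and a walk that steps outside the block becomes interprocessor communication, which is why the leakage fraction estimated in the paper measures parallel cost.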
International Nuclear Information System (INIS)
Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L
2015-01-01
Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework for image-domain material decomposition that incorporates edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising with the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By combining NLM filtering with the HYPR-LR framework, HYPR-NLM reduces the boosted material-decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine-concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine-concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show an edge effect, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR
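The direct matrix inversion baseline mentioned above is easy to sketch: image-domain two-material decomposition inverts a small mixing matrix per pixel, and the ill-conditioning of that matrix is what amplifies noise and motivates the HYPR-LR/HYPR-NLM filtering. The attenuation coefficients below are illustrative assumptions, not calibrated values:

```python
import numpy as np

# Image-domain two-material decomposition by direct matrix inversion.
# Columns of A: assumed effective attenuations of water and iodine at the
# high and low tube voltages (illustrative numbers only).
A = np.array([[0.20, 2.0],     # high-energy pixel: mu_H = 0.20*water + 2.0*iodine
              [0.25, 5.0]])    # low-energy pixel:  mu_L = 0.25*water + 5.0*iodine
truth = np.array([1.0, 0.01])  # per-pixel material amounts: mostly water, a little iodine
measured = A @ truth           # simulated noise-free high/low-energy values

decomposed = np.linalg.inv(A) @ measured
print(decomposed)              # recovers [1.0, 0.01] in the noise-free case
print(np.linalg.cond(A))       # ill-conditioning amplifies measurement noise
```

With noisy measurements the inverted values inherit noise scaled by the condition number of A, which is why the abstract reports direct inversion as the noisiest of the four methods.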
The time domain triple probe method
International Nuclear Information System (INIS)
Meier, M.A.; Hallock, G.A.; Tsui, H.Y.W.; Bengtson, R.D.
1994-01-01
A new Langmuir probe technique based on the triple probe method is being developed to provide simultaneous measurement of plasma temperature, potential, and density with the temporal and spatial resolution required to accurately characterize plasma turbulence. When the conventional triple probe method is used in an inhomogeneous plasma, local differences in the plasma measured at each probe introduce significant error into the estimation of turbulence parameters. The Time Domain Triple Probe (TDTP) method uses high-speed switching of the Langmuir probe potential, rather than spatially separated probes, to gather the triple-probe information, thus avoiding these errors. Analysis indicates that plasma response times and recent electronics technology meet the requirements to implement the TDTP method. Data reduction for TDTP data will include linear and higher-order correlation analysis to estimate fluctuation-induced particle and thermal transport, as well as energy relationships between temperature, density, and potential fluctuations.
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. The singular values are then embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Since determining the optimal values of the MSFs is a difficult problem, multiobjective ant colony optimization is used to determine them. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of a high probability of false positive detection of the watermarks.
Directory of Open Access Journals (Sweden)
Jinlu Sheng
2016-07-01
To effectively extract the typical features of a bearing, a new method is proposed that combines local mean decomposition, Shannon entropy, and an improved kernel principal component analysis model. First, the features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the characteristics. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Test cases and actual cases were analyzed.
New parallel SOR method by domain partitioning
Energy Technology Data Exchange (ETDEWEB)
Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)
1996-12-31
In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
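For reference, the sequential SOR iteration that PSOR parallelizes can be sketched on the same model problem, the 5-point Poisson approximation on a square, here with natural row-wise ordering and the model problem's optimal relaxation factor:

```python
import numpy as np

# Sequential SOR for the 5-point Laplacian on the unit square: -Lap(u) = 1,
# u = 0 on the boundary, natural row-wise ordering (illustrative sketch).
n = 15                                     # interior points per direction
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))
omega = 2.0 / (1.0 + np.sin(np.pi * h))    # optimal factor for this model problem

for sweep in range(200):
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] + h*h)
            u[i, j] += omega * (gs - u[i, j])   # over-relaxed Gauss-Seidel update

center = u[n//2 + 1, n//2 + 1]             # grid point at (0.5, 0.5)
print(round(center, 4))                    # about 0.0737, the known center value
```

PSOR distributes these sweeps over strip subdomains; the paper's point is that for this strip partition the resulting ordering stays consistently ordered, so the convergence rate matches this sequential iteration.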
An efficient domain decomposition strategy for wave loads on surface piercing circular cylinders
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
A fully nonlinear domain decomposed solver is proposed for efficient computations of wave loads on surface piercing structures in the time domain. A fully nonlinear potential flow solver was combined with a fully nonlinear Navier–Stokes/VOF solver via generalized coupling zones of arbitrary shape.... Sensitivity tests of the extent of the inner Navier–Stokes/VOF domain were carried out. Numerical computations of wave loads on surface piercing circular cylinders at intermediate water depths are presented. Four different test cases of increasing complexity were considered: 1) weakly nonlinear regular waves...
Singular value decomposition methods for wave propagation analysis
Czech Academy of Sciences Publication Activity Database
Santolík, Ondřej; Parrot, M.; Lefeuvre, F.
2003-01-01
Roč. 38, č. 1 (2003), s. 10-1-10-13 ISSN 0048-6604 R&D Projects: GA ČR GA205/01/1064 Grant - others:Barrande(CZ) 98039/98055 Institutional research plan: CEZ:AV0Z3042911; CEZ:MSM 113200004 Keywords : wave propagation * singular value decomposition Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.832, year: 2003
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
Spectral element method for elastic and acoustic waves in frequency domain
Energy Technology Data Exchange (ETDEWEB)
Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min; Zhuang, Mingwei [Institute of Electromagnetics and Acoustics, and Department of Electronic Science, Xiamen, 361005 (China); Liu, Na, E-mail: liuna@xmu.edu.cn [Institute of Electromagnetics and Acoustics, and Department of Electronic Science, Xiamen, 361005 (China); Liu, Qing Huo, E-mail: qhliu@duke.edu [Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708 (United States)
2016-12-15
Numerical techniques in time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, the medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, the domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in frequency domain.
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.
2014-01-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is quite needed in medical practice. To achieve this, an extended Pennes bioheat model must
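The core of DMD is small enough to sketch: an SVD-based reduced operator whose eigenvalues carry the dynamics used for fast forecasting. The snapshot data below are synthetic (two spatial modes with pure-frequency dynamics), standing in for the thermal fields of the application:

```python
import numpy as np

# Minimal "exact DMD" sketch: build the reduced operator from snapshot pairs
# and recover the continuous-time frequencies from its eigenvalues.
x = np.linspace(-10, 10, 128)
t = np.linspace(0, 4*np.pi, 100)
dt = t[1] - t[0]
# Synthetic data: two spatial modes, each advancing with a pure frequency
data = (np.outer(1/np.cosh(x + 3), np.exp(2.3j*t))
        + np.outer(2*np.tanh(x)/np.cosh(x), np.exp(2.8j*t)))

X, Y = data[:, :-1], data[:, 1:]               # snapshot pairs: Y ~ A X
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                           # truncation rank (two modes)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = Ur.conj().T @ Y @ Vr / sr              # reduced linear operator
eigvals = np.linalg.eigvals(Atilde)
freqs = np.angle(eigvals) / dt                  # recovered frequencies

print(np.sort(freqs))                           # close to [2.3, 2.8]
```

Forecasting then amounts to propagating the last snapshot with powers of the reduced operator, which is far cheaper than re-running the full bioheat simulation.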
Coupled singular and nonsingular thermoelastic systems and double Laplace decomposition methods
Hassan Gadain
2016-01-01
In this paper, the double Laplace decomposition methods are applied to solve the nonsingular and singular one-dimensional coupled thermoelasticity systems. The technique is described and illustrated with some examples.
A posteriori error analysis of multiscale operator decomposition methods for multiphysics models
International Nuclear Information System (INIS)
Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T
2008-01-01
Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples
Energy Technology Data Exchange (ETDEWEB)
Aarao, J; Bradshaw-Hajek, B H; Miklavcic, S J; Ward, D A, E-mail: Stan.Miklavcic@unisa.edu.a [School of Mathematics and Statistics, University of South Australia, Mawson Lakes, SA 5095 (Australia)
2010-05-07
Standard analytical solutions to elliptic boundary value problems on asymmetric domains are rarely, if ever, obtainable. In this paper, we propose a solution technique wherein we embed the original domain into one with simple boundaries where the classical eigenfunction solution approach can be used. The solution in the larger domain, when restricted to the original domain, is then the solution of the original boundary value problem. We call this the extended-domain-eigenfunction method. To illustrate the method's strength and scope, we apply it to Laplace's equation on an annular-like domain.
Modal Identification of Output-Only Systems using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
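The frequency domain decomposition idea can be sketched in a few lines: form the output cross-spectral density matrix at each frequency and take its SVD; peaks of the first singular value indicate the modes, all without knowing the input. The two-channel synthetic responses below are illustrative:

```python
import numpy as np

# FDD sketch: SVD of the averaged output cross-spectral density matrix.
rng = np.random.default_rng(0)
fs, nseg, nfft = 256.0, 32, 512
t = np.arange(nseg * nfft) / fs
# Two channels: mode 1 at 10 Hz (shape [1, 1]), mode 2 at 25 Hz (shape [1, -1])
q1 = np.sin(2*np.pi*10*t + rng.uniform(0, 2*np.pi))
q2 = np.sin(2*np.pi*25*t + rng.uniform(0, 2*np.pi))
y = np.vstack([q1 + q2, q1 - q2]) + 0.1*rng.standard_normal((2, t.size))

# Averaged cross-spectral density matrix G[f] (2x2 at each frequency line)
segs = y.reshape(2, nseg, nfft)
Yf = np.fft.rfft(segs * np.hanning(nfft), axis=-1)
G = np.einsum("isf,jsf->fij", Yf, Yf.conj()) / nseg

# First singular value spectrum; its peaks sit at the modal frequencies
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(G.shape[0])])
freqs = np.fft.rfftfreq(nfft, 1/fs)
print(freqs[np.argmax(s1)])   # one of the modal frequencies, 10.0 or 25.0 Hz
```

At each peak, the first left singular vector estimates the corresponding mode shape, which is what makes the technique an identification method rather than just a peak picker.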
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Directory of Open Access Journals (Sweden)
Duo Hao
2017-11-01
Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting this sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform a material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximative, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for the different energy spectra and thus acquire geometrically inconsistent rawdata sets, which do not meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first computes the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
Directory of Open Access Journals (Sweden)
Mishra Vinod
2016-01-01
The numerical Laplace transform method is combined with the Adomian decomposition method and applied to approximate the solution of nonlinear (quadratic) Riccati differential equations. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomials via the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are expressed as an infinite series. The simplicity and efficacy of the method are demonstrated with some examples, in which comparisons are made among the exact solutions, ADM (Adomian decomposition method), HPM (homotopy perturbation method), the Taylor series method and the proposed scheme.
A multi-domain spectral method for time-fractional differential equations
Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.
2015-07-01
This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
Václav URUBA
2010-01-01
Separation of the turbulent boundary layer (BL) on a flat plate under an adverse pressure gradient was studied experimentally using the Time-Resolved PIV technique. The results of a spatio-temporal analysis of the flow-field in the separation zone are presented. For this purpose, the POD (Proper Orthogonal Decomposition) and its extension BOD (Bi-Orthogonal Decomposition) techniques are applied, as well as a dynamical approach based on the POPs (Principal Oscillation Patterns) method. The study contributes...
Accuracy of the Adomian decomposition method applied to the Lorenz system
International Nuclear Information System (INIS)
Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.
2006-01-01
In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one
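For polynomial nonlinearities like those of the Lorenz system, the ADM recursion amounts to computing the power-series coefficients of the solution term by term, with the quadratic terms entering as Cauchy products; the sketch below uses this series form (standard Lorenz parameters; the initial condition is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

# ADM-style series solution of the Lorenz system: recursive computation of
# Taylor coefficients, with Cauchy products for the quadratic terms.
sigma, rho, beta = 10.0, 28.0, 8.0/3.0
N = 30                                       # number of series terms
x = np.zeros(N + 1); y = np.zeros(N + 1); z = np.zeros(N + 1)
x[0], y[0], z[0] = 1.0, 1.0, 1.0             # illustrative initial condition

for n in range(N):
    conv_xz = sum(x[k]*z[n-k] for k in range(n + 1))   # coefficients of x*z
    conv_xy = sum(x[k]*y[n-k] for k in range(n + 1))   # coefficients of x*y
    x[n+1] = sigma*(y[n] - x[n]) / (n + 1)
    y[n+1] = (rho*x[n] - y[n] - conv_xz) / (n + 1)
    z[n+1] = (conv_xy - beta*z[n]) / (n + 1)

def series_eval(c, t):
    return sum(c[n]*t**n for n in range(len(c)))

t = 0.05    # small step: the truncated series converges rapidly here
print([round(series_eval(c, t), 4) for c in (x, y, z)])
```

As in the abstract, such a truncated series is accurate only over a short interval; in practice it is restarted step by step (a multistage ADM) and checked against RK4.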
Model-free method for isothermal and non-isothermal decomposition kinetics analysis of PET sample
International Nuclear Information System (INIS)
Saha, B.; Maiti, A.K.; Ghoshal, A.K.
2006-01-01
Pyrolysis, one possible alternative for recovering valuable products from waste plastics, has recently been the subject of renewed interest. In the present study, an isoconversional method, the Vyazovkin model-free approach, is applied to study the non-isothermal decomposition kinetics of waste PET samples using various temperature integral approximations, such as the Coats and Redfern, Gorbachev, and Agrawal and Sivasubramanian approximations, and direct integration (recursive adaptive Simpson quadrature scheme) to analyze the decomposition kinetics. The results show that the activation energy (E_α) is a weak but increasing function of conversion (α) for non-isothermal decomposition, and a strong, decreasing function of conversion for isothermal decomposition. This indicates the possible existence of nucleation, nuclei-growth and gas-diffusion mechanisms during non-isothermal pyrolysis, and nucleation and gas-diffusion mechanisms during isothermal pyrolysis. The optimal E_α dependencies on α obtained for the non-isothermal data showed a similar nature for all types of temperature integral approximations.
A parabolic velocity-decomposition method for wind turbines
Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.
2017-02-01
An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.
Wavelet Decomposition Method for $L_2$/TV-Image Deblurring
Fornasier, M.
2012-07-17
In this paper, we show additional properties of the limit of a sequence produced by the subspace correction algorithm proposed by Fornasier and Schönlieb [SIAM J. Numer. Anal., 47 (2009), pp. 3397-3428] for $L_2$/TV-minimization problems. An important but missing property of such a limiting sequence in that paper is the convergence to a minimizer of the original minimization problem, which was obtained in [M. Fornasier, A. Langer, and C.-B. Schönlieb, Numer. Math., 116 (2010), pp. 645-685] with an additional condition of overlapping subdomains. We can now determine when the limit is indeed a minimizer of the original problem. Inspired by the work of Vonesch and Unser [IEEE Trans. Image Process., 18 (2009), pp. 509-523], we adapt and specify this algorithm to the case of an orthogonal wavelet space decomposition for deblurring problems and provide an equivalence condition for the convergence of such a limiting sequence to a minimizer. We also provide a counterexample of a limiting sequence produced by the algorithm that does not converge to a minimizer, which shows the necessity of our analysis of the minimizing algorithm. © 2012 Society for Industrial and Applied Mathematics.
International Nuclear Information System (INIS)
Abdel-Halim Hassan, I.H.
2008-01-01
In this paper, we compare the differential transformation method (DTM) and the Adomian decomposition method (ADM) for solving partial differential equations (PDEs). The definition and operations of the differential transform method were introduced by Zhou [Zhou JK. Differential transformation and its application for electrical circuits. Wuhan, China: Huazhong University Press; 1986 [in Chinese]].
Elastic frequency-domain finite-difference contrast source inversion method
International Nuclear Information System (INIS)
He, Qinglong; Chen, Yong; Han, Bo; Li, Yang
2016-01-01
In this work, we extend the finite-difference contrast source inversion (FD-CSI) method to the frequency-domain elastic wave equations, where the parameters describing the subsurface structure are simultaneously reconstructed. The FD-CSI method is an iterative nonlinear inversion method, which exhibits several strengths. First, the finite-difference operator only relies on the background media and the given angular frequency, both of which are unchanged during inversion. Therefore, the matrix decomposition is performed only once at the beginning of the iteration if a direct solver is employed. This makes the inversion process relatively efficient in terms of the computational cost. In addition, the FD-CSI method automatically normalizes different parameters, which could avoid the numerical problems arising from the difference of the parameter magnitude. We exploit a parallel implementation of the FD-CSI method based on the domain decomposition method, ensuring a satisfactory scalability for large-scale problems. A simple numerical example with a homogeneous background medium is used to investigate the convergence of the elastic FD-CSI method. Moreover, the Marmousi II model proposed as a benchmark for testing seismic imaging methods is presented to demonstrate the performance of the elastic FD-CSI method in an inhomogeneous background medium. (paper)
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed that automatically determines the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.
Finding all real roots of a polynomial by matrix algebra and the Adomian decomposition method
Directory of Open Access Journals (Sweden)
Hooman Fatoorehchi
2014-10-01
In this paper, we put forth a combined method for the calculation of all real zeroes of a polynomial equation through the Adomian decomposition method equipped with a number of developed theorems from matrix algebra. These auxiliary theorems are associated with the eigenvalues of matrices and enable convergence of the Adomian decomposition method toward different real roots of the target polynomial equation. To further improve the computational speed of our technique, a nonlinear convergence accelerator known as the Shanks transform has optionally been employed. For the sake of illustration, a number of numerical examples are given.
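The matrix-algebra side of this approach can be illustrated with the standard companion-matrix construction, whose eigenvalues are exactly the roots of a monic polynomial (the ADM-based refinement and the Shanks acceleration described in the paper are omitted from this sketch):

```python
import numpy as np

# Roots of a monic polynomial as eigenvalues of its companion matrix.
# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1.0, -6.0, 11.0, -6.0]
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)             # ones on the sub-diagonal
C[:, -1] = -np.array(coeffs[:0:-1])    # last column: [-a0, -a1, -a2]

roots = np.sort(np.linalg.eigvals(C).real)
print(roots)                           # close to [1, 2, 3]
```

This is also what `np.roots` does internally; the paper's contribution is steering the ADM iteration toward each of these real roots in turn.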
The ethnographic method and its relationship with the domain analysis
Directory of Open Access Journals (Sweden)
Manuel Alejandro Romero Quesada
2016-03-01
This paper analyzes the theoretical and conceptual relationship between the ethnographic method and domain analysis. A documentary analysis was performed, exploring the categories of domain analysis and the ethnographic method. As a result, the points of contact between domain analysis and the ethnographic method are examined in epistemological, methodological and procedural terms. It is concluded that the ethnographic method is an important research tool for exploring the turbulent socio-cultural scenarios that occur within the discursive communities that constitute domains of knowledge.
Solving Fokker-Planck Equations on Cantor Sets Using Local Fractional Decomposition Method
Directory of Open Access Journals (Sweden)
Shao-Hong Yan
2014-01-01
The local fractional decomposition method is applied to approximate solutions of Fokker-Planck equations on Cantor sets with the local fractional derivative. The results obtained show that the present method is effective and simple for solving differential equations on Cantor sets.
Yousef, Hamood Mohammed; Ismail, Ahmad Izani
2017-11-01
In this paper, the Laplace Adomian decomposition method (LADM) was applied to solve delay differential equations with boundary value problems. The solution takes the form of a convergent series that is easy to compute. The approach is tested on two test problems. The findings demonstrate the reliability and efficiency of the proposed method.
Displacement decomposition and parallelisation of the PCG method for elasticity problems
Czech Academy of Sciences Publication Activity Database
Blaheta, Radim; Jakl, Ondřej; Starý, Jiří
1., 2/3/4 (2005), s. 183-191 ISSN 1742-7185 R&D Projects: GA AV ČR(CZ) IBS3086102 Institutional research plan: CEZ:AV0Z30860518 Keywords : finite element method * preconditioned conjugate gradient method * displacement decomposition Subject RIV: BA - General Mathematics
An Alternative Method to the Classical Partial Fraction Decomposition
Cherif, Chokri
2007-01-01
PreCalculus students can use the completing-the-square method to solve quadratic equations without memorizing the quadratic formula, since the method naturally leads them to that formula. Calculus students, when studying integration, use various standard methods to compute integrals depending on the type of function to be integrated.…
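The connection the abstract alludes to can be made explicit: completing the square on the general quadratic produces the quadratic formula.

```latex
\begin{align*}
ax^2 + bx + c &= 0, \qquad a \neq 0\\
x^2 + \frac{b}{a}x &= -\frac{c}{a}\\
\left(x + \frac{b}{2a}\right)^2 &= \frac{b^2}{4a^2} - \frac{c}{a} = \frac{b^2 - 4ac}{4a^2}\\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```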
Domain Decomposition for Computing Extremely Low Frequency Induced Current in the Human Body
Perrussel , Ronan; Voyer , Damien; Nicolas , Laurent; Scorretti , Riccardo; Burais , Noël
2011-01-01
Computation of electromagnetic fields in high-resolution computational phantoms requires solving large linear systems. We present an application of Schwarz preconditioners with Krylov subspace methods for computing extremely low frequency induced fields in a phantom derived from the Visible Human.
International Nuclear Information System (INIS)
Song Lina; Wang Weiguo
2010-01-01
In this Letter, an enhanced Adomian decomposition method is proposed that introduces the h-curve of the homotopy analysis method into the standard Adomian decomposition method. Several examples show that the method successfully derives approximate rational Jacobi elliptic function solutions of fractional differential equations.
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
The method of images and Green's function for spherical domains
International Nuclear Information System (INIS)
Gutkin, Eugene; Newton, Paul K
2004-01-01
Motivated by problems in electrostatics and vortex dynamics, we develop two general methods for constructing Green's function for simply connected domains on the surface of the unit sphere. We prove a Riemann mapping theorem showing that such domains can be conformally mapped to the upper hemisphere. We then categorize all domains on the sphere for which Green's function can be constructed by an extension of the classical method of images. We illustrate our methods by several examples, such as the upper hemisphere, geodesic triangles, and latitudinal rectangles. We describe the point vortex motion in these domains, which is governed by a Hamiltonian determined by the Dirichlet Green's function.
A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods
Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.
2001-01-01
In the present work, we propose a frequency-domain interferometric imaging (FII) technique for better characterizing the vertical distribution of atmospheric scatterers detected by MST radars. It extends the dual frequency-domain interferometry (FDI) technique to multiple frequencies, with the objective of reducing the ambiguity inherent in FDI, which results from the use of only two adjacent frequencies. Different methods commonly used in antenna array processing are first described in the context of the FII technique: Fourier-based imaging, Capon's method, and the singular value decomposition method used with the MUSIC algorithm. Preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems very promising.
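Of the array-processing methods mentioned, Capon's is the most compact to sketch. The toy below (plain numpy, with an arbitrary phase-slope steering model standing in for the radar's frequency-domain steering vectors) scans candidate steering vectors against the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: M channels, one scatterer with phase slope 1.3 rad/channel
# (an arbitrary stand-in for the radar's frequency-domain steering model)
M, snapshots = 5, 200
true_slope = 1.3
a_true = np.exp(1j * true_slope * np.arange(M))
sig = rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(a_true, sig) + noise

R = X @ X.conj().T / snapshots              # sample covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(M))  # diagonal loading for stability

# Capon (minimum-variance) spectrum: P(s) = 1 / (a(s)^H R^{-1} a(s))
scan = np.linspace(0.0, np.pi, 400)
steer = np.exp(1j * np.outer(scan, np.arange(M)))
P = 1.0 / np.real(np.einsum('ij,jk,ik->i', steer.conj(), Rinv, steer))

slope_hat = scan[np.argmax(P)]
print(slope_hat)                            # peaks near 1.3
```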
Kernel based eigenvalue-decomposition methods for analysing ham
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only … have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most … useful factor of PCA and kernel-based PCA, respectively, is shown in Figure 2. The factor of the kernel-based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct than the traditional factor. After the orthogonal transformation, a simple thresholding …
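A kernel PCA of the kind described can be sketched in a few lines of numpy (RBF kernel; the concentric-ring data below is synthetic, not the ham images):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel -- a plain-numpy sketch."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                           # double-center the kernel matrix
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:n_components]
    w, V = w[order], V[:, order]
    return V * np.sqrt(np.maximum(w, 0.0))   # projections of the training set

# Two concentric rings: not linearly separable in the input space
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

The nonlinear kernel gives many informative factors where linear PCA of such data has essentially none, which mirrors the abstract's observation.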
IMF-Slices for GPR Data Processing Using Variational Mode Decomposition Method
Directory of Open Access Journals (Sweden)
Xuebing Zhang
2018-03-01
Using traditional time-frequency analysis methods, it is possible to delineate the time-frequency structure of ground-penetrating radar (GPR) data, and a series of applications based on time-frequency analysis have been proposed for GPR data processing and imaging. From a signal processing perspective, however, GPR data are typically non-stationary, which limits the applicability of these methods. Empirical mode decomposition (EMD) provides an alternative solution with a fresh perspective: GPR data are decomposed into a set of sub-components, the intrinsic mode functions (IMFs). However, the mode-mixing effect may also bring some drawbacks. To exploit the benefits of IMFs while avoiding the drawbacks of EMD, we introduce a different decomposition scheme, variational mode decomposition (VMD), for GPR data processing and imaging. Based on the decomposition results of the VMD, we propose a new method which we refer to as "the IMF-slice": the IMFs are generated by the VMD trace by trace, and each IMF is then sorted and recorded into different profiles (the IMF-slices) according to its center frequency. Using IMF-slices, the GPR data can be divided into several IMF-slices, each of which delineates a main vibration mode, and some subsurface layers and geophysical events can be identified more clearly. The effectiveness of the proposed method is tested using synthetic benchmark signals, laboratory data and a field dataset.
Relaxation and decomposition methods for mixed integer nonlinear programming
Nowak, Ivo; Bank, RE
2005-01-01
This book presents a comprehensive description of efficient methods for solving nonconvex mixed integer nonlinear programs, including several numerical and theoretical results presented here for the first time. It contains many illustrations and an up-to-date bibliography. Because of its emphasis on practical methods, as well as its introduction to the basic theory, the book is accessible to a wide audience and can be used both as a research reference and as a graduate text.
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, applied for the first time to find the exact solution of a one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with that obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
Directory of Open Access Journals (Sweden)
Václav URUBA
2010-12-01
Separation of the turbulent boundary layer (BL) on a flat plate under an adverse pressure gradient was studied experimentally using the time-resolved PIV technique. The results of a spatio-temporal analysis of the flow field in the separation zone are presented. For this purpose, proper orthogonal decomposition (POD) and its extension, bi-orthogonal decomposition (BOD), are applied, as well as a dynamical approach based on the principal oscillation patterns (POPs) method. The study contributes to understanding the physical mechanisms of the boundary layer separation process, and the acquired information could be used to improve boundary layer separation control strategies.
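Snapshot POD, the core of the techniques named above, reduces to an SVD of the mean-subtracted data matrix. A sketch on synthetic snapshots (two planted modes plus noise, not the PIV data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "velocity field" snapshots: two coherent modes plus weak noise
nx, nt = 64, 120
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 10.0, nt)
U = (np.outer(np.sin(x), np.cos(2 * t))            # dominant mode
     + 0.3 * np.outer(np.sin(3 * x), np.sin(5 * t))  # weaker second mode
     + 0.01 * rng.standard_normal((nx, nt)))

# Snapshot POD: subtract the temporal mean, then SVD
Um = U - U.mean(axis=1, keepdims=True)
phi, s, vt = np.linalg.svd(Um, full_matrices=False)  # phi: spatial modes

energy = s**2 / np.sum(s**2)
print(energy[:3])   # the two planted modes carry almost all the energy
```

The columns of `phi` are the POD modes; the rows of `vt` (scaled by `s`) are their temporal coefficients, which is the decomposition BOD and POP analyses build on.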
Autoclave decomposition method for metals in soils and sediments.
Navarrete-López, M; Jonathan, M P; Rodríguez-Espinosa, P F; Salgado-Galeana, J A
2012-04-01
Partial leaching of metals (Fe, Mn, Cd, Co, Cu, Ni, Pb, and Zn) was performed using an autoclave technique modified from the EPA 3051A digestion technique. The autoclave method was developed as an alternative to the regular digestion procedure; it meets the safety norms for partial extraction of metals in polytetrafluoroethylene (PFA) vessels at a low constant temperature (119.5 ± 1.5 °C), and the recovery of elements is also precise. The autoclave method was validated using two Standard Reference Materials (SRMs: Loam Soil B and Loam Soil D), and the recoveries were comparable to those of traditionally established digestion methods. The method was applied to samples from different natural environments (beach, mangrove, river, and city soil) to reproduce the recovery of elements in subsequent analysis.
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of femur bone radiographs. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40), recorded under standard conditions, are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as the multiquadric radial basis function and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent the architectural variations of femur bone radiographic images. As bone strength depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
Spectral methods. Fundamentals in single domains
International Nuclear Information System (INIS)
Canuto, C.
2006-01-01
Since the publication of ''Spectral Methods in Fluid Dynamics'' in 1988, spectral methods have become firmly established as a mainstream tool for scientific and engineering computation. The authors of that book have incorporated into this new edition the many improvements in the algorithms and theory of spectral methods that have been made since then. This latest book retains the tight integration between the theoretical and practical aspects of spectral methods, and the chapters are enhanced with material on the Galerkin-with-numerical-integration version of spectral methods. The discussion of direct and iterative solution methods is also greatly expanded. (orig.)
Power system frequency estimation based on an orthogonal decomposition method
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several techniques have been proposed to estimate frequency variations in power systems. In order to properly identify power quality issues in asynchronously sampled signals contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator is needed that can precisely estimate both the frequency and the rate of frequency change. However, accurately estimating the fundamental frequency becomes very difficult without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. The method employs a reconstruction technique in combination with orthogonal filters, which maintains the required frequency characteristics of the orthogonal filters and improves the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results show that the method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
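The single-bin phase idea behind sliding-DFT frequency estimation can be illustrated simply: between two DFT windows shifted by one sample, the dominant bin's phase advances by 2πf/fs. A sketch under assumed sampling parameters (not the authors' two-stage scheme):

```python
import numpy as np

fs = 1000.0                        # sampling rate in Hz (assumption)
f_true = 50.2                      # off-nominal fundamental, Hz (assumption)
rng = np.random.default_rng(0)
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * f_true * t) + 0.01 * rng.standard_normal(t.size)

# Two Hann-windowed DFTs shifted by one sample: the dominant bin's phase
# advances by 2*pi*f/fs between them (the sliding-DFT relation)
N = 1024
win = np.hanning(N)
X1 = np.fft.rfft(x[:N] * win)
X2 = np.fft.rfft(x[1:N + 1] * win)
k = np.argmax(np.abs(X1))          # dominant bin
dphi = np.angle(X2[k] / X1[k])     # phase advance over one sample
f_est = dphi * fs / (2.0 * np.pi)
print(f_est)                       # close to 50.2
```

Note the estimate is not locked to the DFT bin grid, which is why phase-based schemes handle asynchronous sampling better than peak-picking.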
Energy Technology Data Exchange (ETDEWEB)
Mehboob, Shoaib, E-mail: smehboob@pieas.edu.pk [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Mehmood, Mazhar [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmed, Mushtaq [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Ahmad, Jamil; Tanvir, Muhammad Tauseef [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmad, Izhar [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Hassan, Syed Mujtaba ul [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan)
2017-04-15
The objective of this work is to study changes in optical and dielectric properties during the transformation of aluminum ammonium carbonate hydroxide (AACH) to α-alumina, using terahertz time-domain spectroscopy (THz-TDS). Nanostructured AACH was synthesized by hydrothermal treatment of the raw chemicals at 140 °C for 12 h and then calcined at different temperatures: AACH decomposed to an amorphous phase at 400 °C, transformed to δ* + α-alumina at 1000 °C, and finally crystalline α-alumina was achieved at 1200 °C. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were employed to identify the phases formed after calcination. The morphology of the samples was studied using scanning electron microscopy (SEM), which revealed that the AACH sample had a rod-like morphology that was retained in the calcined samples. THz-TDS measurements showed that AACH had the lowest refractive index in the measured frequency range; the refractive index at 0.1 THz increased from 2.41 for AACH to 2.58 for the amorphous phase and 2.87 for crystalline α-alumina. The real part of the complex permittivity increased with calcination temperature. Further, the absorption coefficient was highest for AACH and decreased with calcination temperature, the amorphous phase having a higher absorption coefficient than the crystalline alumina. - Highlights: • Aluminum oxide nanostructures were obtained by thermal decomposition of AACH. • Crystalline phases of aluminum oxide have a higher refractive index than the amorphous phase. • The removal of heavier ionic species led to lower absorption of THz radiation.
Multistage principal component analysis based method for abdominal ECG decomposition
International Nuclear Information System (INIS)
Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas
2015-01-01
A reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG, so the traditionally recommended independent component analysis fails to separate the two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio-cycles of the maternal ECG are the first steps; the energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, ensuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, in order to produce a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were an average score of 341.503 bpm² and 32.81 ms. (paper)
A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery
Directory of Open Access Journals (Sweden)
M. Babul Hasan
2007-01-01
The integrated fishery problem (IFP) can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques, including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning-horizon models are presented.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
Energy Technology Data Exchange (ETDEWEB)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Elizondo, Marcelo A.; Samaan, Nader A.
2017-10-19
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One contribution of the paper is the use of multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus from the electrical distances, so that algorithms such as K-means clustering can be applied. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
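The embedding-plus-clustering step can be sketched with classical MDS followed by a plain k-means on a toy distance matrix (the electrical distances themselves are assumed to come from the network model):

```python
import numpy as np

# Toy symmetric "electrical distance" matrix for 6 buses in 2 tight groups
# (assumption: distances are precomputed from the network)
D = np.array([
    [0, 1, 1, 8, 9, 8],
    [1, 0, 1, 9, 8, 9],
    [1, 1, 0, 8, 8, 8],
    [8, 9, 8, 0, 1, 1],
    [9, 8, 8, 1, 0, 1],
    [8, 9, 8, 1, 1, 0]], dtype=float)

# Classical MDS: Euclidean coordinates recovered from the distance matrix
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J                # double-centered squared distances
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1][:2]          # keep 2 embedding dimensions
coords = V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Plain k-means (k=2); centers seeded at two far-apart buses
centers = coords[[0, n - 1]]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(coords[:, None] - centers, axis=2), axis=1)
    centers = np.array([coords[labels == j].mean(axis=0) for j in range(2)])

print(labels)   # buses 0-2 and buses 3-5 land in different clusters
```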
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Directory of Open Access Journals (Sweden)
Weeraddana Chathuranga
2010-01-01
We consider the weighted sum-rate maximization problem in downlink orthogonal frequency division multiple access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems that can be solved independently. Numerical results compare the performance of the proposed algorithm to Lagrange-relaxation-based suboptimal methods as well as to the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
Directory of Open Access Journals (Sweden)
MOHAMED KEZZAR
2015-08-01
In this research, an efficient computational technique, a modified decomposition method, is proposed and successfully applied to the nonlinear problem of two-dimensional flow of an incompressible viscous fluid between nonparallel plane walls. The method gives the nonlinear term Nu and the solution of the studied problem as a power series. The proposed iterative procedure offers, on the one hand, a computationally efficient formulation with an accelerated convergence rate and, on the other hand, finds the solution without any discretization, linearization or restrictive assumptions. Comparison of our results with numerical treatments and other earlier works clearly shows the higher accuracy and efficiency of the modified decomposition method.
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using localized ensemble samples to localize Be through a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing C remains extremely high owing to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions, which avoids direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This is followed by 1-D spline interpolation to transform the decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing a Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. Its effectiveness, and an efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation, are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
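The payoff of the Kronecker structure is that eigendecomposing three small 1-D matrices diagonalizes the full 3-D correlation matrix, with no large eigensolve. A numpy check on toy Gaussian correlations (illustrative, not the paper's operational setup):

```python
import numpy as np

def corr1d(n, L):
    """1-D Gaussian correlation matrix (assumed separable model)."""
    i = np.arange(n)
    return np.exp(-((i[:, None] - i[None, :])**2) / (2.0 * L**2))

Cx, Cy, Cz = corr1d(4, 2.0), corr1d(3, 1.5), corr1d(2, 1.0)

# Full 3-D correlation matrix as a Kronecker product of the 1-D factors
C = np.kron(np.kron(Cx, Cy), Cz)          # 24 x 24

# Decompose each small factor; the Kronecker products of the factors'
# eigenvectors/eigenvalues diagonalize the big matrix
wx, Vx = np.linalg.eigh(Cx)
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)
V = np.kron(np.kron(Vx, Vy), Vz)
w = np.kron(np.kron(wx, wy), wz)

print(np.allclose(C, V @ np.diag(w) @ V.T))   # True
```

This follows from the mixed-product property (A ⊗ B)(C ⊗ D) = AC ⊗ BD, which is the identity the alternating-directions construction exploits.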
Calculation of shielding thickness by combining the LTSN and Decomposition methods
International Nuclear Information System (INIS)
Borges, Volnei; Vilhena, Marco T. de
1997-01-01
A combination of the LTSN and Decomposition methods for shielding thickness calculation is reported. The angular flux is evaluated by solving a transport problem in planar geometry, considering the SN approximation, anisotropic scattering and one energy group. The Laplace transform is applied to the set of SN equations. The transformed angular flux is obtained by solving a transcendental equation, and the angular flux is restored by the Heaviside expansion technique. The scalar flux is attained by integrating the angular flux with a Gaussian quadrature scheme. The scalar flux, in turn, is linearly related to the dose rate through the mass energy-absorption coefficient. The shielding thickness is obtained by solving a transcendental equation resulting from the application of the LTSN approach with the Decomposition method. Numerical simulations are reported. (author). 6 refs., 3 tabs
A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System
Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang
2018-01-01
This paper proposes a new decomposition method for security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is Ward-equivalencing-based network reduction, which reduces the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is a price response function that exchanges signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in fewer iterations and less computational time. Case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.
International Nuclear Information System (INIS)
Kaya, Dogan; El-Sayed, Salah M.
2003-01-01
In this Letter we present Adomian's decomposition method (ADM) for obtaining numerical soliton-like solutions of the potential Kadomtsev-Petviashvili (PKP) equation. We prove the convergence of the ADM and obtain exact and numerical solitary-wave solutions of the PKP equation for certain initial conditions. The ADM yields an analytic approximate solution with a fast convergence rate and higher accuracy than previous works. The numerical solutions are compared with the known analytical solutions.
Spectral element method for wave propagation on irregular domains
Indian Academy of Sciences (India)
Yan Hui Geng
2018-03-14
A spectral element approximation of acoustic propagation problems combined with a new mapping method on irregular domains is proposed. Following this method, the Gauss–Lobatto–Chebyshev nodes in the standard space are applied to the spectral element method (SEM). The nodes in the physical space are ...
A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.
Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V
2014-01-01
The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility
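At its core, image-domain material decomposition solves a small linear system per voxel: the measured attenuations equal a basis matrix times the concentration vector. A hedged sketch with made-up attenuation numbers (illustrative only, not calibrated values and not the MMD estimator itself):

```python
import numpy as np

# Hypothetical basis matrix: attenuation of (gadolinium, calcium, water)
# in 5 energy bins -- illustrative numbers only, not calibrated values
M = np.array([
    [8.0, 2.4, 0.30],
    [6.5, 1.9, 0.29],
    [9.5, 1.5, 0.28],   # k-edge bin: gadolinium attenuation jumps
    [7.0, 1.2, 0.27],
    [5.5, 1.0, 0.26]])

true_c = np.array([0.02, 0.10, 0.95])   # per-voxel concentrations (made up)
rng = np.random.default_rng(0)
measured = M @ true_c + 0.0005 * rng.standard_normal(5)

# Image-domain decomposition: per-voxel least-squares inversion
est, *_ = np.linalg.lstsq(M, measured, rcond=None)
print(est)   # approximately recovers true_c
```

Methods like MMD replace the plain least-squares step with model-based estimators that enforce physical constraints (e.g. nonnegative volume fractions), but the basis-matrix inversion is the common core.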
Mode decomposition methods for flows in high-contrast porous media. A global approach
Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.
2014-01-01
We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular, the slowly-decaying eigenmodes corresponding to largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.
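The core DMD computation described in this abstract can be sketched in a few lines. The snapshot data below are a synthetic toy (a diagonal linear system with known eigenvalues), not the paper's porous-media flows, and the rank choice is an assumption:

```python
# Exact DMD sketch: recover the eigenvalues of the best-fit linear
# operator A with X2 ≈ A X1 from a snapshot matrix.
import numpy as np

def dmd(X, r):
    """Snapshots X (n x m), truncation rank r.
    Returns DMD eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # r x r reduced operator
    evals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W             # exact DMD modes
    return evals, modes

# Toy linear system x_{k+1} = A x_k with eigenvalues 0.9 and 0.5.
A = np.array([[0.9, 0.0], [0.0, 0.5]])
X = np.empty((2, 20))
X[:, 0] = [1.0, 1.0]
for k in range(19):
    X[:, k + 1] = A @ X[:, k]
evals, modes = dmd(X, r=2)
print(sorted(np.round(evals.real, 6)))   # eigenvalues of A recovered
```

The slowly decaying modes the abstract emphasizes are exactly the eigenvalues of largest magnitude returned here.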
The use of Adomian decomposition method for solving problems in calculus of variations
Directory of Open Access Journals (Sweden)
Mehdi Dehghan
2006-01-01
In this paper, a numerical method is presented for finding the solution of some variational problems. The main objective is to solve the ordinary differential equation which arises from the variational problem. This is done using the Adomian decomposition method, a powerful tool for solving a large class of problems. In this approach, the solution is found in the form of a convergent power series with easily computed components. Numerical results are presented to show the efficiency of the method.
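As an illustration of the Adomian recursion for a linear problem, here is a minimal sketch for u' = u, u(0) = 1 (a textbook example, not one of the paper's variational problems): each component is the integral of the previous one, and the partial sums converge to exp(t).

```python
# Adomian decomposition for u'(t) = u(t), u(0) = 1:
#   u0 = 1,  u_{n+1}(t) = integral from 0 to t of u_n(s) ds.
# Polynomials are stored as coefficient lists [c0, c1, ...] (c_k * t^k).
import math

def integrate(poly):
    """Antiderivative with zero constant term."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(poly)]

def adomian_series(n_terms):
    terms, u = [], [1.0]          # u0 = 1
    for _ in range(n_terms):
        terms.append(u)
        u = integrate(u)          # u_{n+1} = integral of u_n
    total = [0.0] * max(len(t) for t in terms)
    for t in terms:               # sum the components
        for k, c in enumerate(t):
            total[k] += c
    return total

def horner(poly, t):
    acc = 0.0
    for c in reversed(poly):
        acc = acc * t + c
    return acc

u10 = adomian_series(10)
print(abs(horner(u10, 1.0) - math.e))   # small truncation error at t = 1
```

Ten components already reproduce exp(1) to a few parts in ten million, illustrating the "convergent series with easily computed components" claim.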
Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots
Directory of Open Access Journals (Sweden)
Ching-Long Shih
2012-08-01
This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.
Solving Hammerstein Type Integral Equation by New Discrete Adomian Decomposition Methods
Directory of Open Access Journals (Sweden)
Huda O. Bakodah
2013-01-01
New discrete Adomian decomposition methods are presented by using some identified Clenshaw-Curtis quadrature rules. We investigate two mixed quadrature rules, one of precision five and the other of precision seven. The first rule is formed by combining the Fejér second rule of precision three with the Simpson rule of precision three, while the second rule is formed by combining the Fejér second rule of precision five with the Boole rule of precision five. Our methods were applied to a nonlinear integral equation of the Hammerstein type, and some examples are given to illustrate their validity.
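The building blocks of such mixed rules are low-order quadrature formulas of known degree of precision. A sketch with two standard closed Newton-Cotes rules (Simpson, precision three, and Boole, precision five; the paper's Fejér-type components are not reproduced here):

```python
# Simpson's rule (3 nodes) and Boole's rule (5 nodes) on [a, b].
def simpson(f, a, b):
    h = (b - a) / 2.0
    return h / 3.0 * (f(a) + 4.0 * f(a + h) + f(b))

def boole(f, a, b):
    h = (b - a) / 4.0
    x = [a + i * h for i in range(5)]
    w = [7, 32, 12, 32, 7]
    return 2.0 * h / 45.0 * sum(wi * f(xi) for wi, xi in zip(w, x))

# Degree of precision: Simpson integrates t^3 exactly, Boole t^5.
exact = 1.0 / 6.0   # integral of t^5 over [0, 1]
print(abs(simpson(lambda t: t**5, 0, 1) - exact))  # visible error
print(abs(boole(lambda t: t**5, 0, 1) - exact))    # essentially zero
```

Mixing rules of different precision, as the abstract describes, raises the precision of the combined formula beyond either component.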
An optimized ensemble local mean decomposition method for fault detection of mechanical components
International Nuclear Information System (INIS)
Zhang, Chao; Chen, Shuai; Wang, Jianguo; Li, Zhixiong; Hu, Chao; Zhang, Xiaogang
2017-01-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions. (paper)
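The parameter-selection idea (score each candidate noise amplitude by a relative RMSE and keep the maximizer) can be sketched as follows. Since ELMD itself is beyond a short example, the decomposition is replaced by a moving-average stand-in; everything about the stand-in is an assumption, and only the selection loop mirrors the abstract:

```python
# Toy OELMD-style selection loop: scan candidate noise amplitudes and
# score each with a relative RMSE between the raw signal and the
# "first mode" of the noisy decomposition.
import math, random

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def relative_rmse(x, mode):
    return rms([a - b for a, b in zip(x, mode)]) / rms(x)

def moving_avg(x, w=6):
    out = []
    for i in range(len(x)):
        win = x[max(0, i - w + 1):i + 1]
        out.append(sum(win) / len(win))
    return out

def first_mode(x, noise_amp, seed=0):
    # Stand-in for ELMD's first mode: smooth the noise-injected signal.
    rng = random.Random(seed)
    noisy = [v + noise_amp * rng.gauss(0, 1) for v in x]
    return moving_avg(noisy)

signal = [math.sin(0.1 * i) for i in range(500)]
scores = {amp: relative_rmse(signal, first_mode(signal, amp))
          for amp in (0.0, 0.1, 0.2, 0.4)}
best = max(scores, key=scores.get)   # amplitude with maximum Relative RMSE
print(best, round(scores[best], 3))
```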
Mode decomposition methods for flows in high-contrast porous media. Global-local approach
Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.
2013-01-01
In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
Frequency-domain method for separating signal and noise
Institute of Scientific and Technical Information of China (English)
王正明; 段晓君
2000-01-01
A new method for separation of signal and noise (SSN) is put forward. Frequency is redefined according to the features of the signal and its derivative in the sampling time interval; a double orthogonal basis (DOB) is thus constructed so that a signal can be precisely represented by a linear combination of low-frequency DOB elements. Under joint consideration in the time domain (TD) and frequency domain (FD), a high-accuracy method for SSN is derived and a matched algorithm is designed and analyzed. This method is applicable to SSN in multiple frequency bands, and conveniently applies signal characteristics in TD and FD synthetically with higher accuracy.
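A generic frequency-domain split of this kind, keeping low-frequency content as signal and the remainder as noise, can be sketched with an FFT low-pass. This is an illustration only; the paper's double-orthogonal-basis construction is not reproduced:

```python
# FFT low-pass signal/noise separation sketch.
import numpy as np

def separate(x, keep):
    """Split x into (signal, noise) by keeping only the `keep`
    lowest-frequency FFT bins."""
    X = np.fft.rfft(x)
    Xs = np.zeros_like(X)
    Xs[:keep] = X[:keep]
    signal = np.fft.irfft(Xs, n=len(x))
    return signal, x - signal

n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * 3 * t / n)        # 3 cycles: low frequency
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(n)
sig, noise = separate(noisy, keep=8)
print(np.sqrt(np.mean((sig - clean) ** 2)))  # small residual error
```

Because only 8 of the 513 real-FFT bins are retained, most of the broadband noise power is rejected while the low-frequency signal passes through.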
A pseudospectral collocation time-domain method for diffractive optics
DEFF Research Database (Denmark)
Dinesen, P.G.; Hesthaven, J.S.; Lynov, Jens-Peter
2000-01-01
We present a pseudospectral method for the analysis of diffractive optical elements. The method computes a direct time-domain solution of Maxwell's equations and is applied to solving wave propagation in 2D diffractive optical elements. (C) 2000 IMACS. Published by Elsevier Science B.V. All rights reserved.
DRK methods for time-domain oscillator simulation
Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.
High-purity Cu nanocrystal synthesis by a dynamic decomposition method
Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui
2014-12-01
Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics is studied via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to controllable synthesis of Cu nanocrystals with high purity.
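Of the kinetic-analysis methods named above, the Kissinger method is the simplest to illustrate: plotting ln(beta/Tp^2) against 1/Tp over several heating rates beta gives a line of slope -Ea/R. Below is a sketch with synthetic peak temperatures generated from an assumed activation energy and pre-exponential factor (illustrative values, not the cupric tartrate data):

```python
# Kissinger method sketch: recover Ea from peak temperatures Tp(beta).
import math

R = 8.314      # J/(mol K)
Ea = 120e3     # assumed activation energy, J/mol
A = 1e12       # assumed pre-exponential factor, 1/s

def peak_temp(beta):
    # Solve the Kissinger condition Ea*beta/(R*Tp^2) = A*exp(-Ea/(R*Tp))
    # by fixed-point iteration on Tp = Ea / (R * ln(A R Tp^2 / (Ea beta))).
    Tp = 500.0
    for _ in range(100):
        Tp = Ea / (R * math.log(A * R * Tp**2 / (Ea * beta)))
    return Tp

betas = [2, 5, 10, 20]                             # heating rates, K/min
xs = [1.0 / peak_temp(b / 60.0) for b in betas]    # beta converted to K/s
ys = [math.log((b / 60.0) / peak_temp(b / 60.0) ** 2) for b in betas]

# Linear least-squares slope of ys vs xs gives -Ea/R.
n, sx, sy = len(xs), sum(xs), sum(ys)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
        (n * sum(x * x for x in xs) - sx * sx)
print(round(-slope * R / 1000.0, 1))               # recovered Ea, kJ/mol
```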
Spectral decomposition in advection-diffusion analysis by finite element methods
International Nuclear Information System (INIS)
Nickell, R.E.; Gartling, D.K.; Strang, G.
1978-01-01
In a recent study of the convergence properties of finite element methods in nonlinear fluid mechanics, an indirect approach was taken. A two-dimensional example with a known exact solution was chosen as the vehicle for the study, and various mesh refinements were tested in an attempt to extract information on the effect of the local Reynolds number. However, more direct approaches are usually preferred. In this study one such direct approach is followed, based upon the spectral decomposition of the solution operator. Spectral decomposition is widely employed as a solution technique for linear structural dynamics problems and can be applied readily to linear, transient heat transfer analysis; in this case, the extension to nonlinear problems is of interest. It was shown previously that spectral techniques were applicable to stiff systems of rate equations, while recent studies of geometrically and materially nonlinear structural dynamics have demonstrated the increased information content of the numerical results. The use of spectral decomposition in nonlinear problems of heat and mass transfer would be expected to yield equally increased flow of information to the analyst, and this information could include a quantitative comparison of various solution strategies, meshes, and element hierarchies.
International Nuclear Information System (INIS)
Goncalves, G.A.; Bogado Leite, S.Q.; Vilhena, M.T. de
2009-01-01
An analytical solution has been obtained for the one-speed stationary neutron transport problem in an infinitely long cylinder with anisotropic scattering by the decomposition method. Series expansions of the angular flux distribution are proposed in terms of suitably constructed functions, recursively obtainable from the isotropic solution, to take anisotropy into account. As for the isotropic problem, an accurate closed-form solution was chosen for the problem with internal source and constant incident radiation, obtained from an integral transformation technique and the F_N method.
Directory of Open Access Journals (Sweden)
Zhao-Qing Wang
2014-01-01
Embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on the irregular doubly connected domain in a polar coordinate system. The formulations of the regular domain collocation method are constructed by using the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain, and an additional method is used to impose them. The least squares method can be used to solve the resulting overconstrained equations. The function values of points in the irregular doubly connected domain can then be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
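The interpolation kernel behind such collocation schemes is the barycentric Lagrange formula, sketched here on Chebyshev points in 1D (a generic illustration, not the paper's polar-coordinate formulation):

```python
# Barycentric Lagrange interpolation on Chebyshev points.
import math

def cheb_points(n, a=-1.0, b=1.0):
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / n)
            for k in range(n + 1)]

def bary_weights(xs):
    w = []
    for j, xj in enumerate(xs):
        p = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                p *= (xj - xk)
        w.append(1.0 / p)
    return w

def bary_eval(x, xs, w, f):
    num = den = 0.0
    for xj, wj, fj in zip(xs, w, f):
        if x == xj:
            return fj            # evaluation at a node is exact
        c = wj / (x - xj)
        num += c * fj
        den += c
    return num / den

xs = cheb_points(20)
w = bary_weights(xs)
f = [math.exp(x) for x in xs]
print(abs(bary_eval(0.3, xs, w, f) - math.exp(0.3)))  # near machine precision
```

For smooth functions on Chebyshev points the interpolant converges spectrally, which is the source of the "highly accurate" claim in the abstract.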
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks, and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
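The residual-feedback idea can be illustrated on a synthetic doublet with known peak centers and widths, so that each pass reduces to a simple projection. This is a deliberate simplification of the paper's full curve fitting, and all peak parameters below are made up:

```python
# Overlapping-peak decomposition with residual feedback: fit one peak,
# fit the residual with the other, then alternate until the amplitudes
# converge (Gauss-Seidel on the normal equations).
import math

def gauss(x, mu, sigma=0.35):
    return [math.exp(-0.5 * ((xi - mu) / sigma) ** 2) for xi in x]

def proj(g, r):   # least-squares amplitude of shape g in residual r
    return sum(gi * ri for gi, ri in zip(g, r)) / sum(gi * gi for gi in g)

x = [321 + 6 * i / 599 for i in range(600)]
g1, g2 = gauss(x, 323.8), gauss(x, 324.6)
y = [2.0 * a + 1.2 * b for a, b in zip(g1, g2)]    # overlapping doublet

a1 = proj(g1, y)                                   # first fit: contaminated
a2 = proj(g2, [yi - a1 * gi for yi, gi in zip(y, g1)])
for _ in range(10):                                # error-compensation passes
    a1 = proj(g1, [yi - a2 * gi for yi, gi in zip(y, g2)])
    a2 = proj(g2, [yi - a1 * gi for yi, gi in zip(y, g1)])
print(round(a1, 3), round(a2, 3))                  # converges to 2.0 and 1.2
```

The first single-peak fit is biased by the overlap; feeding the residual back and refitting removes that bias geometrically fast.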
International Nuclear Information System (INIS)
Sergienko, I.V.; Golodnikov, A.N.
1984-01-01
This article applies decomposition methods, which are used to solve continuous linear problems, to integer and partially integer problems. The fall-vector method is used to solve the resulting coordinate problems; an algorithm for this method is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of smaller dimension and to a discrete coordinate problem with simple constraints.
Hybrid subgroup decomposition method for solving fine-group eigenvalue transport problems
International Nuclear Information System (INIS)
Yasseri, Saam; Rahnema, Farzad
2014-01-01
Highlights: • An acceleration technique for solving fine-group eigenvalue transport problems. • Coarse-group quasi transport theory to solve coarse-group eigenvalue transport problems. • Consistent and inconsistent formulations for coarse-group quasi transport theory. • Computational efficiency amplified by a factor of 2 using hybrid SGD for 1D BWR problem. - Abstract: In this paper, a new hybrid method for solving fine-group eigenvalue transport problems is developed. This method extends the subgroup decomposition method to efficiently couple a new coarse-group quasi transport theory with a set of fixed-source transport decomposition sweeps to obtain the fine-group transport solution. The advantages of the quasi transport theory are its high accuracy, straightforward implementation and numerical stability. The hybrid method is analyzed for a 1D benchmark problem characteristic of boiling water reactors (BWR). It is shown that the method reproduces the fine-group transport solution with high accuracy while increasing the computational efficiency up to 12 times compared to direct fine-group transport calculations.
CO2-laser decomposition method of carbonate for AMS 14C measurements
International Nuclear Information System (INIS)
Kitagawa, Hiroyuki
2013-01-01
A CO2-laser decomposition method enabled the efficient preparation of carbonate samples for AMS 14C measurement. Samples were loaded in a vacuum chamber and thermally decomposed using laser emission. CO2 liberated from the carbonate was directly trapped in the cold finger trap of a small CO2 reduction reactor and graphitized by a hydrogen gas reduction method using catalytic iron powder. The fraction modern values for 0.07–0.57 mg of carbon, obtained from 200 μm-diameter spots of IAEA-C1, varied with sample size in the range of 0.00072 ± 0.00003 to 0.00615 ± 0.00052. The contamination induced by the laser decomposition method and the following graphite handling was estimated to be 0.53 ± 0.21 μg of modern carbon, assuming a constant amount of extraneous carbon contamination. This method could also make it possible to avoid the time-consuming procedures of the conventional acid dissolution method that involves multiple complex steps for the preparation of carbonate samples.
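The constant-contamination assumption in the last step corresponds to a simple mass balance between sample carbon and a fixed modern-carbon blank. A sketch using the abstract's 0.53 μg estimate (the blank fraction-modern value of 1.0 and the sample sizes are assumptions for illustration):

```python
# Constant-blank correction model: the measured fraction modern is a
# mass-weighted mix of the sample and a fixed modern-carbon blank.
def measured_fm(fm_true, m_sample_ug, m_blank_ug=0.53, fm_blank=1.0):
    return ((fm_true * m_sample_ug + fm_blank * m_blank_ug)
            / (m_sample_ug + m_blank_ug))

# A radiocarbon-dead sample (Fm = 0) measured at two sizes: the smaller
# sample shows a larger apparent fraction modern, matching the size
# dependence reported for IAEA-C1.
print(round(measured_fm(0.0, 70.0), 5), round(measured_fm(0.0, 570.0), 5))
```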
Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.
2017-08-01
Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by the imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis material whose spectrum is closest to the measurement, using a maximum likelihood criterion assuming a Poisson law distribution of photon counts for each energy bin. The method was used with a ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a Polyethylene and Polyvinyl Chloride base. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent lengths makes it possible to process overlapped materials. Moreover, the method was tested with a 3 materials base by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show
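The maximum-likelihood decomposition step can be sketched for a two-bin, two-material toy. All attenuation values and incident counts below are made up, not the ME100 calibration, and a grid search stands in for the real optimizer:

```python
# Basis-material decomposition by Poisson maximum likelihood:
# expected counts per energy bin follow Beer-Lambert in the two
# equivalent thicknesses (t_PE, t_PVC).
import math

N0 = [1.0e5, 8.0e4]                  # incident counts per bin (assumed)
mu = [[0.20, 0.05],                  # bin 0: [mu_PE, mu_PVC] per mm (assumed)
      [0.10, 0.15]]                  # bin 1

def expected(t_pe, t_pvc):
    return [N0[b] * math.exp(-mu[b][0] * t_pe - mu[b][1] * t_pvc)
            for b in range(2)]

def loglik(y, lam):                  # Poisson log-likelihood, constants dropped
    return sum(yi * math.log(li) - li for yi, li in zip(y, lam))

truth = (10.0, 4.0)                  # mm of PE and PVC
y = expected(*truth)                 # noise-free "measurement"

grid = [round(i * 0.1, 1) for i in range(201)]   # 0..20 mm
best = max(((t1, t2) for t1 in grid for t2 in grid),
           key=lambda p: loglik(y, expected(*p)))
print(best)   # ML estimate recovers the true thicknesses
```

With more energy bins the same likelihood simply gains more terms, which is how the method exploits the detector's full energy resolution.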
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose a zero-phase filter bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparisons are made with empirical mode decomposition algorithms.
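In the same spirit, a fixed (non-adaptive) Fourier band decomposition shows how a signal splits into band-limited components that sum back to the original; the adaptive FIBF construction and cut-off selection of the paper are not reproduced here:

```python
# Split a signal into band-limited components via disjoint FFT bands.
import numpy as np

def band_decompose(x, edges):
    X = np.fft.rfft(x)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xi = np.zeros_like(X)
        Xi[lo:hi] = X[lo:hi]
        comps.append(np.fft.irfft(Xi, n=len(x)))
    return comps

n = 512
t = np.arange(n) / n
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
comps = band_decompose(x, [0, 32, 257])   # two bands: bins <32, the rest
print(np.allclose(sum(comps), x))         # exact reconstruction
```

Because the bands are disjoint and cover the whole spectrum, the components reconstruct the signal exactly; the FDM's contribution is choosing the band edges adaptively.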
Solution of the isotopic depletion equation using decomposition method and analytical solution
Energy Technology Data Exchange (ETDEWEB)
Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: fprata@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear
2011-07-01
In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code, that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, describing the spatial distribution of neutron flux inside the reactor core. Because isotopic depletion calculation module is the most computationally intensive process within simulation systems of nuclear reactor core, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
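The analytical ingredient of such depletion solvers is the Bateman solution of a decay/transmutation chain. A two-member sketch in the textbook form (not the paper's (n,2n)-feedback chain), cross-checked against a fine explicit-Euler integration with assumed rate constants:

```python
# Analytical solution of a two-member chain A -> B -> (out):
#   dN1/dt = -l1*N1,  dN2/dt = l1*N1 - l2*N2.
import math

def bateman2(n1_0, n2_0, l1, l2, t):
    n1 = n1_0 * math.exp(-l1 * t)
    n2 = (l1 * n1_0 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
          + n2_0 * math.exp(-l2 * t))
    return n1, n2

# Cross-check against a fine explicit-Euler integration.
l1, l2, T = 0.3, 0.1, 5.0
steps = 50000
dt = T / steps
n1, n2 = 1.0, 0.0
for _ in range(steps):
    n1, n2 = n1 - l1 * n1 * dt, n2 + (l1 * n1 - l2 * n2) * dt
print(bateman2(1.0, 0.0, l1, l2, T), (n1, n2))  # closely matching pairs
```

The closed form is exact and costs a couple of exponentials, which is why analytical treatment of the chain pays off inside a depletion module called at every core state.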
Calculation and decomposition of spot price using interior point nonlinear optimisation methods
International Nuclear Information System (INIS)
Xie, K.; Song, Y.H.
2004-01-01
Optimal pricing for real and reactive power is a very important issue in a deregulated environment. This paper formulates the optimal pricing problem as an extended optimal power flow problem. Spot prices are then decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. A primal-dual interior point method is applied to avoid the 'go/no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)
Adomian Decomposition Method for Transient Neutron Transport with Pomraning-Eddington Approximation
International Nuclear Information System (INIS)
Hendi, A.A.; Abulwafa, E.E.
2008-01-01
The time-dependent neutron transport problem is approximated using the Pomraning-Eddington approximation. This is a two-flux approximation that expands the angular intensity in terms of the energy density and the net flux, and it converts the integro-differential Boltzmann equation into two first-order differential equations. The Adomian decomposition method, which is used to solve linear or nonlinear differential equations, is applied to the resultant two differential equations to find the neutron energy density and net flux, from which the neutron angular intensity can be calculated through the Pomraning-Eddington approximation.
Controlled decomposition and oxidation: A treatment method for gaseous process effluents
Mckinley, Roger J. B., Sr.
1990-01-01
The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience, and results of the use of CDO equipment and technological advances gained from the field experiences are discussed.
Methods of use of cellulose binding domain proteins
Shoseyov, Oded; Shpiegl, Itai; Goldstein, Marc A.; Doi, Roy H.
1997-01-01
A cellulose binding domain (CBD) having a high affinity for crystalline cellulose and chitin is disclosed, along with methods for the molecular cloning and recombinant production thereof. Fusion products comprising the CBD and a second protein are likewise described. A wide range of applications are contemplated for both the CBD and the fusion products, including drug delivery, affinity separations, and diagnostic techniques.
Methods of detection using a cellulose binding domain fusion product
Shoseyov, Oded; Shpiegl, Itai; Goldstein, Marc A.; Doi, Roy H.
1999-01-01
A cellulose binding domain (CBD) having a high affinity for crystalline cellulose and chitin is disclosed, along with methods for the molecular cloning and recombinant production thereof. Fusion products comprising the CBD and a second protein are likewise described. A wide range of applications are contemplated for both the CBD and the fusion products, including drug delivery, affinity separations, and diagnostic techniques.
A Frequency Domain Design Method For Sampled-Data Compensators
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Jannerup, Ole Erik
1990-01-01
A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...
Rothe's method for parabolic equations on non-cylindrical domains
Czech Academy of Sciences Publication Activity Database
Dasht, J.; Engström, J.; Kufner, Alois; Persson, L.E.
2006-01-01
Roč. 1, č. 1 (2006), s. 59-80 ISSN 0973-2306 Institutional research plan: CEZ:AV0Z10190503 Keywords : parabolic equations * non-cylindrical domains * Rothe's method * time-discretization Subject RIV: BA - General Mathematics
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by a post hoc Bonferroni test was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for only a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments, but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV
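The time-domain HRV features used above are not enumerated in the abstract; the sketch below assumes the three most standard ones (SDNN, RMSSD, pNN50), computed from a synthetic RR-interval segment.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Common time-domain HRV features from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                       # overall variability
        "RMSSD": np.sqrt(np.mean(diff ** 2)),         # short-term variability
        "pNN50": 100.0 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
    }

# Example: a synthetic 5-min segment of RR intervals around 800 ms (~375 beats)
rng = np.random.default_rng(0)
rr = 800 + rng.normal(0, 40, size=375)
features = hrv_time_domain(rr)
```

The nonlinear-dynamics and time-frequency feature sets of the study would be layered on top of such a baseline.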
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization procedure can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
International Nuclear Information System (INIS)
Rahim, Ismail; Nomura, Shinfuku; Mukasa, Shinobu; Toyota, Hiromichi
2015-01-01
This research involves two in-liquid plasma methods of methane hydrate decomposition, one using radio frequency (RF) wave irradiation and the other microwave (MW) radiation. The ultimate goal of this research is to develop a practical process for decomposition of methane hydrate directly at the subsea site for fuel gas production. The mechanism for methane hydrate decomposition begins with the dissociation of the hydrate formed by CH_4 and water. The process continues with the simultaneously occurring steam methane reforming process and methane cracking reaction, during which the released CH_4 is converted into H_2, CO and other by-products. It was found that methane hydrate can be decomposed with a faster rate of CH_4 release under microwave irradiation than under radio frequency irradiation. However, the radio frequency plasma method produces hydrogen with a purity of 63.1% and a CH_4 conversion ratio of 99.1%, which is higher than the microwave plasma method, which produces hydrogen with a purity of 42.1% and a CH_4 conversion ratio of 85.5%. - Highlights: • The decomposition of methane hydrate is proposed using the plasma in-liquid method. • Synthetic methane hydrate is used as the sample for decomposition in plasma. • Hydrogen can be produced from decomposition of methane hydrate. • Hydrogen purity is higher when using radio frequency stimulation.
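The mechanism described above can be summarized by the overall reactions below (standard stoichiometry, assumed here rather than quoted from the paper; n denotes the hydration number of the hydrate):

```latex
\begin{align*}
\mathrm{CH_4}\!\cdot\! n\,\mathrm{H_2O} &\longrightarrow \mathrm{CH_4} + n\,\mathrm{H_2O} && \text{(hydrate dissociation)}\\
\mathrm{CH_4} + \mathrm{H_2O} &\longrightarrow \mathrm{CO} + 3\,\mathrm{H_2} && \text{(steam methane reforming)}\\
\mathrm{CH_4} &\longrightarrow \mathrm{C} + 2\,\mathrm{H_2} && \text{(methane cracking)}
\end{align*}
```

Both reforming and cracking yield H_2, which is consistent with the hydrogen purities reported for the two plasma methods.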
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
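A minimal sketch of the LULHS idea, under assumptions not stated in the abstract: independent standard normals are drawn by Latin hypercube stratification (one value per probability stratum, here at stratum midpoints), then correlated through the lower-triangular (Cholesky/LU) factor of an exponential covariance model. Function and parameter names are illustrative.

```python
import numpy as np
from statistics import NormalDist  # stdlib inverse normal CDF

def lu_lhs_field(x, n_real, corr_len, seed=0):
    """Unconditional Gaussian random fields via LHS plus LU (Cholesky) decomposition."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z = np.empty((n, n_real))
    for i in range(n):
        # Latin hypercube: one standard normal per stratum, in random order
        p = (rng.permutation(n_real) + 0.5) / n_real
        z[i] = [NormalDist().inv_cdf(v) for v in p]
    # Exponential covariance model over the 1-D grid and its triangular factor
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    return L @ z  # each column is one correlated realization

x = np.linspace(0, 10, 50)
fields = lu_lhs_field(x, n_real=200, corr_len=2.0)
```

Stratifying the underlying normals is what lets fewer realizations reproduce the target statistics, which is the efficiency gain the abstract reports.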
Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising
Directory of Open Access Journals (Sweden)
Yan-Fang Sang
2010-06-01
Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE, and the results are compared and discussed. Then, on the basis of these analytic results, a method for use in choosing the decomposition level (DL in wavelet threshold de-noising (WTD is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, unlike traditional white-noise testing, which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signal from noise in noisy series; the chosen DL is therefore reliable, and the WTD results of time series can be improved.
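The wavelet energy entropy underlying the method can be sketched as follows: compute detail energies level by level and take the Shannon entropy of the normalized energy distribution. This toy version uses a hand-rolled Haar transform (the paper does not specify the wavelet); the entropy profile across candidate levels would then guide the DL choice.

```python
import numpy as np

def haar_detail_energies(x, max_level):
    """Energy of the Haar detail coefficients at each decomposition level."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(max_level):
        a2 = a[: len(a) // 2 * 2].reshape(-1, 2)
        d = (a2[:, 0] - a2[:, 1]) / np.sqrt(2)  # detail coefficients
        a = (a2[:, 0] + a2[:, 1]) / np.sqrt(2)  # approximation carried down a level
        energies.append(np.sum(d ** 2))
    return np.array(energies)

def wavelet_energy_entropy(x, max_level=6):
    """Shannon entropy of the level-wise detail-energy distribution (WEE)."""
    e = haar_detail_energies(x, max_level)
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
wee_noise = wavelet_energy_entropy(rng.normal(size=1024))
```

For white noise the detail energy decays geometrically with level, giving a characteristic entropy that a signal-dominated series deviates from.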
Directory of Open Access Journals (Sweden)
Xiao-Ying Qin
2014-01-01
Full Text Available An Adomian decomposition method (ADM is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
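ADM's core recursion can be shown on a toy problem rather than the Stefan model itself: for y' = y² with y(0) = 1, the Adomian polynomials of N(y) = y² reduce to a Cauchy product, and the partial sums reproduce the Taylor series of the exact solution 1/(1 − t).

```python
# Polynomials in t are represented as coefficient lists: p[k] is the coefficient of t**k.
def integrate(p):
    """Indefinite integral of a polynomial, evaluated from 0 to t."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adomian_terms(n_terms):
    """ADM series terms for y' = y**2, y(0) = 1."""
    ys = [[1.0]]  # y0 = y(0)
    for n in range(n_terms - 1):
        # Adomian polynomial A_n for N(y) = y^2 is the Cauchy product sum_k y_k * y_{n-k}
        deg = max(len(ys[k]) + len(ys[n - k]) - 1 for k in range(n + 1))
        A = [0.0] * deg
        for k in range(n + 1):
            for i, a in enumerate(ys[k]):
                for j, b in enumerate(ys[n - k]):
                    A[i + j] += a * b
        ys.append(integrate(A))  # y_{n+1} = integral of A_n from 0 to t
    return ys

def partial_sum(ys, t):
    return sum(c * t ** k for p in ys for k, c in enumerate(p))

ys = adomian_terms(10)
approx = partial_sum(ys, 0.5)  # exact solution 1/(1 - t) gives 2 at t = 0.5
```

Here each term comes out as y_n = t^n, so the fast convergence the paper observes for |t| < 1 is visible directly; the Stefan problem adds the moving-boundary coupling on top of this machinery.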
An integrated condition-monitoring method for a milling process using reduced decomposition features
International Nuclear Information System (INIS)
Liu, Jie; Wu, Bo; Hu, Youmin; Wang, Yan
2017-01-01
Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification. (paper)
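The Shannon power spectral entropy used as a feature above can be sketched directly (the variational-mode-decomposition front end is omitted; a narrow-band "stable" signal and a broadband "chatter-like" signal stand in for decomposed modes):

```python
import numpy as np

def power_spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum of a signal."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 1024, endpoint=False)
stable = np.sin(2 * np.pi * 50 * t)                   # narrow-band: low entropy
chatter = np.random.default_rng(2).normal(size=1024)  # broadband: high entropy
```

A stable cut concentrates spectral power in few bins (low entropy), while chatter spreads it (high entropy), which is why these entropies separate the machine states before PCA and the neural classifier.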
Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu
2017-12-01
The proper orthogonal decomposition (POD) method is a main and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the local-property weakness of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantage of the new method and the disadvantage of the ITGM method. Comparisons of the responses verify the accuracy and robustness of the adaptive POD method, and its computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide parameter range.
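The basic POD step that all of these variants build on can be sketched via the SVD of a snapshot matrix; the Grassmann-manifold interpolation then operates on bases like `Phi` computed at different parameter values. This toy field is driven by exactly two spatial modes, so POD recovers a rank-2 basis.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix (columns = states at different times/params)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1  # smallest r capturing the energy
    return U[:, :r], s

# Snapshots of a field driven by two spatial modes -> POD recovers rank 2
x = np.linspace(0, np.pi, 100)
t = np.linspace(0, 10, 60)
X = np.outer(np.sin(x), np.cos(t)) + 0.3 * np.outer(np.sin(2 * x), np.sin(3 * t))
Phi, s = pod_basis(X)
```

Projecting the full equations onto `Phi` yields the reduced-order model; the robustness question the paper addresses is how such a basis degrades away from the parameter value where the snapshots were taken.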
Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods
Directory of Open Access Journals (Sweden)
Asieh Mansouri
2018-01-01
Full Text Available Background Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA that was measured using LogMAR (logarithm of the minimum angle of resolution. The living standard variable used for estimation of inequality was the economic status and was constructed by principal component analysis on home assets. Inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212. The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors of inequality in both the concentration index and Blinder-Oaxaca decomposition. Percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. Conclusion This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors of this inequality were similar in the concentration index and Blinder-Oaxaca decomposition. So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people
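The concentration index reported above can be computed with the standard "convenient covariance" formula, CI = 2·cov(h, R)/mean(h), where R is the fractional economic rank. A minimal sketch with synthetic data (worse LogMAR, i.e. poorer vision, concentrated among the poor gives a negative CI, matching the sign of the paper's -0.245):

```python
import numpy as np

def concentration_index(health, living_standard):
    """CI via the convenient covariance formula: 2 * cov(h, fractional rank) / mean(h)."""
    h = np.asarray(health, float)
    rank = np.argsort(np.argsort(living_standard))  # 0..n-1, poorest first
    frac = (rank + 0.5) / len(h)                    # fractional rank in (0, 1)
    return 2.0 * np.cov(h, frac, bias=True)[0, 1] / h.mean()

# Synthetic example: higher LogMAR (worse vision) among lower-wealth individuals
rng = np.random.default_rng(3)
wealth = rng.random(1000)
logmar = 0.5 - 0.4 * wealth + rng.normal(0, 0.05, 1000)
ci = concentration_index(logmar, wealth)
```

Decomposition then attributes this CI to covariates (education, age, etc.) via their regression coefficients and their own concentration indices.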
Carpentier, Pierre-Luc
In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high-dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation on the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic penalty terms must be included in the subproblems' objective functions to penalize any violation of the NACs. An important limitation of the PHA is due to the fact that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation of the PHA is caused by the fact that the difficulty level of NACs generally increases as the variability of scenarios increases. Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions that are characterized by a high
A new decomposition method for parallel processing multi-level optimization
International Nuclear Information System (INIS)
Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon
2002-01-01
In practical designs, most multidisciplinary problems involve large and complicated design systems. Since multidisciplinary problems have hundreds of analyses and thousands of variables, the grouping of analyses and the order of the analyses within each group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as the Collaborative Optimization (CO) and Individual Discipline Feasible (IDF) methods. Numerical results for two example problems are presented to show the feasibility of the proposed method.
A Walking Method for Non-Decomposition Intersection and Union of Arbitrary Polygons and Polyhedrons
Energy Technology Data Exchange (ETDEWEB)
Graham, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Yao, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-28
We present a method for computing the intersection and union of non-convex polyhedrons without decomposition in O(n log n) time, where n is the total number of faces of both polyhedrons. We include an accompanying Python package which addresses many of the practical issues associated with implementation and serves as a proof of concept. The key to the method is that by considering the edges of the original objects and the intersections between faces as walking routes, we can efficiently find the boundary of the intersection of arbitrary objects using directional walks, thus handling the concave case in a natural manner. The method also easily extends to plane slicing and non-convex polyhedron unions, and both the polyhedron and its constituent faces may be non-convex.
Application of empirical mode decomposition method for characterization of random vibration signals
Directory of Open Access Journals (Sweden)
Setyamartana Parman
2016-07-01
Full Text Available Characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for characterization of measured random vibration signals, where the approach rests on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component, the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component, and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.
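EMD's sifting procedure can be sketched compactly. This is a deliberately simplified version: envelopes are built by linear interpolation of the extrema rather than the cubic splines of standard EMD, and the number of sifting passes is fixed. By construction the IMFs plus the residue reconstruct the signal exactly.

```python
import numpy as np

def emd(x, max_imfs=5, n_sift=10):
    """Simplified EMD: linear-interpolation envelopes instead of cubic splines."""
    def extrema(y, comp):
        # interior indices where y is a strict local max (np.greater) or min (np.less)
        idx = np.where(comp(y[1:-1] - y[:-2], 0) & comp(y[1:-1] - y[2:], 0))[0] + 1
        return np.concatenate(([0], idx, [len(y) - 1]))  # pin the endpoints
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(n_sift):
            mx, mn = extrema(h, np.greater), extrema(h, np.less)
            if len(mx) < 4 or len(mn) < 4:
                break  # too few extrema left to sift
            t = np.arange(len(h))
            mean = (np.interp(t, mx, h[mx]) + np.interp(t, mn, h[mn])) / 2
            h = h - mean  # subtract the local envelope mean
        imfs.append(h)
        residue = residue - h
    return imfs, residue

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)  # fast + slow components
imfs, residue = emd(x)
```

In the paper's setting the first IMF of a measured record would play the role of the WFR and the second that of the LFR, with the residue giving the trend.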
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Directory of Open Access Journals (Sweden)
Alpo Värri
2007-01-01
Full Text Available As we know, singular value decomposition (SVD is designed for computing singular values (SVs of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD.’’ In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal. This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs for ballistocardiogram (BCG data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
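The abstract describes the fixed-structure matrix only loosely; the sketch below assumes a hypothetical 2-by-4 layout holding the first four statistical moments of the time series and of its magnitude spectrum, whose singular values then serve as features (two SVs instead of the single SV a raw 1-by-m array would give).

```python
import numpy as np

def tfm_svd_features(x):
    """Hypothetical TFM-SVD: moments of time and frequency series -> singular values."""
    def moments(v):
        m, s = v.mean(), v.std()
        z = (v - m) / s if s > 0 else v * 0
        return [m, s, np.mean(z ** 3), np.mean(z ** 4)]  # mean, std, skew, kurtosis
    M = np.array([moments(x),                       # time-domain moment row
                  moments(np.abs(np.fft.rfft(x)))]) # frequency-domain moment row
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(4)
sig = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.normal(size=500)
svs = tfm_svd_features(sig)
```

Such a feature vector would feed the ANN clustering stage described in the study.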
Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.
Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar
2017-04-22
Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA) that was measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for estimation of inequality was the economic status and was constructed by principal component analysis on home assets. Inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors of inequality in both the concentration index and Blinder-Oaxaca decomposition. Percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors of this inequality were similar in the concentration index and Blinder-Oaxaca decomposition. So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people with low economic status, formulating policies to address
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence and nuclear methods discussed. © 1992.
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Energy Technology Data Exchange (ETDEWEB)
Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States)
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time is possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods, GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
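The idea of inexact subdomain solves can be illustrated with a toy sketch that is not the paper's Schur-complement/interface-probing method: an overlapping additive Schwarz preconditioner on a 1-D Poisson problem, where each subdomain solve is performed by a fixed number of CG iterations. Shrinking `inner_iters` makes the subdomain solves inexact, which is precisely the cost/accuracy trade-off discussed above.

```python
import numpy as np

def cg(A, b, iters):
    """Plain conjugate gradients; a fixed iteration count gives an inexact solve."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < 1e-12:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Poisson test matrix and two overlapping subdomains
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subs = [np.arange(0, 40), np.arange(24, 64)]

def as_precond(r, inner_iters=40):
    """Additive Schwarz preconditioner with Krylov (CG) subdomain solves."""
    z = np.zeros_like(r)
    for idx in subs:
        z[idx] += cg(A[np.ix_(idx, idx)], r[idx], inner_iters)
    return z

# Outer preconditioned CG on A x = b
b = np.ones(n)
x = np.zeros(n); r = b.copy(); z = as_precond(r); p = z.copy(); rz = r @ z
for _ in range(100):
    if np.linalg.norm(r) < 1e-10:
        break
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    z = as_precond(r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
```

Here `inner_iters=40` gives near-exact subdomain solves; lowering it saves inner work per application at the cost of a weaker preconditioner, mirroring the flexibility the abstract highlights.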
Application of decomposition method and inverse prediction of parameters in a moving fin
International Nuclear Information System (INIS)
Singla, Rohit K.; Das, Ranjan
2014-01-01
Highlights: • Adomian decomposition is used to study a moving fin. • Effects of different parameters on the temperature and efficiency are studied. • A binary-coded GA is used to solve an inverse problem. • Sensitivity analyses of important parameters are carried out. • Measurement error up to 8% is found to be tolerable. - Abstract: The application of the Adomian decomposition method (ADM) is extended to study a conductive–convective and radiating moving fin with variable thermal conductivity. Next, through an inverse approach, ADM in conjunction with a binary-coded genetic algorithm (GA) is applied to estimate unknown properties that satisfy a given temperature distribution. ADM being one of the widely used numerical methods for solving nonlinear equations, the required temperature field is obtained with a forward method involving ADM. In the forward problem, the temperature field and efficiency are investigated for various parameters such as the convection–conduction parameter, radiation–conduction parameter, Peclet number, convection sink temperature, radiation sink temperature, and dimensionless thermal conductivity. Additionally, in the inverse problem, the effects of random measurement errors, the iterative variation of parameters, and the sensitivity coefficients of the unknown parameters are investigated. The performance of the GA is compared with that of a few other optimization methods and for different temperature measurement points. It is found from the present study that the results obtained from ADM are in good agreement with the results of the differential transformation method available in the literature. It is also observed that for satisfactory reconstruction of the temperature field, the measurement error should be within 8%, and the temperature field depends more strongly on the speed of the moving fin than on its thermal parameters.
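As a minimal illustration of how ADM builds a series solution (using sympy and the textbook problem u' = 1 + u², u(0) = 0, whose exact solution is tan(t) — not the fin equation of the paper):

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_polynomials(f, u_terms, lam):
    """A_n = (1/n!) d^n/dlam^n f(sum_k lam^k u_k) evaluated at lam = 0."""
    u_lam = sum(lam**k * u_k for k, u_k in enumerate(u_terms))
    return [sp.diff(f(u_lam), lam, n).subs(lam, 0) / sp.factorial(n)
            for n in range(len(u_terms))]

# Model problem: u' = 1 + u^2, u(0) = 0.
# ADM: u0 = integral of 1 = t, and u_{n+1} = integral of A_n,
# where A_n are the Adomian polynomials of the nonlinearity f(u) = u^2.
f = lambda u: u**2
u = [t]
for n in range(3):
    A = adomian_polynomials(f, u, lam)
    u.append(sp.integrate(sp.expand(A[n]), (t, 0, t)))

approx = sp.expand(sum(u))
print(approx)   # t + t**3/3 + 2*t**5/15 + 17*t**7/315
```

The four-term partial sum reproduces the Maclaurin series of tan(t) through order t⁷, which is the usual sanity check for an ADM implementation.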
Determination of boron in graphite by a wet oxidation decomposition/curcumin photometric method
International Nuclear Information System (INIS)
Watanabe, Kazuo; Toida, Yukio
1995-01-01
The wet oxidation decomposition of graphite materials has been studied for the accurate determination of boron using a curcumin photometric method. A graphite sample of 0.5 g was completely decomposed with a mixture of 5 ml of sulfuric acid, 3 ml of perchloric acid, 0.5 ml of nitric acid and 5 ml of phosphoric acid in a 100 ml silica Erlenmeyer flask fitted with an air condenser at 200°C. Any excess of perchloric and nitric acids in the solution was removed by heating on a hot plate at 150°C. Boron was distilled with methanol, and then recovered in 10 ml of 0.2 M sodium hydroxide. The solution was evaporated to dryness. To the residue were added curcumin-acetic acid and sulfuric-acetic acid. The mixture was diluted with ethanol, and the absorbance at 555 nm was measured. The addition of 5 ml of phosphoric acid proved effective in preventing any volatilization loss of boron during decomposition of the graphite sample and evaporation of the resulting solution. The relative standard deviation was 4-8% for samples with boron levels of 2 μg g⁻¹. The results on CRMs JAERI-G5 and G6 were in good agreement with the certified values. (author)
Thermal decomposition of synthetic antlerite prepared by microwave-assisted hydrothermal method
Energy Technology Data Exchange (ETDEWEB)
Koga, Nobuyoshi [Chemistry Laboratory, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima 739-8524 (Japan)], E-mail: nkoga@hiroshima-u.ac.jp; Mako, Akira; Kimizu, Takaaki; Tanaka, Yuu [Chemistry Laboratory, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima 739-8524 (Japan)
2008-01-30
Copper(II) hydroxide sulfate was synthesized by a microwave-assisted hydrothermal method from a mixed solution of CuSO₄ and urea. Needle-like crystals of ca. 20-30 μm in length, precipitated by microwave irradiation for 1 min, were characterized as Cu₃(OH)₄SO₄, corresponding to the mineral antlerite. The reaction pathway and kinetics of the thermal decomposition of the synthetic antlerite Cu₃(OH)₄SO₄ were investigated by means of thermoanalytical techniques complemented by powder X-ray diffractometry and microscopic observations. The thermal decomposition of Cu₃(OH)₄SO₄ proceeded via two separate reaction steps of dehydroxylation and desulfation to produce CuO, where crystalline phases of Cu₂OSO₄ and CuO appeared as the intermediate products. The kinetic characteristics of the respective steps were discussed in comparison with those of the synthetic brochantite Cu₄(OH)₆SO₄ reported previously.
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22 nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Finite-Difference Frequency-Domain Method in Nanophotonics
DEFF Research Database (Denmark)
Ivinskaya, Aliaksandra
Optics and photonics are exciting, rapidly developing fields building their success largely on the use of more and more elaborate artificially made, nanostructured materials. To further advance our understanding of light-matter interactions in these complicated artificial media, numerical modeling...... is often indispensable. This thesis presents the development of a rigorous finite-difference method, a very general tool to solve Maxwell’s equations in arbitrary geometries in three dimensions, with an emphasis on the frequency-domain formulation. Enhanced performance of the perfectly matched layers...... is obtained through a free-space squeezing technique, and nonuniform orthogonal grids are built to greatly improve the accuracy of simulations of highly heterogeneous nanostructures. Examples of the use of the finite-difference frequency-domain method in this thesis range from simulating localized modes...
Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi
2008-07-01
We performed spectral analysis using the maximum entropy method instead of the traditional Fourier transform technique to investigate short-time behavior in molecular systems, such as energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations of the decomposition of formic acid. More reactive trajectories for dehydration than for decarboxylation were obtained for Z-formic acid, which is consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.
Tourism forecasting using modified empirical mode decomposition and group method of data handling
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasts for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.
Application of the spectral Lanczos decomposition method to large scale problems arising in geophysics
Energy Technology Data Exchange (ETDEWEB)
Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to the numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
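The core SLDM idea — approximating f(A)b from a Lanczos tridiagonalization of A — can be sketched for a symmetric model operator; the 1D Laplacian and the parameters below are illustrative stand-ins, not the operators of the paper:

```python
import numpy as np
from scipy.linalg import expm

def lanczos(A, b, m):
    """m-step Lanczos with full reorthogonalization (A symmetric)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

def sldm_apply(f, A, b, m):
    """SLDM-style approximation: f(A) b ~ ||b|| * V f(T) e1."""
    V, T = lanczos(A, b, m)
    return np.linalg.norm(b) * (V @ f(T)[:, 0])

# model 'stiffness' matrix: 1D Laplacian; approximate the action of exp(-A) on b
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.sin(np.linspace(0.1, 3.0, n))
approx = sldm_apply(expm, -A, b, m=30)
exact = expm(-A) @ b
print(np.linalg.norm(approx - exact))  # tiny Krylov approximation error
```

The expensive matrix function is only ever evaluated on the small tridiagonal T, which is what makes the approach attractive for large-scale diffusion and wave problems.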
Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.
1997-02-01
We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring at KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to projections from a Monte Carlo simulation and from experiment to confirm its effectiveness.
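The least-squares/SVD solution step can be sketched generically; the matrix below is a random stand-in for the discretized FXCT measurement operator, and the truncation threshold is illustrative:

```python
import numpy as np

def tsvd_solve(A, b, rcond=1e-3):
    """Least-squares solution via truncated SVD: discard singular values
    below rcond * s_max to stabilize an ill-conditioned system."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# toy "measurement" system standing in for the discretized FXCT equations
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
A[:, -1] = A[:, 0] + 1e-9 * rng.standard_normal(60)  # nearly dependent column
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-6 * rng.standard_normal(60)      # small measurement noise
x = tsvd_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual stays at the noise level
```

Dropping the near-null singular direction prevents the small measurement noise from being amplified by the ill-conditioned column while leaving the data fit essentially unchanged.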
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axes of elliptical domains changes with time; these subsequently develop homothetically and the rate ratio is then only pressure dependent. Temperature influence is inferred from that of pressure. Polygonal shapes are time dependent and result in ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement and the elliptical shape by that of the solid tensorial properties. The influence of those factors might be modulated versus pressure, temperature, interface extent, and, thus, time.
A decomposition method for network-constrained unit commitment with AC power flow constraints
International Nuclear Information System (INIS)
Bai, Yang; Zhong, Haiwang; Xia, Qing; Kang, Chongqing; Xie, Le
2015-01-01
To meet the increasingly high requirements of smart grid operations, considering AC power flow constraints in NCUC (network-constrained unit commitment) is of great significance in terms of both security and economy. This paper proposes a decomposition method to solve NCUC with AC power flow constraints. With conic approximations of the AC power flow equations, the master problem is formulated as a MISOCP (mixed integer second-order cone programming) model. The key advantage of this model is that active power and reactive power are co-optimised and transmission losses are considered. With the AC optimal power flow model, the AC feasibility of the UC result of the master problem is checked in subproblems. If infeasibility is detected, feedback constraints are generated based on the sensitivity of bus voltages to a change in the unit reactive power generation. They are then introduced into the master problem in the next iteration until all AC violations are eliminated. A 6-bus system, a modified IEEE 30-bus system and the IEEE 118-bus system are used to validate the performance of the proposed method, which provides a satisfactory solution with approximately 44-fold greater computational efficiency. - Highlights: • A decomposition method is proposed to solve the NCUC with AC power flow constraints. • The master problem considers active power, reactive power and transmission losses. • OPF-based subproblems check the AC feasibility using parallel computing techniques. • An effective feedback constraint interacts between the master problem and subproblem. • Computational efficiency is significantly improved with satisfactory accuracy.
International Nuclear Information System (INIS)
Pilipchuk, L. A.; Pilipchuk, A. S.
2015-01-01
In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and study its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure
Directory of Open Access Journals (Sweden)
H.-J. Chen
2012-07-01
The effect of tidal triggering on earthquake occurrence has been controversial for many years. This study considered earthquakes that occurred near Taiwan between 1973 and 2008. Because earthquake data are nonlinear and non-stationary, we applied the empirical mode decomposition (EMD) method to analyze the temporal variations in the number of daily earthquakes to investigate the effect of tidal triggering. We compared the results obtained from the non-declustered catalog with those from two kinds of declustered catalogs and discuss the aftershock effect on the EMD-based analysis. We also investigated stacking the data based on in-phase phenomena of theoretical Earth tides with statistical significance tests. Our results show that the effects of tidal triggering, particularly the lunar tidal effect, can be extracted from the raw seismicity data using the approach proposed here. Our results suggest that the lunar tidal force is likely a factor in the triggering of earthquakes.
Privacy Data Decomposition and Discretization Method for SaaS Services
Directory of Open Access Journals (Sweden)
Changbo Ke
2017-01-01
In cloud computing, user functional requirements are satisfied through service composition. However, due to the process of interaction and sharing among SaaS services, user privacy data tend to be illegally disclosed to the service participants. In this paper, we propose a privacy data decomposition and discretization method for SaaS services. First, according to the logic between the data, we classify the privacy data into discrete privacy data and continuous privacy data. Next, in order to protect the user privacy information, continuous data chains are decomposed into discrete data chains, and discrete data chains are prevented from being synthesized into continuous data chains. Finally, we propose a protection framework for privacy data and demonstrate its correctness and feasibility with experiments.
Srivastava, Madhur; Freed, Jack H
2017-11-16
Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about two orders of magnitude of improvement in SNR obviates the need for regularization, which achieves a compromise between canceling the effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
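One way such a truncation criterion can be automated is sketched below with a discrepancy-principle stand-in: grow the SVD rank until the residual falls to the estimated noise floor. The smooth kernel, grids, and noise level are synthetic assumptions, not the paper's actual criteria or data:

```python
import numpy as np

def svd_solve_auto(A, b, noise_level):
    """Pick the smallest truncation rank whose residual falls below the
    (estimated) noise floor -- a discrepancy-style selection rule."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ b
    x = np.zeros(A.shape[1])
    for k in range(1, len(s) + 1):
        x = Vt[:k].T @ (c[:k] / s[:k])
        if np.linalg.norm(A @ x - b) <= noise_level:
            return x, k
    return x, len(s)

# smooth kernel mapping a distance distribution to a noisy "signal"
r = np.linspace(0, 5, 40)
tgrid = np.linspace(0, 5, 40)
A = np.exp(-((tgrid[:, None] - r[None, :]) / 0.5) ** 2)
p_true = np.exp(-((r - 2.0) / 0.3) ** 2)
rng = np.random.default_rng(2)
sigma = 1e-4
b = A @ p_true + sigma * rng.standard_normal(len(tgrid))
noise_level = sigma * np.sqrt(len(tgrid))
x, k = svd_solve_auto(A, b, noise_level)
print(k)  # far fewer than 40 components are needed
```

Because the smooth kernel's singular values decay rapidly, only the first handful of components carry signal; stopping at the noise floor avoids amplifying the remaining noise-dominated components.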
Energy Technology Data Exchange (ETDEWEB)
Pilipchuk, L. A., E-mail: pilipchik@bsu.by [Belarussian State University, 220030 Minsk, 4, Nezavisimosti avenue, Republic of Belarus (Belarus); Pilipchuk, A. S., E-mail: an.pilipchuk@gmail.com [The Natural Resources and Environmental Protestion Ministry of the Republic of Belarus, 220004 Minsk, 10 Kollektornaya Street, Republic of Belarus (Belarus)
2015-11-30
In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wol-fram Mathematica for the constructing the solutions of the sparse linear systems. One of the applications is the Sensor Location Problem for the symmetric graph in the case when split ratios of some arc flows can be zeros. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and research its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.
International Nuclear Information System (INIS)
Tsuji, Masashi; Shimazu, Yoichiro; Michishita, Hiroshi
2005-01-01
A new method for evaluating the decay ratios in a boiling water reactor (BWR) using the singular value decomposition (SVD) method had been proposed. In this method, a signal component closely related to BWR stability can be extracted from the independent components of the neutron noise signal decomposed by the SVD method. However, real-time stability monitoring by the SVD method requires an efficient procedure for screening such components. For efficient screening, an artificial neural network (ANN) with three layers was adopted. The trained ANN was applied to decomposed components of local power range monitor (LPRM) signals that were measured in stability experiments conducted in the Ringhals-1 BWR. In each LPRM signal, multiple candidates were screened from the decomposed components. However, decay ratios could be estimated by introducing appropriate criteria for selecting the most suitable component among the candidates. The estimated decay ratios are almost identical to those evaluated by visual screening in a previous study. The selected components commonly have the largest singular value, the largest decay ratio and the least squared fitting error among the candidates. By virtue of the excellent screening performance of the trained ANN, real-time stability monitoring by the SVD method can be applied in practice. (author)
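For intuition, the decay ratio of an oscillatory component can be estimated from the ratio of consecutive oscillation maxima. The damped cosine below is a synthetic stand-in for a decomposed LPRM component, not reactor data:

```python
import numpy as np
from scipy.signal import find_peaks

def decay_ratio(x):
    """Decay ratio from the ratio of two consecutive oscillation maxima."""
    peaks, _ = find_peaks(x)
    return x[peaks[1]] / x[peaks[0]]

# damped oscillation standing in for a decomposed neutron-noise component
dt = 1e-3
t = np.arange(0, 5, dt)
sigma, omega = 0.1, 2 * np.pi          # damping rate and oscillation frequency
x = np.exp(-sigma * t) * np.cos(omega * t)
dr = decay_ratio(x)
print(dr)   # ~ exp(-sigma * 2*pi/omega) = exp(-0.1) ~ 0.905
```

Consecutive maxima of a damped cosine are separated by exactly one period, so the peak ratio equals exp(-sigma * T); a decay ratio near 1 would indicate marginal stability.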
International Nuclear Information System (INIS)
Lenoir, A.
2008-01-01
We focus in this thesis on the optimization of large systems under uncertainty, and more specifically on solving the class of so-called deterministic equivalents with the help of splitting methods. The underlying application we have in mind is the electricity unit commitment problem under climate, market and energy consumption randomness arising at EDF. We set out the natural time-space-randomness couplings related to this application and we propose two new discretization schemes to handle the randomness, each of them based on non-parametric estimation of conditional expectations. This constitutes an alternative to the usual scenario tree construction. We use a mathematical model consisting of the sum of two convex functions, a separable one and a coupling one. On the one hand, this simplified model offers a general framework for studying decomposition-coordination algorithms without the technicalities tied to a particular choice of subsystems. On the other hand, the convexity assumption makes it possible to take advantage of monotone operator theory and to identify proximal methods as fixed-point algorithms. We highlight the differential properties of the generalized reflections whose fixed point we seek in order to derive bounds on the speed of convergence. We then examine two families of decomposition-coordination algorithms resulting from operator splitting methods, namely the Forward-Backward and Rachford methods. We suggest some practical ways of accelerating the Rachford-class methods. To this end, we analyze the method from a theoretical point of view, furnishing as a byproduct explanations of some numerical observations, and we then propose some improvements in response. Among them, an automatic updating strategy for scaling factors can correct a potentially bad initial choice. The convergence proof is made easier thanks to stability results, provided beforehand, on the composition of operators with respect to graphical convergence. We also submit the idea of introducing
Directory of Open Access Journals (Sweden)
Dumitru Baleanu
2014-01-01
We perform a comparison between the fractional iteration and decomposition methods applied to the wave equation on a Cantor set. The operators are taken in the local sense. The results illustrate the significant features of the two methods, both of which are very effective and straightforward for solving differential equations with local fractional derivatives.
DEFF Research Database (Denmark)
Ganji, D.D; Miansari, Mo; B, Ganjavi
2008-01-01
In this paper, the homotopy-perturbation method (HPM) is introduced to solve nonlinear equations of ozone decomposition in aqueous solutions. HPM deforms a difficult problem into a simple problem which can be easily solved. The effects of some parameters, such as temperature, on the solutions are considered...
Al-garadi, Mohammed Ali; Varathan, Kasturi Dewi; Ravana, Sri Devi
2017-02-01
Online social networks (OSNs) have become a vital part of everyday living. OSNs provide researchers and scientists with unique prospects to comprehend individuals on a large scale and to analyze human behavioral patterns. The identification of influential spreaders is an important subject in understanding the dynamics of information diffusion in OSNs. Targeting these influential spreaders is significant for planning techniques to accelerate the propagation of useful information, for applications such as viral marketing, and to block the diffusion of unwanted information (spreading of viruses, rumors, online negative behaviors, and cyberbullying). Existing K-core decomposition methods treat links equally when calculating influential spreaders in unweighted networks; alternatively, previously proposed link weights are based only on node degree. Thus, if a node is linked to high-degree nodes, it receives a high weight and is treated as an important node. However, node degree in the OSN context does not always provide an accurate measure of user influence. In the present study, we improve the K-core method for OSNs by proposing a novel link-weighting method based on the interaction among users. The proposed method is based on the observation that user interaction is a significant factor in quantifying the spreading capability of a user in OSNs. The tracking of diffusion links in the real spreading dynamics of information verifies the effectiveness of our proposed method for identifying influential spreaders in OSNs as compared with degree centrality, PageRank, and the original K-core.
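A minimal sketch of an interaction-weighted core decomposition (an s-core style algorithm; the exact link-weighting scheme of the paper is not reproduced here, and edge weights below are illustrative interaction counts):

```python
def weighted_core_numbers(edges):
    """s-core decomposition of a weighted, undirected graph: repeatedly
    strip the nodes whose total interaction weight (strength) does not
    exceed the current minimum strength s."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    shell = {}
    while adj:
        s = min(sum(nbrs.values()) for nbrs in adj.values())
        changed = True
        while changed:
            changed = False
            # removing a node lowers its neighbours' strength, so iterate
            weak = [u for u, nbrs in adj.items() if sum(nbrs.values()) <= s]
            for u in weak:
                shell[u] = s
                for v in adj.pop(u):
                    adj[v].pop(u, None)
                changed = True
    return shell

# triangle of strong mutual interactions plus one weakly attached user 'p'
edges = [('a', 'b', 2), ('b', 'c', 2), ('c', 'a', 2), ('a', 'p', 1)]
cores = weighted_core_numbers(edges)
print(cores)  # {'p': 1, 'a': 4, 'b': 4, 'c': 4}
```

The weakly interacting user is peeled off first, while the strongly interacting triangle survives to a higher shell — the weighted analogue of K-core influence ranking.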
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
International Nuclear Information System (INIS)
Li, Chengwei; Zhan, Liwei
2015-01-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
Energy Technology Data Exchange (ETDEWEB)
1933-09-09
A method of pyrolytic decomposition and coking of a mixture of finely divided solid or semi-solid carbonaceous material and hydrocarbon oils is disclosed, whereby the mixture is exposed to a decomposition temperature and later brought into the decomposition zone, where vapors are separated from the unvaporized residue and exposed to fractional condensation for the purpose of obtaining a light distillation product. The method is characterized by the mixture being heated by indirect heat exchange in a heating zone, by direct addition of a hot heat-conducting medium, or by both such indirect and direct heating, under such conditions that the unvaporized residue obtained from the thus-heated mixture in the decomposition zone is transformed into solid coke in this zone by being heated to coking temperature in a comparatively thin layer on the surface of the decomposition zone, which has been heated to a high temperature.
Muramatsu, K.; Furumi, S.; Hayashi, A.; Shiono, Y.; Ono, A.; Fujiwara, N.; Daigo, M.; Ochiai, F.
We have developed the ``pattern decomposition method'' based on linear spectral mixing of ground objects for n-dimensional satellite data. In this method, the spectral response pattern for each pixel in an image is decomposed into three components using three standard spectral shape patterns determined from the image data. Applying this method to AMSS (Airborne Multi-Spectral Scanner) data, eighteen-dimensional data are successfully transformed into three-dimensional data. Using the three components, we have developed a new vegetation index in which all the multispectral data are reflected. We consider that the index should be linear in the amount of vegetation and vegetation vigor. To validate the index, its relation to vegetation types, vegetation cover ratio, and the chlorophyll content of a leaf was studied using spectral reflectance data measured in the field with a spectrometer. The index was sensitive to vegetation types and vegetation vigor. This method and index are very useful for assessing vegetation vigor, classifying land cover types and monitoring vegetation changes.
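The linear spectral mixing step can be sketched as a nonnegative least-squares fit of each pixel spectrum against the standard patterns; the three "standard spectral shape patterns" below are synthetic Gaussian curves, not real AMSS spectra:

```python
import numpy as np
from scipy.optimize import nnls

# three "standard spectral shape patterns" as columns of P;
# synthetic smooth curves standing in for patterns derived from image data
bands = np.linspace(0.0, 1.0, 18)                    # 18 spectral bands
P = np.column_stack([np.exp(-((bands - c) / 0.25) ** 2)
                     for c in (0.2, 0.5, 0.8)])

true_coeffs = np.array([0.1, 0.7, 0.2])
pixel = P @ true_coeffs                              # observed pixel spectrum

coeffs, rnorm = nnls(P, pixel)                       # nonnegative decomposition
print(coeffs)    # ~ [0.1, 0.7, 0.2]
```

Each 18-band pixel is thus reduced to three nonnegative mixing coefficients, from which a vegetation index can be formed as a weighted combination of the components.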
Lesion insertion in the projection domain: Methods and initial results
International Nuclear Information System (INIS)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-01-01
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
Lesion insertion in the projection domain: Methods and initial results
Energy Technology Data Exchange (ETDEWEB)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-12-15
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
Computational electrodynamics the finite-difference time-domain method
Taflove, Allen
2005-01-01
This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
Lesion insertion in the projection domain: Methods and initial results.
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-12-01
phantom in terms of Hounsfield unit and high-contrast resolution. For the validation of the lesion realism, lesions of various types were successfully inserted, including well circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can be potentially expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors' method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors' method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions.
Directory of Open Access Journals (Sweden)
Zhiwen Lu
2016-01-01
Full Text Available Multicrack localization in operating rotor systems is still a challenge today. Focusing on this challenge, a new approach based on proper orthogonal decomposition (POD) is proposed for multicrack localization in rotors. A two-disc rotor-bearing system with breathing cracks is established by the finite element method, and simulated sensors are distributed along the rotor to obtain the steady-state transverse responses required by POD. Based on the discontinuities introduced in the proper orthogonal modes (POMs) at the locations of cracks, the characteristic POM (CPOM), which is sensitive to crack locations and robust to noise, is selected for crack localization. Because the CPOM alone has difficulty localizing incipient cracks, damage indexes using fractal dimension (FD) and the gapped smoothing method (GSM) are adopted in order to extract the locations more efficiently. The proposed method is validated as effective for multicrack localization in rotors by numerical experiments on rotors with different crack configurations, considering the effects of noise. In addition, the feasibility of using fewer sensors is also investigated.
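The localization idea described above can be sketched in a few lines: the proper orthogonal modes are the right singular vectors of a snapshot matrix of sensor responses, and a crack shows up as a local discontinuity in the dominant mode. The rotor model, sensor count, and crack position below are illustrative assumptions, and a plain second difference stands in for the paper's FD/GSM damage indexes:

```python
import numpy as np

# Illustrative POD-based crack localization on synthetic sensor data
# (mode shape and crack position are assumptions, not the paper's model).
ns, nt = 41, 400
x = np.linspace(0.0, 1.0, ns)                   # normalized rotor axis
crack_at = 12                                   # hypothetical crack sensor index
shape = np.sin(np.pi * x)
shape[crack_at:] += 1.0 * (x[crack_at:] - x[crack_at])  # slope discontinuity at the crack
t = np.linspace(0.0, 1.0, nt)
X = np.outer(np.sin(2 * np.pi * 5 * t), shape)  # snapshot matrix (time x sensors)
X += 1e-6 * np.random.default_rng(1).standard_normal(X.shape)

# POD: the proper orthogonal modes are the right singular vectors of the snapshots.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
cpom = Vt[0]                                    # characteristic POM (dominant energy)

# Damage index: a discontinuity shows up as a spike in the second difference.
d2 = np.abs(cpom[2:] - 2.0 * cpom[1:-1] + cpom[:-2])
print(int(np.argmax(d2)) + 1)                   # -> 12, the crack sensor index
```

A GSM-style index would instead fit a local smooth (gapped) polynomial and use the squared residual, which is more robust for shallow cracks; the second difference keeps the sketch short.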
Decomposition of Taiwan local black monazite by hydrothermal and soda fusion methods
International Nuclear Information System (INIS)
Miao, Y.W.; Horng, J.S.
1988-01-01
Along the south-west coast of Taiwan lie about 550,000 metric tons of heavy sand deposits containing about 10% black monazite. The institute has developed a separation process to recover the individual rare earths, and the process has been commercialized by a local private company. The decomposition of the local black monazite by sodium hydroxide through hydrothermal and fusion methods has been investigated. In the hydrothermal process, a 45 wt.% aqueous alkali solution was used in an autoclave. In the fusion process, caustic soda (98% NaOH) was employed in an open cylindrical reactor. The same product of hydrous rare earth oxides was obtained in both cases and then dissolved in hydrochloric acid, and the pH was adjusted in order to separate the thorium from the rare earths. After filtration, the filtrate contained rare earth chloride and the cake contained mainly silica and thorium hydroxide. Both methods give a yield of 90% with respect to rare earth recovery. A detailed description of the operation and a comparison of the two methods are given
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution of bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of the boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical imaging modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
Perfectly matched layer for the time domain finite element method
International Nuclear Information System (INIS)
Rylander, Thomas; Jin Jianming
2004-01-01
A new perfectly matched layer (PML) formulation for the time domain finite element method is described and tested for Maxwell's equations. In particular, we focus on the time integration scheme which is based on Galerkin's method with a temporally piecewise linear expansion of the electric field. The time stepping scheme is constructed by forming a linear combination of exact and trapezoidal integration applied to the temporal weak form, which reduces to the well-known Newmark scheme in the case without PML. Extensive numerical tests on scattering from infinitely long metal cylinders in two dimensions show good accuracy and no signs of instabilities. For a circular cylinder, the proposed scheme indicates the expected second order convergence toward the analytic solution and gives less than 2% root-mean-square error in the bistatic radar cross section (RCS) for resolutions with more than 10 points per wavelength. An ogival cylinder, which has sharp corners supporting field singularities, shows similar accuracy in the monostatic RCS
Effect of Preparation Method on Catalytic Properties of Co-Mn-Al Mixed Oxides for N2O Decomposition.
Czech Academy of Sciences Publication Activity Database
Klyushina, A.; Pacultová, K.; Karásková, K.; Jirátová, Květa; Ritz, M.; Fridrichová, D.; Volodorskaja, A.; Obalová, L.
2016-01-01
Vol. 425, Dec 15 (2016), pp. 237-247, ISSN 1381-1169 R&D Projects: GA ČR GA14-13750S Institutional support: RVO:67985858 Keywords: Co-Mn-Al mixed oxide * N2O decomposition * preparation methods Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 4.211, year: 2016
The application of low-rank and sparse decomposition method in the field of climatology
Gupta, Nitika; Bhaskaran, Prasad K.
2018-04-01
The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique has been limited to areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, this method exactly separates the original data into a set of low-rank and sparse components, wherein the low-rank component depicts the linearly correlated dataset (expected or mean behavior), and the sparse component represents the variation or perturbation in the dataset from its mean behavior. The study attempts to verify the efficacy of this proposed technique in the field of climatology with two real-world examples. The first example applies this technique to the maximum wind-speed (MWS) data for the Indian Ocean (IO) region. The study brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second example deals with the sea surface temperature (SST) data for the Bay of Bengal region, which exhibits a distinct pattern in the sparse component. The study highlights the importance of the proposed technique for interpretation and visualization of climate data.
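A hedged sketch of such a low-rank plus sparse split is given below, using simple alternating soft-thresholding as a stand-in for the authors' exact solver; the "climate field" is a synthetic rank-1 mean state plus two localized anomalies, and the parameter choices follow common principal-component-pursuit heuristics:

```python
import numpy as np

def shrink(A, tau):
    """Entrywise soft-thresholding (the sparse step)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svd_shrink(A, tau):
    """Singular-value soft-thresholding (the low-rank step)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_sparse(M, lam=None, mu=None, iters=100):
    """Alternating prox steps on the relaxed M ~ L + S objective (a sketch,
    not the exact algorithm of the paper)."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M.shape))
    mu = mu if mu is not None else M.size / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svd_shrink(M - S, 1.0 / mu)
        S = shrink(M - L, lam / mu)
    return L, S

# "Climate field": a smooth rank-1 mean state plus two localized anomalies.
M = np.outer(np.ones(30), np.linspace(1.0, 2.0, 30))
M[4, 7] += 10.0                          # hypothetical extreme events
M[17, 22] -= 8.0
L, S = low_rank_sparse(M)
print(np.unravel_index(np.argmax(np.abs(S)), S.shape))   # -> (4, 7)
```

The sparse component S isolates the anomalies while L recovers the smooth mean field, mirroring the mean/variability separation described in the abstract.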
Directory of Open Access Journals (Sweden)
Xuejun Chen
2014-01-01
Full Text Available As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind speed fluctuates strongly, it is quite difficult to describe its characteristics or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
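The 53H smoother named in the first stage is, on a common Tukey-style reading, a five-point running median, a three-point running median, and then a Hanning (1/4, 1/2, 1/4) weighted average; the end-point rule below (copying endpoints) is an assumption, not taken from the paper:

```python
import numpy as np

def smooth_53h(x):
    """Five-point median, three-point median, then Hanning (1/4, 1/2, 1/4)
    weights. Endpoints are simply copied (an assumed end rule)."""
    x = np.asarray(x, dtype=float)
    m5 = x.copy()
    for i in range(2, len(x) - 2):          # 5-point running median
        m5[i] = np.median(x[i - 2:i + 3])
    m3 = m5.copy()
    for i in range(1, len(x) - 1):          # 3-point running median
        m3[i] = np.median(m5[i - 1:i + 2])
    h = m3.copy()
    for i in range(1, len(x) - 1):          # Hanning weighted average
        h[i] = 0.25 * m3[i - 1] + 0.5 * m3[i] + 0.25 * m3[i + 1]
    return h

# A lone spike in an otherwise steady wind-speed record is removed outright.
wind = np.array([5.0, 5.2, 5.1, 12.0, 5.3, 5.2, 5.4, 5.3, 5.2, 5.1])
print(smooth_53h(wind))
```

The median passes reject the outlier that a plain moving average would only spread out, which is why 53H is a useful pre-filter ahead of EEMD and the NAR networks.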
International Nuclear Information System (INIS)
Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.
1997-01-01
We have earlier shown that the method of singular value decomposition (SVD) allows image reconstruction in single-photon tomography with higher precision than the classical filtered back-projection method. Indeed, establishing an elementary response matrix which incorporates the photon attenuation phenomenon, the scattering, the translation non-invariance principle and the detector response allows the totality of the physical acquisition parameters to be taken into account. By a non-consecutive optimized truncation of the singular values we have obtained a significant improvement in the regularization of the bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition: the influence of the rotation center, the asymmetric disposition of the elementary-volume sources with respect to the detector, and the precision of the rotation angle; and secondly, those affecting the correspondence between the matrix and the space to be reconstructed: the partial volume effect and noise propagation in the experimental model. For the parameters which introduce a spatial distortion, the alteration of the reconstruction was, as expected, comparable to that observed with the classical reconstruction and proportional to the amplitude of the shift from the nominal value. In contrast, for the partial volume and noise effects, the study of the truncation signature revealed a variation in the optimal choice of the conserved singular values but no effect on the global precision of reconstruction
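Truncated-SVD regularization of an ill-conditioned reconstruction can be sketched as follows; the toy response matrix and source are synthetic stand-ins for the elementary response matrix and activity map, and a simple relative cutoff replaces the paper's optimized non-consecutive selection of singular values:

```python
import numpy as np

# Toy sketch of truncated-SVD reconstruction (matrix and source are synthetic).
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))
U0, _, Vt0 = np.linalg.svd(A, full_matrices=False)
s0 = 10.0 * 0.7 ** np.arange(40)               # ill-conditioned spectrum
R = (U0 * s0) @ Vt0                            # toy "response matrix"

f_true = Vt0[:10].T @ rng.standard_normal(10)  # source in the stable subspace
p = R @ f_true + 1e-3 * rng.standard_normal(80)  # noisy projections

U, s, Vt = np.linalg.svd(R, full_matrices=False)
naive = Vt.T @ ((U.T @ p) / s)                 # full pseudo-inverse: noise blows up
keep = s > 1e-2 * s[0]                         # simple relative cutoff here; the
                                               # paper selects a non-consecutive subset
f_reg = Vt[keep].T @ ((U[:, keep].T @ p) / s[keep])

err = lambda f: np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
print(err(naive), err(f_reg))                  # regularized error is far smaller
```

Discarding the small singular values prevents the noise amplification by 1/s that dominates the naive inverse, at the price of losing whatever signal lives in the discarded directions.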
Adomian decomposition method for solving the telegraph equation in charged particle transport
International Nuclear Information System (INIS)
Abdou, M.A.
2005-01-01
In this paper, the analysis of the telegraph equation for charged particle transport, in the case of isotropic small-angle scattering from the Boltzmann transport equation, is presented. The Adomian decomposition method is used to solve the telegraph equation. By means of MAPLE, the Adomian polynomials of the obtained series (ADM) solution have been calculated. The behaviour of the distribution function is shown graphically. The results reported in this article provide further evidence of the usefulness of Adomian decomposition for obtaining solutions of linear and nonlinear problems
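The flavor of the decomposition can be shown on a toy problem, u'(t) = -u with u(0) = 1, where each series term is the negated integral of the previous one; this is a stand-in for the telegraph equation, not the paper's calculation:

```python
from math import exp

# Adomian-style series for the toy problem u'(t) = -u, u(0) = 1:
#   u_0 = 1,  u_{n+1}(t) = -Integral_0^t u_n(s) ds,  so u_n(t) = (-t)^n / n!.
def adm_terms(n_terms):
    # represent each term u_n by its monomial coefficient c_n (u_n = c_n * t^n)
    coeffs = [1.0]
    for n in range(1, n_terms):
        coeffs.append(-coeffs[-1] / n)      # integrate c*t^(n-1) -> (c/n)*t^n, negate
    return coeffs

def adm_solution(t, n_terms=15):
    return sum(c * t**k for k, c in enumerate(adm_terms(n_terms)))

print(abs(adm_solution(1.0) - exp(-1.0)))   # ~1e-12 with 15 terms
```

For nonlinear problems the integrand is replaced by the Adomian polynomials of the nonlinearity, but the term-by-term recursion has the same shape.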
Directory of Open Access Journals (Sweden)
Norhasimah Mahiddin
2014-01-01
Full Text Available The modified decomposition method (MDM) and homotopy perturbation method (HPM) are applied to obtain the approximate solution of the nonlinear model of tumour invasion and metastasis. The study highlights the significant features of the employed methods and their ability to handle nonlinear partial differential equations. The methods do not need linearization or weak nonlinearity assumptions. Although the main difference between MDM and the Adomian decomposition method (ADM) is a slight variation in the definition of the initial condition, the modification eliminates massive computational work. The approximate analytical solution obtained by MDM logically contains the solution obtained by HPM. This shows that HPM does not involve the Adomian polynomials when dealing with nonlinear problems.
International Nuclear Information System (INIS)
Noh, J. M.; Yoo, J. W.; Joo, H. K.
2004-01-01
In this study, we invented a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method and developed an object oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions to reduce the large coupled linear system into several small single equations. This method requires no additional technique to accelerate the iteration process used to solve the inter-nodal coupled equations, since the derived equations can automatically act as the coarse-mesh rebalance equations. By utilizing object oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, dynamic memory allocation, and operator overloading, we developed an object oriented nodal code that facilitates input/output and dynamic control of memory, and makes maintenance easy. (authors)
Energy Technology Data Exchange (ETDEWEB)
Donald Estep; Michael Holst; Simon Tavener
2010-02-08
This project was concerned with accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the project, including: (1) accurate and efficient computation; (2) complex stability; and (3) linking different physics. The research in this project focused on Multiscale Operator Decomposition (MOD) methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.
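The iterative component-solve idea can be sketched as block Gauss-Seidel on a toy two-physics linear system: each sweep solves one physics with the other held fixed. The matrices and coupling strengths below are illustrative assumptions, not from the report:

```python
import numpy as np

# Operator decomposition as block Gauss-Seidel on a toy coupled system
# (matrices and coupling blocks are illustrative).
A1 = np.array([[4.0, 1.0], [1.0, 3.0]])   # "physics 1" operator
A2 = np.array([[5.0, 2.0], [2.0, 6.0]])   # "physics 2" operator
C12 = 0.3 * np.ones((2, 2))               # weak coupling blocks
C21 = 0.2 * np.ones((2, 2))
b1 = np.array([1.0, 2.0])
b2 = np.array([0.5, 1.5])

u1 = np.zeros(2)
u2 = np.zeros(2)
for _ in range(50):                        # alternate single-physics solves
    u1 = np.linalg.solve(A1, b1 - C12 @ u2)
    u2 = np.linalg.solve(A2, b2 - C21 @ u1)

# Compare with the monolithic solve of the full coupled system.
full = np.block([[A1, C12], [C21, A2]])
ref = np.linalg.solve(full, np.concatenate([b1, b2]))
print(np.max(np.abs(np.concatenate([u1, u2]) - ref)))  # agrees with monolithic
```

With weak coupling the sweeps contract quickly; the project's error-estimation question is precisely how the splitting error of such iterations propagates between components.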
Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.
Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis
2017-07-01
T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. A nonparametric hypothesis test, based on Bootstrap resampling, is also used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
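The spectral method that the EMD block augments can be sketched on a synthetic beat series: the power spectrum of the beat-to-beat T-wave amplitude is examined at 0.5 cycles/beat against a spectral noise window. All amplitudes below are illustrative, and the EMD denoising itself is omitted:

```python
import numpy as np

# Spectral-method (SM) sketch for TWA on a synthetic beat series; the T-wave
# amplitude alternates every other beat (amplitudes are illustrative).
rng = np.random.default_rng(3)
n_beats = 128
t_amp = 0.30 + 0.01 * (-1.0) ** np.arange(n_beats)  # alternans, arbitrary units
t_amp += 0.003 * rng.standard_normal(n_beats)       # beat-to-beat noise

spec = np.abs(np.fft.rfft(t_amp - t_amp.mean())) ** 2 / n_beats
freqs = np.fft.rfftfreq(n_beats)                    # in cycles per beat
alt_power = spec[-1]                                # the bin at 0.5 cycles/beat
noise = spec[(freqs > 0.33) & (freqs < 0.48)]       # spectral noise window
k_score = (alt_power - noise.mean()) / noise.std()  # alternans K score
print(k_score > 3.0)                                # True: alternans detected
```

In the paper the beat series is extracted from real ECG after the EMD-based denoising stage, which is what preserves this K score under low SNR.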
Zhang, Yong; Otani, Akihito; Maginn, Edward J
2015-08-11
Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
A simple method for decomposition of peracetic acid in a microalgal cultivation system.
Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won
2015-03-01
A cost-efficient process that avoids multiple washing steps was developed, in which cultivation proceeds directly after decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization with 2 mM PAA demands at least 1 h of incubation time for effective disinfection. Direct degradation of PAA was demonstrated by utilizing components of a conventional algal medium. Ferric ion and a pH buffer (HEPES) showed a synergetic effect on the decomposition of PAA within 6 h. In contrast, NaNO3, one of the main components of algal media, inhibits the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 prepared by decomposition of PAA. This process involving sterilization and decomposition of PAA should enable cost-efficient management of photobioreactors at large scale for the production of value-added products and biofuels from microalgal biomass.
Directory of Open Access Journals (Sweden)
Orlando Soriano-Vargas
2016-12-01
Full Text Available Spinodal decomposition during aging of Fe-Cr alloys was studied by means of the numerical solution of the linear and nonlinear Cahn-Hilliard partial differential equations using the explicit finite difference method. The results of the numerical simulation made it possible to appropriately describe the mechanism, morphology and kinetics of phase decomposition during the isothermal aging of these alloys. The growth kinetics of phase decomposition were observed to be very slow during the early stages of aging and to increase considerably as the aging progressed. The nonlinear equation was observed to be more suitable for describing the early stages of spinodal decomposition than the linear one.
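An explicit finite-difference step for the 1D nonlinear Cahn-Hilliard equation can be sketched as follows; the grid, mobility, and gradient-energy values are illustrative, not the Fe-Cr parameters, and 1D periodic boundaries stand in for the alloy microstructure:

```python
import numpy as np

# Explicit finite differences for the 1D nonlinear Cahn-Hilliard equation,
#   c_t = M * (c**3 - c - kappa * c_xx)_xx,
# on a periodic grid (all parameter values are illustrative).
n, dx, dt, M, kappa = 128, 1.0, 0.01, 1.0, 1.0
rng = np.random.default_rng(4)
c = 0.01 * rng.standard_normal(n)              # small fluctuation about c = 0
mean0 = c.mean()                               # total composition to be conserved

def lap(u):                                    # periodic second difference
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

for _ in range(2000):
    mu = c**3 - c - kappa * lap(c)             # chemical potential
    c = c + dt * M * lap(mu)                   # conservative explicit update

print(abs(c.mean() - mean0), c.std())          # mass conserved; fluctuations grow
```

Inside the spinodal region the small fluctuations amplify and coarsen into the two-phase pattern, while the conservative form keeps the mean composition fixed; note the explicit scheme needs dt below roughly dx**4 / (8 * M * kappa) for stability.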
Frequency domain methods applied to forecasting electricity markets
International Nuclear Information System (INIS)
Trapero, Juan R.; Pedregal, Diego J.
2009-01-01
The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
International Nuclear Information System (INIS)
Goncalves, Glenio A.; Bodmann, Bardo; Bogado, Sergio; Vilhena, Marco T.
2008-01-01
Analytical solutions for neutron transport in cylindrical geometry are available for isotropic problems but, to the best of our knowledge, not yet for anisotropic problems. In this work, an analytical solution for the neutron transport equation in an infinite cylinder assuming anisotropic scattering is reported. Here we specialize the solution, without loss of generality, to the linearly anisotropic problem using the combined decomposition and HTSN methods. The key feature of this method consists in the application of the decomposition method to the anisotropic problem, by virtue of the fact that the inverse of the operator associated with the isotropic problem is well known and determined by the HTSN approach. Following the idea of the decomposition method, we apply this operator to the integral term, assuming that the angular flux appearing in the integrand is equal to the HTSN solution interpolated by a polynomial with only even powers. This leads to the first approximation of the anisotropic solution. Proceeding further, we substitute this solution for the angular flux in the integral and apply the inverse operator for the isotropic problem to the integral term again, obtaining a new approximation for the angular flux. This iterative procedure yields a closed-form solution for the angular flux. The methodology can be generalized, in a straightforward manner, to transport problems with any degree of anisotropy. For the sake of illustration, we report numerical simulations for linearly anisotropic transport problems. (author)
International Nuclear Information System (INIS)
Detusheva, L.G.; Khankhasaeva, S.Ts.; Yurchenko, Eh.N.; Lazarenko, T.P.; Kozhevnikov, I.V.
1990-01-01
The method of quantitative IR spectroscopy was used to determine the equilibrium constants of the formation of HxPW11O39^(7-x)- (1) from HyP2W21O71^(6-y)- and W10O32^4- at pH 2.8-4.0 and of its decomposition at pH 7-8. The equilibrium constant of (1) formation in logarithmic coordinates changes linearly with the growth of the initial concentration of H3PW12O40 (2) from 0.005 to 0.1 mol/l. The equilibrium constant of (1) decomposition shows a complex dependence on the initial concentration of (2) due to parallel reactions. The equilibrium concentrations of the compounds in solutions of tungstophosphoric heteropolyacid at pH 3.25 and 7.68, calculated from the determined equilibrium constants and measured by 31P NMR, were correlated
Er2O3 coating development and improvisation by metal oxide decomposition method
International Nuclear Information System (INIS)
Rayjada, Pratipalsinh A.; Sircar, Amit; Raole, Prakash M.; Rahman, Raseel; Manocha, Lalit M.
2015-01-01
Compact, highly resistive and chemically as well as physically stable ceramic coatings are going to play a vital role in the successful and safe exploitation of the tritium breeding and recovery system in future fusion reactors. Due to its stability and high resistivity, Er2O3 was initially studied as a resistive coating to mitigate magnetohydrodynamic (MHD) forces in the liquid-Li-cooled blanket concept. Subsequently, its excellence as a tritium permeation barrier (TPB) was also revealed. Ever since, there has been a continual thrust on studying its relevant properties and application methods in the fusion technology and materials community. Metal oxide decomposition (MOD) is a chemical method of coating development. One of the major advantages of this process over most others is its simplicity and ability to coat complex structures swiftly. The component is dipped into a liquid solution of the Er2O3 and subsequently withdrawn at an optimized constant speed, so as to leave a uniform wet layer on the surface. This can be repeated multiple times, after drying the surface, to obtain the required thickness. Subsequently, the component is heat treated to obtain a crystalline, uniform Er2O3 coating. However, the porosity of the coatings and substrate oxidation are challenges in the MOD method. We successfully developed an Er2O3 coating in the cubic crystalline phase on P91 steel and fused silica substrates using a 3 wt% erbium carboxylic acid solution in a solvent containing 50.5 wt% turpentine, 25.5 wt% n-butyl acetate, 8.4 wt% ethyl acetate, a stabilizer, and a viscosity adjustor. A dip coating system equipped with an 800 °C quartz tube furnace was used to prepare these coatings. The withdrawal speed was chosen as 72 mm/min from the literature survey. The crystallization and microstructure are studied as functions of heat treatment temperature in the range of 500-700 °C. We also try to improve the uniform coverage and porosity of the coating by altering the
Ag nanoparticles hosted in monolithic mesoporous silica by thermal decomposition method
International Nuclear Information System (INIS)
Chen Wei; Zhang Junying
2003-01-01
Ag nanoparticles were obtained by thermal decomposition of silver nitrate within pores of mesoporous silica. Microstructure of this composite was examined by X-ray diffraction and high-resolution transmission electron microscopy. Optical measurements for the nanocomposite show that Ag particle doping leads to a large red shift of the absorption edge
Czech Academy of Sciences Publication Activity Database
Frouz, Jan; Holásek, M.; Šourková, Monika
2003-01-01
Vol. 22, No. 4 (2003), pp. 348-357, ISSN 1335-342X R&D Projects: GA ČR GA526/01/1055 Institutional research plan: CEZ:AV0Z6066911 Keywords: cellulose decomposition * methodology * soil Subject RIV: EH - Ecology, Behaviour Impact factor: 0.100, year: 2003
Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan
2017-08-01
Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and of 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
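The CCA stage of SSVEP detection can be sketched with synthetic signals: multichannel EEG is correlated with sine-cosine references at each candidate frequency and the largest canonical correlation wins. The channel count, noise level, and frequencies are assumptions, and the MEMD pre-filtering from the paper is omitted:

```python
import numpy as np

# CCA-based SSVEP frequency detection on synthetic EEG (illustrative signals;
# the MEMD sub-band filtering stage of the paper is omitted for brevity).
def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs, dur = 250.0, 2.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(5)
# 4-channel EEG with a 10 Hz SSVEP buried in noise
eeg = np.outer(np.sin(2 * np.pi * 10 * t + 0.3), rng.uniform(0.5, 1.0, 4))
eeg += 0.8 * rng.standard_normal((t.size, 4))

def reference(f):      # sin/cos at the stimulus frequency plus one harmonic
    return np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                            np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])

scores = {f: max_canon_corr(eeg, reference(f)) for f in (8.0, 10.0, 12.0, 15.0)}
print(max(scores, key=scores.get))    # -> 10.0, the stimulus frequency
```

MEMD-CCA would first decompose the channels into intrinsic mode functions and keep only the sub-bands around the stimulus frequencies before this correlation step, which is what suppresses the artifact-driven false matches.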
Directory of Open Access Journals (Sweden)
Subanar Subanar
2006-01-01
Full Text Available Recently, one of the central topics in the neural network (NN) community has been the issue of data preprocessing for the use of NN. In this paper, we investigate this topic, particularly the effect of the decomposition method as data preprocessing, and the use of NN for effectively modeling time series with both trend and seasonal patterns. Limited empirical studies on seasonal time series forecasting with neural networks show that some find neural networks able to model seasonality directly, so that prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study the effectiveness of data preprocessing, including detrending and deseasonalization, by applying the decomposition method to NN modeling and examining the forecasting performance. We use two kinds of data, simulated and real. The simulated data are examined with multiplicative trend and seasonality patterns. The results are compared to those obtained from classical time series models. Our results show that a combination of detrending and deseasonalization by means of the decomposition method is an effective data preprocessing step for the use of NN in forecasting trend and seasonal time series.
International Nuclear Information System (INIS)
Duan Zhisheng; Wang Jinzhi; Yang Ying; Huang Lin
2009-01-01
This paper surveys frequency-domain and time-domain methods for feedback nonlinear systems and their possible applications to chaos control, coupled systems and complex dynamical networks. The absolute stability of Lur'e systems with single equilibrium and global properties of a class of pendulum-like systems with multi-equilibria are discussed. Time-domain and frequency-domain criteria for the convergence of solutions are presented. Some latest results on analysis and control of nonlinear systems with multiple equilibria and applications to chaos control are reviewed. Finally, new chaotic oscillating phenomena are shown in a pendulum-like system and a new nonlinear system with an attraction/repulsion function.
Investigation of domain walls in GMO crystals by conoscope method
International Nuclear Information System (INIS)
Radchenko, I.R.; Filimonova, L.A.
1993-01-01
Patterns of polarized-beam interference (conoscopic patterns) enable assessment of the orientation and parameters of a crystal's optical indicatrix. The conoscopic patterns of gadolinium molybdate crystals presented here, taken in the vicinity of plane and wedge-like domain walls, differ from the conoscopic patterns of the crystals far away from these walls, which points to changes occurring in the crystal near the domain walls.
Improved methods for nightside time domain Lunar Electromagnetic Sounding
Fuqua-Haviland, H.; Poppe, A. R.; Fatemi, S.; Delory, G. T.; De Pater, I.
2017-12-01
Time Domain Electromagnetic (TDEM) Sounding isolates induced magnetic fields to remotely deduce material properties at depth. The first step of performing TDEM Sounding at the Moon is to fully characterize the dynamic plasma environment, and isolate geophysically induced currents from concurrently present plasma currents. The transfer function method requires a two-point measurement: an upstream reference measuring the pristine solar wind, and one downstream near the Moon. This method was last performed during Apollo assuming the induced fields on the nightside of the Moon expand as in an undisturbed vacuum within the wake cavity [1]. Here we present an approach to isolating induction and performing TDEM with any two point magnetometer measurement at or near the surface of the Moon. Our models include a plasma induction model capturing the kinetic plasma environment within the wake cavity around a conducting Moon, and a geophysical forward model capturing induction in a vacuum. The combination of these two models enable the analysis of magnetometer data within the wake cavity. Plasma hybrid models use the upstream plasma conditions and interplanetary magnetic field (IMF) to capture the wake current systems formed around the Moon. The plasma kinetic equations are solved for ion particles with electrons as a charge-neutralizing fluid. These models accurately capture the large scale lunar wake dynamics for a variety of solar wind conditions: ion density, temperature, solar wind velocity, and IMF orientation [2]. Given the 3D orientation variability coupled with the large range of conditions seen within the lunar plasma environment, we characterize the environment one case at a time. The global electromagnetic induction response of the Moon in a vacuum has been solved numerically for a variety of electrical conductivity models using the finite-element method implemented within the COMSOL software. This model solves for the geophysically induced response in vacuum to
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
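The core of the modification is a one-line change to the eigenvalues. A minimal NumPy sketch (with an added floor on near-zero eigenvalues as a safeguard the abstract does not specify) is:

```python
import numpy as np

def modified_newton_direction(H, g, floor=1e-8):
    """Descent direction from the eigenvalue-modified Hessian.

    Takes the eigenvalue decomposition of H, replaces negative eigenvalues
    with their absolute values (flooring near-zero ones), rebuilds H and
    solves for the Newton step on the modified matrix.
    """
    w, V = np.linalg.eigh(H)              # H = V diag(w) V^T
    w = np.maximum(np.abs(w), floor)      # |lambda|, kept away from zero
    H_mod = (V * w) @ V.T                 # reconstructed positive-definite H
    return -np.linalg.solve(H_mod, g)

# Indefinite Hessian: the plain Newton direction would point uphill in part.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)
print(d, g @ d)
```

Because the modified Hessian is positive definite, g·d < 0 holds for any nonzero gradient, i.e. the step is always a descent direction.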
High-purity Cu nanocrystal synthesis by a dynamic decomposition method
Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui
2014-01-01
Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. The process is investigated through a combined experimental and computational approach. The decomposition kinetics is researched via differential sca...
International Nuclear Information System (INIS)
Jang, Inseok; Park, Jinkyun; Seong, Poonghyun
2011-01-01
Research highlights: → No evaluation method is available for operators' communication quality in NPPs. → To model this evaluation method, the Work Domain Analysis (WDA) method was introduced. → The proposed method was applied to NPP MCR operators. → The quality of operators' communication can be evaluated with the proposed method. - Abstract: The evolution of work demands toward computerization has made systems more complex; this field is now known as Complex Socio-Technical Systems. Communication failures are a problem of Complex Socio-Technical Systems and have been found to cause many incidents and accidents in various industries, including the nuclear, aerospace and railway industries. Despite the many studies on the severity of communication failures, there is no evaluation method for operators' communication quality in Nuclear Power Plants (NPPs). Therefore, the objectives of this study are to develop an evaluation method for the quality of NPP Main Control Room (MCR) operators' communication and to apply the proposed method to operators in a full-scope simulator. To develop the proposed method, the Work Domain Analysis (WDA) method is introduced. Several characteristics of WDA, including the Abstraction Decomposition Space (ADS) and the diagonal of the ADS, are the important points in developing an evaluation method for the quality of NPP MCR operators' communication. In addition, to apply the proposed method, nine teams working in NPPs participated in a field simulation. The results of this evaluation reveal that operators' communication quality improved as a greater proportion of the components in the developed evaluation criteria were mentioned. Therefore, the proposed method could be useful for evaluating the communication quality in any complex system.
A Method to Identify Nucleolus-Associated Chromatin Domains (NADs).
Carpentier, Marie-Christine; Picart-Picolo, Ariadna; Pontvianne, Frédéric
2018-01-01
The nuclear context needs to be taken into consideration to better understand the mechanisms shaping the epigenome and its organization, and therefore its impact on gene expression. For example, in Arabidopsis, heterochromatin is preferentially localized at the nuclear and nucleolar periphery. Although the chromatin domains associating with the nuclear periphery remain to be identified in plant cells, Nucleolus-Associated chromatin Domains (NADs) can be identified thanks to a protocol allowing the isolation of pure nucleoli. We describe here the protocol enabling the identification of NADs in Arabidopsis. Provided that a nucleolus marker is transferred to other crop species as described here, this protocol is broadly applicable.
A Momentum-Exchange/Fictitious Domain-Lattice Boltzmann Method for Solving Particle Suspensions
Energy Technology Data Exchange (ETDEWEB)
Jeon, Seok Yun; Yoon, Joon Yong [Hanyang Univ., Seoul (Korea, Republic of); Kim, Chul Kyu [Korea Institute of Civil Engineering and Building Technology, Goyang (Korea, Republic of); Shin, Myung Seob [Korea Intellectual Property Office(KIPO), Daejeon (Korea, Republic of)
2016-06-15
This study presents a Lattice Boltzmann Method (LBM) coupled with a momentum-exchange approach/fictitious domain (MEA/FD) method for the simulation of particle suspensions. The method combines the advantages of the LB and the FD methods by using two unrelated meshes, namely, a Eulerian mesh for the flow domain and a Lagrangian mesh for the solid domain. The rigid body conditions are enforced by the momentum-exchange scheme in which the desired value of velocity is imposed directly in the particle inner domain by introducing a pseudo body force to satisfy the constraint of rigid body motion, which is the key idea of a fictitious domain (FD) method. The LB-MEA/FD method has been validated by simulating two different cases, and the results have been compared with those through other methods. The numerical evidence illustrated the capability and robustness of the present method for simulating particle suspensions.
A Novel Transfer Learning Method Based on Common Space Mapping and Weighted Domain Matching
Liang, Ru-Ze; Xie, Wei; Li, Weizhi; Wang, Hongqi; Wang, Jim Jing-Yan; Taylor, Lisa
2017-01-01
In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of the two domains to a single common space and learn a classifier in this common space. We then adapt the common classifier to the two domains by adding two adaptive functions to it, respectively. In the common space, the source domain data points are weighted and matched to the target domain in terms of distributions. The weighting terms of the source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over some benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.
Directory of Open Access Journals (Sweden)
Antonio Gledson Goulart
2013-12-01
Full Text Available In this paper, the equation for the gravity wave spectra in the mean atmosphere is solved analytically, without linearization, by the Adomian decomposition method. As a consequence, the nonlinear nature of the problem is preserved and the errors found in the results are due only to the parameterization. The results, with the parameterization applied in the simulations, indicate that the linear solution of the equation is a good approximation only for heights below ten kilometers, because linearizing the equation leads to a solution that does not correctly describe the kinetic energy spectra.
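The Adomian decomposition machinery can be illustrated on a textbook nonlinear test problem rather than the gravity-wave spectral equation itself: for y' = y², y(0) = 1, the Adomian polynomials of y² generate the series terms y_n = xⁿ, whose partial sums converge to the exact solution 1/(1 − x):

```python
import numpy as np

def adomian_y_squared(n_terms):
    """Adomian series for y' = y^2, y(0) = 1 (exact solution 1/(1 - x)).

    Series terms are polynomial coefficient arrays (lowest order first).
    For the nonlinearity y^2 the Adomian polynomials are
    A_n = sum_{k=0..n} y_k * y_{n-k}, and y_{n+1} = integral_0^x A_n ds.
    """
    terms = [np.array([1.0])]                        # y_0 = y(0) = 1
    for n in range(n_terms - 1):
        A = np.zeros(2 * len(terms[-1]))
        for k in range(n + 1):
            prod = np.convolve(terms[k], terms[n - k])
            A[:len(prod)] += prod                    # build A_n
        y_next = np.concatenate([[0.0], A / np.arange(1, len(A) + 1)])
        terms.append(np.trim_zeros(y_next, 'b'))     # term-by-term integration
    return terms

terms = adomian_y_squared(8)                         # here y_n = x^n
approx = sum(np.polynomial.polynomial.polyval(0.5, c) for c in terms)
print(approx)
```

Note that, as in the abstract, no linearization is involved: the nonlinearity enters only through the Adomian polynomials.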
THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)
A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of the finite differences of conventional finite-difference time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...
Overview of multi-input frequency domain modal testing methods with an emphasis on sine testing
Rost, Robert W.; Brown, David L.
1988-01-01
An overview of the current state-of-the-art in multiple-input, multiple-output modal testing technology is presented. A very brief review of current time-domain methods is given. A detailed review of frequency- and spatial-domain methods is then presented, with an emphasis on sine testing.
Analysis of the Diffuse Domain Method for Second Order Elliptic Boundary Value Problems
Burger, Martin; Elvetun, Ole; Schlottbom, Matthias
2017-01-01
The diffuse domain method for partial differential equations on complicated geometries recently received strong attention in particular from practitioners, but many fundamental issues in the analysis are still widely open. In this paper, we study the diffuse domain method for approximating second
Directory of Open Access Journals (Sweden)
Levi Lopes Teixeira
2015-12-01
Full Text Available Time series forecasting is widely used in various areas of human knowledge, especially in the planning and strategic direction of companies. The success of this task depends on the forecasting techniques applied. In this paper, a hybrid approach to time series forecasting is suggested. To validate the methodology, a time series already modeled by other authors was chosen, allowing the comparison of results. The proposed methodology includes the following techniques: wavelet shrinkage, wavelet decomposition at level r, and artificial neural networks (ANNs). First, the time series to be forecasted is submitted to the proposed wavelet filtering method, which decomposes it into components of trend and linear residue. Both are then decomposed via level-r wavelet decomposition, generating r + 1 wavelet components (WCs) for each one, and each WC is individually modeled by an ANN. Finally, the predictions for all WCs are linearly combined, producing forecasts for the underlying time series. For evaluation purposes, the Canadian lynx time series has been used, and all results achieved by the proposed method were better than others in the existing literature.
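The decompose-model-recombine pattern can be sketched with a hand-rolled Haar multiresolution standing in for the paper's wavelet machinery, and a trivial AR(1) predictor standing in for the ANNs; both substitutions are assumptions made to keep the sketch self-contained:

```python
import numpy as np

def haar_components(y):
    """Additive Haar multiresolution split: y == sum of returned components.

    Works for len(y) a power of two; returns one detail series per level
    plus the final approximation, all at the original length.
    """
    if len(y) == 1:
        return [y.astype(float)]
    a = (y[0::2] + y[1::2]) / 2.0                      # approximation
    d = (y[0::2] - y[1::2]) / 2.0                      # detail
    detail = np.repeat(d, 2) * np.tile([1.0, -1.0], len(d))
    return [detail] + [np.repeat(c, 2) for c in haar_components(a)]

def ar1_next(c):
    """One-step forecast of a component by a least-squares AR(1) fit."""
    phi = (c[:-1] @ c[1:]) / (c[:-1] @ c[:-1] + 1e-12)
    return phi * c[-1]

t = np.arange(64)
y = 0.1 * t + np.sin(2 * np.pi * t / 8)                # trend + seasonality
comps = haar_components(y)                             # wavelet components
forecast = sum(ar1_next(c) for c in comps)             # recombine predictions
print(len(comps), forecast)
```

Each component is modeled separately and the component forecasts are summed, mirroring the linear recombination step of the methodology.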
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For the analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in the TGA on solid-phase extraction (SPE) material (twisters). The twisters are subsequently analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in the TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS.
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
A comparison of three time-domain anomaly detection methods
Energy Technology Data Exchange (ETDEWEB)
Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute
1996-01-01
Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, being of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
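Of the three, the sequential probability ratio test is easy to sketch for the stated task of detecting a change in the standard deviation of AR-model residuals. The Gaussian-residual assumption, the chosen alternative sigma1 and the restart-on-accept-H0 behavior are modeling choices for this sketch, not details from the paper:

```python
import numpy as np

def sprt_std_change(residuals, sigma0, sigma1, alpha=1e-4, beta=1e-4):
    """Sequential probability ratio test for a change in residual std.

    Tests H0: sigma = sigma0 against H1: sigma = sigma1 (> sigma0) on
    zero-mean Gaussian residuals from an autoregressive model; returns the
    index at which H1 is accepted (alarm), or None. Thresholds follow
    Wald's approximations for the given error probabilities.
    """
    upper = np.log((1 - beta) / alpha)      # alarm threshold
    lower = np.log(beta / (1 - alpha))      # accept-H0 threshold
    llr = 0.0
    for i, r in enumerate(residuals):
        # log-likelihood-ratio increment contributed by one sample
        llr += np.log(sigma0 / sigma1) + 0.5 * r * r * (sigma0**-2 - sigma1**-2)
        if llr >= upper:
            return i                        # anomaly detected
        if llr <= lower:
            llr = 0.0                       # restart the sequential test
    return None

rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 1.0, 300)           # nominal residual noise
anomaly = rng.normal(0.0, 2.0, 200)         # doubled standard deviation
alarm_at = sprt_std_change(np.concatenate([quiet, anomaly]), 1.0, 2.0)
print(alarm_at)
```

The alarm index lands shortly after the change point; the trade-off between false alarm rate and average time to alarm is set by alpha and beta, echoing the optimization described in the abstract.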
Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna
2017-07-01
Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pigs (50-90 kg) were decomposed over five months and decompositional features recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable to study variables influencing decomposition.
DEFF Research Database (Denmark)
Shyroki, Dzmitry; Lavrinenko, Andrei
2007-01-01
A complex-coordinate method, known under the guise of the perfectly matched layer (PML) method for treating unbounded domains in computational electrodynamics, is related to similar techniques in fluid dynamics and classical quantum theory. It may also find use in electronic-structure finite-difference simulations. Straightforward transfer of the PML formulation to other fields does not seem feasible, however, since it is a unique feature of electrodynamics - the natural invariance - that allows the analytic trick of complex coordinate scaling to be represented as a pure modification of local material parameters...
International Nuclear Information System (INIS)
Yang Jia; Ge Liangquan; Xiong Shengqing
2010-01-01
Given the features of the spectral shape of Chang'e-1 gamma-ray spectrometer (CE1-GRS) data, it is difficult to determine elemental compositions on the lunar surface. To address this problem, this paper proposes using the noise-adjusted singular value decomposition (NASVD) method to extract orthogonal spectral components from CE1-GRS data. The peak signals in the lower-order layer spectra corresponding to the observed spectrum of each lunar region are then analyzed. The elemental composition of each lunar region can be determined based on whether the energy of each peak signal equals the energy of the characteristic gamma-ray line emissions of specific elements. The result shows that a number of elements such as U, Th, K, Fe, Ti, Si, O, Al, Mg, Ca and Na are qualitatively determined by this method. (authors)
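The NASVD idea, whiten the assumed Poisson counting noise, truncate the SVD, then undo the scaling, can be sketched on synthetic spectra. The channel-wise square-root scaling used here is one common noise adjustment, and the synthetic shapes are not CE1-GRS data:

```python
import numpy as np

def nasvd_denoise(S, k):
    """Noise-adjusted SVD smoothing of a spectrum matrix S (rows = spectra).

    Counts are assumed Poisson, so channels are scaled by the square root of
    the mean spectrum (whitening the noise) before the SVD; the spectra are
    then rebuilt from the k leading orthogonal components and unscaled.
    """
    scale = np.sqrt(np.maximum(S.mean(axis=0), 1e-12))   # Poisson noise level
    W = S / scale                                        # noise-adjusted matrix
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] * scale           # rank-k reconstruction

# synthetic test: two spectral shapes, random abundances, Poisson noise
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 64)
shapes = np.vstack([np.exp(-3 * x), np.exp(-((x - 0.5) / 0.1) ** 2)])
mix = rng.uniform(0.5, 2.0, (50, 2)) @ (200 * shapes)    # noise-free truth
S = rng.poisson(mix).astype(float)
err = np.linalg.norm(nasvd_denoise(S, 2) - mix) / np.linalg.norm(mix)
print(err)
```

The rank-k reconstruction suppresses most of the counting noise while keeping the spectral peaks, which is what makes the subsequent peak-energy analysis tractable.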
DEFF Research Database (Denmark)
Xu, Shenzhi; Ai, Xiaomeng; Fang, Jiakun
2017-01-01
Photovoltaic (PV) power generation has developed considerably in recent years, but the intermittency and volatility of its output have seriously affected the secure operation of the power system. In order to better understand PV generation and provide sufficient data support for analyzing its impacts, a novel generation method for PV power time series combining a decomposition technique and Markov chain theory is presented in this paper. It extracts important factors from historical data of existing PV plants and then reproduces new data with similar patterns. In detail, the proposed method first decomposes the PV power time series into three parts: an ideal output curve, an amplitude parameter series, and a random fluctuating component. It then generates the daily ideal output curve by extracting typical daily data, and the amplitude parameter series based on the Markov chain Monte Carlo (MCMC...
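The amplitude-parameter part of such a scheme can be sketched with a discretized first-order Markov chain. The bin count, the Beta-distributed stand-in for historical amplitudes, and the sinusoidal ideal output curve are all assumptions for the demo, not the paper's data:

```python
import numpy as np

def fit_markov_chain(states, n_states):
    """First-order transition matrix estimated from an integer state series."""
    P = np.ones((n_states, n_states))            # Laplace-smoothed counts
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

def sample_chain(P, n, start, rng):
    """Draw a synthetic state series from the fitted transition matrix."""
    out = [start]
    for _ in range(n - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return np.array(out)

rng = np.random.default_rng(0)
hist_amp = rng.beta(5, 2, 365)                   # stand-in for a year of history
states = np.minimum((hist_amp * 5).astype(int), 4)   # 5 amplitude bins
P = fit_markov_chain(states, 5)
synth = sample_chain(P, 365, int(states[0]), rng)    # new amplitude series
ideal = np.maximum(np.sin(np.linspace(0, np.pi, 24)), 0)  # daily clear-sky shape
pv_day = (synth[0] + 0.5) / 5 * ideal            # one reconstructed day profile
print(P.round(2))
```

Scaling the ideal daily curve by each sampled amplitude (and adding a fluctuating component, omitted here) reassembles a synthetic PV power time series with the historical transition statistics.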
Sun, Qi; Fu, Shujun
2017-09-20
Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various kinds of noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns, demonstrating that the RSVD produces the best estimation results while also requiring relatively little computation time.
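A plain gradient-field orientation estimator via SVD, without the regularization that distinguishes the authors' RSVD, can be sketched as follows; the window size and the test fringe pattern are arbitrary choices:

```python
import numpy as np

def fringe_orientation(img, win=9):
    """Local fringe orientation from the gradient field via plain SVD.

    The dominant gradient direction in each win x win window is the leading
    right-singular vector of the stacked gradient vectors; the fringe
    orientation is perpendicular to it, folded into [0, pi). This is an
    unregularized baseline, not the RSVD variant of the abstract.
    """
    gy, gx = np.gradient(img.astype(float))
    h = win // 2
    theta = np.zeros_like(img, dtype=float)
    for i in range(h, img.shape[0] - h):
        for j in range(h, img.shape[1] - h):
            G = np.column_stack([gx[i - h:i + h + 1, j - h:j + h + 1].ravel(),
                                 gy[i - h:i + h + 1, j - h:j + h + 1].ravel()])
            _, _, Vt = np.linalg.svd(G, full_matrices=False)
            vx, vy = Vt[0]                     # dominant gradient direction
            theta[i, j] = (np.arctan2(vy, vx) + np.pi / 2) % np.pi
    return theta

xx = np.tile(np.arange(32), (32, 1))
img = np.cos(2 * np.pi * xx / 8)               # vertical fringes
theta = fringe_orientation(img)
print(theta[16, 16])
```

For the vertical test fringes the gradient is horizontal, so the estimated orientation is close to π/2 everywhere away from the borders.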
Formal Methods and Safety Certification: Challenges in the Railways Domain
DEFF Research Database (Denmark)
Fantechi, Alessandro; Ferrari, Alessio; Gnesi, Stefania
2016-01-01
The railway signalling sector has historically been a source of success stories about the adoption of formal methods in the certification of the software safety of computer-based control equipment.
International Nuclear Information System (INIS)
Gao Ruorui; Zhang Yue; Yu Wei; Xiong Rui; Shi Jing
2012-01-01
MnFe2O4 nano-particles with an average size of about 7 nm were synthesized by the thermal decomposition method. Based on the magnetic hysteresis loops measured at different temperatures, the temperature-dependent saturation magnetization (M_S) and coercivity (H_C) are determined. It is shown that above 20 K the temperature dependence of M_S and H_C indicates the magnetic behavior of single-domain nano-particles, while below 20 K the change of M_S and H_C indicates the freezing of the spin-glass-like state on the surfaces. By measuring the magnetization-temperature (M-T) curves under zero-field-cooling (ZFC) and field-cooling procedures at different applied fields, superparamagnetic behavior is also studied. Even though peaks can be observed below 160 K in the ZFC M-T curves, superparamagnetism does not appear until the temperature goes above 300 K, which is related to the strong inter-particle interaction. - Highlights: ► MnFe2O4 nano-particles with a size of 7 nm were prepared. ► The surface spin-glass-like state is frozen below 20 K. ► The peaks in ZFC magnetization-temperature curves are observed below 160 K. ► The inter-particle interaction inhibits superparamagnetism at room temperature.
Directory of Open Access Journals (Sweden)
A. Becker
2007-06-01
Full Text Available In this paper a hybrid method combining the Time-Domain Method of Moments (TD-MoM, the Time-Domain Uniform Theory of Diffraction (TD-UTD and the Finite-Difference Time-Domain Method (FDTD is presented. When applying this new hybrid method, thin-wire antennas are modeled with the TD-MoM, inhomogeneous bodies are modelled with the FDTD and large perfectly conducting plates are modelled with the TD-UTD. All inhomogeneous bodies are enclosed in a so-called FDTD-volume and the thin-wire antennas can be embedded into this volume or can lie outside. The latter avoids the simulation of white space between antennas and inhomogeneous bodies. If the antennas are positioned into the FDTD-volume, their discretization does not need to agree with the grid of the FDTD. By using the TD-UTD large perfectly conducting plates can be considered efficiently in the solution-procedure. Thus this hybrid method allows time-domain simulations of problems including very different classes of objects, applying the respective most appropriate numerical techniques to every object.
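The FDTD building block of such a hybrid scheme reduces, in one dimension and normalized units, to a pair of leapfrog updates. This sketch covers only the FDTD part (free space, PEC ends, magic time step c·Δt = Δx), not the MoM or UTD coupling of the paper:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=400):
    """Bare-bones 1D FDTD in normalized units (free space, PEC ends).

    ez lives on cell nodes, hy on the half-grid between them; each leapfrog
    step updates hy from the spatial difference of ez and vice versa. With
    c*dt = dx the 1D scheme is exact (dispersionless).
    """
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    for t in range(n_steps):
        hy += np.diff(ez)                                  # H-field update
        ez[1:-1] += np.diff(hy)                            # E-field update
        ez[n_cells // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
    return ez

ez = fdtd_1d()
print(ez.shape, np.abs(ez).max())
```

The Gaussian pulse splits into two counter-propagating waves that reflect off the fixed (PEC) end nodes, which is the behavior the hybrid method replaces with MoM/UTD treatment for wires and large plates.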
Theoretical Methods of Domain Structures in Ultrathin Ferroelectric Films: A Review
Directory of Open Access Journals (Sweden)
Jianyi Liu
2014-09-01
Full Text Available This review covers methods and recent developments in the theoretical study of domain structures in ultrathin ferroelectric films. The review begins with an introduction to some basic concepts and theories (e.g., polarization and its modern theory, ferroelectric phase transitions, domain formation, and finite size effects) that are relevant to the study of domain structures in ultrathin ferroelectric films. Basic techniques and recent progress of a variety of important approaches for domain structure simulation, including first-principles calculation, molecular dynamics, Monte Carlo simulation, the effective Hamiltonian approach and phase field modeling, as well as multiscale simulation, are then elaborated. For each approach, its important features and relative merits over other approaches for modeling domain structures in ultrathin ferroelectric films are discussed. Finally, we review recent theoretical studies on some important issues of domain structures in ultrathin ferroelectric films, with an emphasis on the effects of interfacial electrostatics, boundary conditions and external loads.
Cunha, T.; Mendes, M.; Ferreira da Silva, F.; Eden, S.; García, G.; Bacchus-Montabonel, M.-C.; Limão-Vieira, P.
2018-04-01
We report on a combined experimental and theoretical study of electron-transfer-induced decomposition of adenine (Ad) and a selection of analog molecules in collisions with potassium (K) atoms. Time-of-flight negative ion mass spectra have been obtained in a wide collision energy range (6-68 eV in the centre-of-mass frame), providing a comprehensive investigation of the fragmentation patterns of purine (Pu), adenine (Ad), 9-methyl adenine (9-mAd), 6-dimethyl adenine (6-dimAd), and 2-D adenine (2-DAd). Following our recent communication about selective hydrogen loss from the transient negative ions (TNIs) produced in these collisions [T. Cunha et al., J. Chem. Phys. 148, 021101 (2018)], this work focuses on the production of smaller fragment anions. In the low-energy part of the present range, several dissociation channels that are accessible in free electron attachment experiments are absent from the present mass spectra, notably NH2 loss from adenine and 9-methyl adenine. This can be understood in terms of a relatively long transit time of the K+ cation in the vicinity of the TNI tending to enhance the likelihood of intramolecular electron transfer. In this case, the excess energy can be redistributed through the available degrees of freedom inhibiting fragmentation pathways. Ab initio theoretical calculations were performed for 9-methyl adenine (9-mAd) and adenine (Ad) in the presence of a potassium atom and provided a strong basis for the assignment of the lowest unoccupied molecular orbitals accessed in the collision process.
Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun
2017-06-01
Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.
International Nuclear Information System (INIS)
Jang, In Seok
2010-02-01
The evolution of work demands toward computerization has made systems complex and complicated; this field is called Complex Socio-Technical Systems. Communication failure is one problem of Complex Socio-Technical Systems, and it has been found to be the reason for many incidents and accidents in various industries, including the nuclear, aerospace and railway industries. Despite the many studies on the severity of communication failure, there is no evaluation method for operators' communication quality in NPPs. Therefore, the objectives of this study are to develop an evaluation method for the quality of NPP Main Control Room (MCR) operators' communication and to apply the proposed method to operators in a full-scope simulator. In order to develop the proposed method, the Work Domain Analysis (WDA) method is introduced. Several characteristics of WDA, such as the Abstraction Decomposition Space (ADS) and the diagonal of the ADS, are the key points in developing an evaluation method for the quality of NPP MCR operators' communication. In addition, in order to apply the proposed method, nine teams working in NPPs participated in a field simulation. The evaluation results reveal that operators' communication quality was higher as a larger portion of the components in the developed evaluation criteria were mentioned. Therefore, the proposed method could be a useful one for evaluating communication quality in any complex system. In order to verify that the proposed method is meaningful for evaluating communication quality, the evaluation results were further investigated with objective performance measures. This further investigation also supports the idea that the proposed method can be used to evaluate communication quality.
DomPep--a general method for predicting modular domain-mediated protein-protein interactions.
Directory of Open Access Journals (Sweden)
Lei Li
Full Text Available Protein-protein interactions (PPIs are frequently mediated by the binding of a modular domain in one protein to a short, linear peptide motif in its partner. The advent of proteomic methods such as peptide and protein arrays has led to the accumulation of a wealth of interaction data for modular interaction domains. Although several computational programs have been developed to predict modular domain-mediated PPI events, they are often restricted to a given domain type. We describe DomPep, a method that can potentially be used to predict PPIs mediated by any modular domains. DomPep combines proteomic data with sequence information to achieve high accuracy and high coverage in PPI prediction. Proteomic binding data were employed to determine a simple yet novel parameter Ligand-Binding Similarity which, in turn, is used to calibrate Domain Sequence Identity and Position-Weighted-Matrix distance, two parameters that are used in constructing prediction models. Moreover, DomPep can be used to predict PPIs for both domains with experimental binding data and those without. Using the PDZ and SH2 domain families as test cases, we show that DomPep can predict PPIs with accuracies superior to existing methods. To evaluate DomPep as a discovery tool, we deployed DomPep to identify interactions mediated by three human PDZ domains. Subsequent in-solution binding assays validated the high accuracy of DomPep in predicting authentic PPIs at the proteome scale. Because DomPep makes use of only interaction data and the primary sequence of a domain, it can be readily expanded to include other types of modular domains.
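A toy sketch of the kind of score combination described (the function names, weights, and the mapping from PWM distance to similarity are illustrative assumptions, not DomPep's actual calibration):

```python
import numpy as np

def sequence_identity(a: str, b: str) -> float:
    """Fraction of identical residues between two aligned domain sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pwm_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Euclidean distance between two position-weight matrices
    (rows = peptide positions, columns = residue frequencies)."""
    return float(np.linalg.norm(p - q))

def interaction_score(seq_a, seq_b, pwm_a, pwm_b, w_id=0.6, w_pwm=0.4):
    """Toy combined score: high when the query domain resembles a domain
    with a known binding profile both in sequence and in binding specificity."""
    sim = 1.0 / (1.0 + pwm_distance(pwm_a, pwm_b))  # map distance into (0, 1]
    return w_id * sequence_identity(seq_a, seq_b) + w_pwm * sim
```

Two identical domains score 1.0; a mismatching residue or a diverging PWM lowers the score, mimicking how sequence identity and PWM distance jointly inform the prediction.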
Design of potentially active ligands for SH2 domains by molecular modeling methods
Directory of Open Access Journals (Sweden)
Hurmach V. V.
2014-07-01
Full Text Available The search for new chemical structures possessing specific biological activity is a complex problem that requires the latest achievements of molecular modeling technologies. It is well known that SH2 domains play a major role in ontogenesis as intermediaries of specific protein-protein interactions. Aim. To develop an algorithm for investigating the binding properties of SH2 domains and to search for new potentially active compounds for the whole SH2 domain class. Methods. In this paper, we utilize a complex of computer modeling methods to create a generic set of potentially active compounds targeting the whole class of SH2 domains. A cluster analysis of all available three-dimensional structures of SH2 domains was performed and general pharmacophore models were formulated. The models were used for virtual screening of a collection of drug-like compounds provided by Enamine Ltd. Results. A design technique for a library of potentially active compounds for the SH2 domain class was proposed. Conclusions. An original algorithm for investigating SH2 domains with the molecular docking method was developed. Using this algorithm, active compounds for SH2 domains were found.
Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang
2018-01-01
In the non-intrusive load monitoring mode, load decomposition can reflect the running state of each load, which helps the user reduce unnecessary energy costs. In combination with the demand-side management measure of time-of-use (TOU) pricing, a residential load influence analysis method for TOU pricing based on non-intrusive load monitoring data is proposed in this paper. Relying on classification of residential loads from their current signals, the types of equipment in a household and their self-elasticity and cross-elasticity across different time periods can be obtained. Tests on actual household load data show that, under the influence of TOU pricing, the operation of some equipment is shifted to lower-priced hours: consumption during peak-price periods is reduced while consumption during off-peak periods increases, with a certain regularity.
Li, Ping; Shi, Yifei; Jiang, Lijun; Bagci, Hakan
2014-01-01
A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.
Li, Ping
2014-05-01
A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to the scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.
Directory of Open Access Journals (Sweden)
Batakliev Todor
2014-06-01
Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers. Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates
Li, Ping
2014-07-01
This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from outside the truncation boundary through the BI method based on the Huygens' principle. The advantages of the proposed method are that it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and that well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total number of mesh elements. Furthermore, low frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.
Numerical simulation of electromagnetic wave propagation using time domain meshless method
International Nuclear Information System (INIS)
Ikuno, Soichiro; Fujita, Yoshihisa; Itoh, Taku; Nakata, Susumu; Nakamura, Hiroaki; Kamitani, Atsushi
2012-01-01
Electromagnetic wave propagation in waveguides of various shapes is simulated using the meshless time-domain method (MTDM). Generally, the finite-difference time-domain (FDTD) method is applied to electromagnetic wave propagation simulations. However, the numerical domain must be divided into rectangular meshes if the FDTD method is applied. On the other hand, the node disposition of MTDM can easily describe the structure of an arbitrarily shaped waveguide; this is the great advantage of the meshless time-domain method. The results of computations show that the damping rate is stably calculated in the case R < 0.03, where R denotes the support radius of the weight function for the shape function. The results also indicate that the support radius R of the weight functions should be chosen small, and that monomials must be used for calculating the shape functions. (author)
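For contrast with the meshless scheme, the standard FDTD update on a rectangular grid that the abstract refers to can be sketched in one dimension (normalized units; the grid size, Courant number and source are illustrative choices, not the paper's setup):

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300):
    """Minimal 1-D FDTD (Yee) update with Courant number 1/2.

    Ez and Hy live on staggered grid points; a Gaussian pulse is injected
    as a soft source in the middle of the domain. Endpoints of Ez are held
    at zero (PEC walls)."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    courant = 0.5
    for t in range(n_steps):
        hy += courant * np.diff(ez)                        # update H from curl E
        ez[1:-1] += courant * np.diff(hy)                  # update E from curl H
        ez[n_cells // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
    return ez
```

The rigid rectangular staggering of ez and hy is exactly what MTDM avoids by placing nodes freely along the waveguide geometry.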
Tomar, S.K.
2002-01-01
It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.
Time-domain Green's Function Method for three-dimensional nonlinear subsonic flows
Tseng, K.; Morino, L.
1978-01-01
The Green's Function Method for linearized 3D unsteady potential flow (embedded in the computer code SOUSSA P) is extended to include the time-domain analysis as well as the nonlinear term retained in the transonic small disturbance equation. The differential-delay equations in time, as obtained by applying the Green's Function Method (in a generalized sense) and the finite-element technique to the transonic equation, are solved directly in the time domain. Comparisons are made with both linearized frequency-domain calculations and existing nonlinear results.
Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics
Gedney, Stephen
2011-01-01
Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics provides a comprehensive tutorial of the most widely used method for solving Maxwell's equations -- the Finite Difference Time-Domain Method. This book is an essential guide for students, researchers, and professional engineers who want to gain a fundamental knowledge of the FDTD method. It can accompany an undergraduate or entry-level graduate course or be used for self-study. The book provides all the background required to either research or apply the FDTD method for the solution of Maxwell's equations to p
Sweeney, William; Lee, James; Abid, Nauman; DeMeo, Stephen
2014-01-01
An experiment is described that determines the activation energy (E_a) of the iodide-catalyzed decomposition reaction of hydrogen peroxide in a much more efficient manner than previously reported in the literature. Hydrogen peroxide, spontaneously or with a catalyst, decomposes to oxygen and water. Because the decomposition reaction is…
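The two-point Arrhenius relation underlying such a determination can be sketched as follows (a generic illustration, not the experiment's specific procedure; the rate constants and temperatures in the example are made-up values):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate of the activation energy in J/mol:
    ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
```

Measuring the rate constant at just two temperatures and inverting this relation is what makes such determinations efficient compared with a full multi-temperature Arrhenius plot.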
High-capacity method for hiding data in the discrete cosine transform domain
Qazanfari, Kazem; Safabakhsh, Reza
2013-10-01
Steganography is the art and science of hiding data in different media such as texts, audios, images, and videos. Data hiding techniques are generally divided into two groups: spatial and frequency domain techniques. Spatial domain methods generally have low security and, as a result, are less attractive to researchers. The discrete cosine transform (DCT) is the most common transform domain used in steganography and JPEG compression. Since a large number of the DCT coefficients of JPEG images are zero, the capacity of DCT domain-based steganography methods is not very high. We present a high-capacity method for hiding messages in the DCT domain. We describe the method in two classes: where the receiver has the cover image and where the receiver does not. In each class, we consider three cases for each coefficient. By considering n coefficients, there are 3^n different situations. The method embeds ⌊log2 3^n⌋ bits in these n coefficients. We show that the maximum reachable capacity of our method is 58% higher than that of the other general steganography methods. Experimental results show that histogram-based steganalysis methods cannot detect the stego images produced by the proposed method while the capacity is increased significantly.
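The stated capacity follows directly from the counting argument: n ternary coefficients provide 3^n distinguishable states, hence ⌊log2 3^n⌋ = ⌊n·log2 3⌋ ≈ 1.585·n embeddable bits, about 58% more than a one-bit-per-coefficient scheme. A minimal sketch:

```python
import math

def embeddable_bits(n: int) -> int:
    """Bits embeddable in n DCT coefficients when each coefficient can take
    one of three states: floor(log2(3^n)) = floor(n * log2(3))."""
    return math.floor(n * math.log2(3))

def capacity_gain(n: int) -> float:
    """Relative gain over a plain one-bit-per-coefficient scheme."""
    return embeddable_bits(n) / n - 1.0
```

For large n the gain approaches log2(3) - 1 ≈ 0.585, matching the 58% figure quoted in the abstract.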
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets such as those occurring, e.g., in turbulent flows or in certain types of multiscale behaviour in the geosciences is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of multiscale behavior and of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, together with concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Re_τ = 590 (computed by [3]). References [1] I
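A minimal sketch of the TT-SVD construction that the TT decomposition builds on (a generic textbook variant, not the project's implementation; the truncation rule is an illustrative choice):

```python
import numpy as np

def tt_svd(tensor, rel_tol=1e-10):
    """Sketch of TT-SVD: sweep over the modes, at each step an SVD splits off
    one three-index TT core; singular values below rel_tol * s_max are dropped."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > rel_tol * s[0])))
        cores.append(u[:, :keep].reshape(rank, shape[k], keep))
        mat = (s[:keep, None] * vt[:keep]).reshape(keep * shape[k + 1], -1)
        rank = keep
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

For data with low TT ranks, storing the cores instead of the full tensor gives exactly the compact representation the project exploits: storage scales linearly in the number of modes rather than exponentially.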
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of free-surface Green function and Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized, on which the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre function in vertical coordinate and Fourier series in the circumference. Free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation via integrating test functions orthogonal to base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which are only dependent on the radius of the control surface, are present in the external solution, and they are removed by extending the boundary integral equation to the interior free surface (circular disc) on which the null normal derivative of potential is imposed, and the dipole distribution is expressed as Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. The point collocation is imposed over the body surface and free surface, while the collocation of the Galerkin type is applied on the control surface. The present method is valid in the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
Solving the Schroedinger equation using the finite difference time domain method
International Nuclear Information System (INIS)
Sudiarta, I Wayan; Geldart, D J Wallace
2007-01-01
In this paper, we solve the Schroedinger equation using the finite difference time domain (FDTD) method to determine energies and eigenfunctions. In order to apply the FDTD method, the Schroedinger equation is first transformed into a diffusion equation by the imaginary time transformation. The resulting time-domain diffusion equation is then solved numerically by the FDTD method. The theory and an algorithm are provided for the procedure. Numerical results are given for illustrative examples in one, two and three dimensions. It is shown that the FDTD method accurately determines eigenfunctions and energies of these systems
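The imaginary-time procedure can be sketched for a one-dimensional potential as follows (a minimal explicit scheme with hbar = m = 1; the grid, time step and initial guess are illustrative choices):

```python
import numpy as np

def ground_state_1d(v, x, dt=2e-3, n_steps=5000):
    """Imaginary-time FDTD sketch: evolve the diffusion equation
    dpsi/dtau = 0.5 * psi'' - V * psi and renormalise at every step,
    so the wavefunction relaxes onto the ground state of V."""
    dx = x[1] - x[0]
    psi = np.exp(-x**2)                            # arbitrary even initial guess
    for _ in range(n_steps):
        lap = np.zeros_like(psi)
        lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
        psi = psi + dt * (0.5 * lap - v * psi)     # one explicit diffusion step
        psi /= np.sqrt(np.sum(psi**2) * dx)        # keep the norm fixed
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    energy = float(np.sum(psi * (-0.5 * lap + v * psi)) * dx)
    return psi, energy
```

For the harmonic oscillator V(x) = x²/2 the computed energy approaches the exact ground-state value 1/2, up to the discretization error of the grid.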
Wulan, Praswasti P. D. K.; Silaen, Toni Partogi Johannes
2017-05-01
Camphor is a renewable carbon source that can be used as a raw material for synthesizing carbon nanotubes (CNT). Camphor is a substance found on the Cinnamomum camphora tree. In this research, the method used to synthesize aligned carbon nanotubes (ACNT) from camphor is floating catalyst chemical vapor deposition (FC-CVD) with ferrocene as the catalyst at a temperature of 800°C, hydrogen gas as the co-reactant and argon gas as the carrier gas. This is the most popular method for synthesizing oriented, high-density ACNT. Camphor decomposes into benzene, toluene, and xylene at a temperature of 800°C. GC-FID characterization showed that camphor decomposition at 800°C is dominated by benzene, with a concentration of 92.422 to 97.656%. The research was conducted by varying the argon carrier gas flow rate over 40, 55, 70, 85 and 100 mL/min at a temperature of 800°C for 60 minutes of reaction time. An argon carrier gas flow rate of 70 mL/min produced CNT with the highest yield, but not with the best quality. The best-quality CNT was obtained at an argon carrier gas flow rate of 55 mL/min, based on characterization by SEM, EDX, elemental mapping, and Raman spectroscopy. This research has not yet obtained CNT with an aligned structure.
Directory of Open Access Journals (Sweden)
Sangyeong Jeong
2017-10-01
Full Text Available This paper proposes an experimental optimization method for a wireless power transfer (WPT system. The power transfer characteristics of a WPT system with arbitrary loads and various types of coupling and compensation networks can be extracted by frequency domain measurements. The various performance parameters of the WPT system, such as input real/imaginary/apparent power, power factor, efficiency, output power and voltage gain, can be accurately extracted in a frequency domain by a single passive measurement. Subsequently, the design parameters can be efficiently tuned by separating the overall design steps into two parts. The extracted performance parameters of the WPT system were validated with time-domain experiments.
Energy Technology Data Exchange (ETDEWEB)
Lan, Yuanfei; Li, Xiaoyu; Li, Guoping; Luo, Yunjun, E-mail: yjluo@bit.edu.cn [Beijing Institute of Technology, School of Materials Science and Engineering (China)
2015-10-15
Graphene/Fe{sub 2}O{sub 3} (Gr/Fe{sub 2}O{sub 3}) aerogel was synthesized by a simple sol–gel method and supercritical carbon dioxide drying technique. In this study, the morphology and structure were characterized by scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy, X-ray diffraction, and nitrogen sorption tests. The catalytic performance of the as-synthesized Gr/Fe{sub 2}O{sub 3} aerogel on the thermal decomposition of ammonium perchlorate (AP) was investigated by thermogravimetric analysis and differential scanning calorimetry. The experimental results showed that Fe{sub 2}O{sub 3} with particle sizes in the nanometer range was anchored on the Gr sheets and that the Gr/Fe{sub 2}O{sub 3} aerogel exhibited promising catalytic effects for the thermal decomposition of AP. The decomposition temperature of AP was markedly decreased and the total heat release increased as well.
Accessible methods for the dynamic time-scale decomposition of biochemical systems.
Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula
2009-11-01
The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such a procedure should ensure an understanding of the system without any heuristics employed. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. Contact: irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.
Kuks, P. F.; Weekers, L. E.; Goldhoorn, P. B.
1990-01-01
A rapid high-resolution high pressure liquid chromatographic method was developed for assaying pilocarpine. Pilocarpine in ophthalmic solutions decomposes fairly rapidly to give isopilocarpine, pilocarpic acid and isopilocarpic acid. The quality of an ophthalmic solution can be assessed by assaying
A new physics-based method for detecting weak nuclear signals via spectral decomposition
International Nuclear Information System (INIS)
Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei
2012-01-01
We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations of a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB.
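A sketch of a sparse nonnegative Poisson fit of this kind, using EM-style multiplicative updates with an L1 term rather than the authors' exact penalized-likelihood algorithm (the subspectra in the example are synthetic Gaussian peaks):

```python
import numpy as np

def fit_nuclide_weights(X, y, lam=0.1, n_iter=2000):
    """EM-style multiplicative updates for the nonnegative, L1-penalised
    Poisson model  y_i ~ Poisson((X w)_i):  the columns of X are unit-area
    subspectra of candidate gamma lines, w their (mostly zero) intensities.

    Minimises  sum_i [(Xw)_i - y_i log (Xw)_i] + lam * sum_j w_j  over w >= 0."""
    w = np.full(X.shape[1], 1.0)
    for _ in range(n_iter):
        mu = X @ w + 1e-12                  # predicted counts, kept positive
        w *= (X.T @ (y / mu)) / (X.sum(axis=0) + lam)
    return w
```

The multiplicative form preserves nonnegativity automatically, and the lam term in the denominator shrinks the coefficients of absent nuclides toward zero, which is the sparsity behaviour the abstract describes.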
Generalized multiscale finite element methods for problems in perforated heterogeneous domains
Chung, Eric T.
2015-06-08
Complex processes in perforated domains occur in many real-world applications. These problems are typically characterized by physical processes in domains with multiple scales. Moreover, these problems are intrinsically multiscale and their discretizations can yield very large linear or nonlinear systems. In this paper, we investigate multiscale approaches that attempt to solve such problems on a coarse grid by constructing multiscale basis functions in each coarse grid, where the coarse grid can contain many perforations. In particular, we are interested in cases when there is no scale separation and the perforations can have different sizes. In this regard, we mention some earlier pioneering works, where the authors develop multiscale finite element methods. In our paper, we follow Generalized Multiscale Finite Element Method (GMsFEM) and develop a multiscale procedure where we identify multiscale basis functions in each coarse block using snapshot space and local spectral problems. We show that with a few basis functions in each coarse block, one can approximate the solution, where each coarse block can contain many small inclusions. We apply our general concept to (1) Laplace equation in perforated domains; (2) elasticity equation in perforated domains; and (3) Stokes equations in perforated domains. Numerical results are presented for these problems using two types of heterogeneous perforated domains. The analysis of the proposed methods will be presented elsewhere. © 2015 Taylor & Francis
Directory of Open Access Journals (Sweden)
Jing Xu
2015-10-01
Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement and low identification rate. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on the stored historical data is performed first to suppress the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
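The IMF-selection step can be sketched as follows (a simplified stand-in that correlates each IMF with the original signal and uses the average correlation as threshold; the "IMFs" in the example are synthetic components, not the output of an actual EEMD):

```python
import numpy as np

def select_imfs(signal, imfs):
    """Keep the IMFs whose absolute correlation with the original signal
    exceeds the average correlation over all IMFs, discarding noise-like
    components with little relation to the signal."""
    corrs = np.array([abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs])
    threshold = corrs.mean()
    return [imf for imf, c in zip(imfs, corrs) if c >= threshold]
```

Components that carry genuine signal content correlate strongly with the recorded sound, while spurious low-correlation IMFs fall below the average and are dropped before feature extraction.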
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Fernandez, J., E-mail: javier.fernandez@cimav.edu.mx [Centro de Investigacion en Materiales Avanzados, Av. Miguel de Cervantes 120, Complejo Industrial C.P. 31109, Chihuahua, Chih. (Mexico); Instituto Mexicano del Petroleo, Direccion de Investigacion y Posgrado, Eje Central Lazaro Cardenas 152, C.P. 07730, D.F. (Mexico); Zanella, R. [Centro de Ciencias Aplicadas y Desarrollo Tecnologico, UNAM, Circuito exterior S/N, Ciudad Universitaria, C.P. 04510, A.P. 70-186, Delegacion Coyoacan, D.F. (Mexico); Aguilar-Elguezabal, A. [Centro de Investigacion en Materiales Avanzados, Av. Miguel de Cervantes 120, Complejo Industrial C.P. 31109, Chihuahua, Chih. (Mexico); Arizabalo, R.D.; Castillo, S.; Moran-Pineda, M. [Instituto Mexicano del Petroleo, Direccion de Investigacion y Posgrado, Eje Central Lazaro Cardenas 152, C.P. 07730, D.F. (Mexico)
2010-10-25
In the present work, the synthesis, characterization and photoactivity for nitrogen monoxide (NO) decomposition of sol-gel Au/TiO{sub 2} photocatalysts are reported. TiO{sub 2} was prepared by gelling titanium (IV) isopropoxide, and gold nanoparticles were added by the deposition-precipitation method with urea. The catalysts with different gold concentrations were characterized by the following techniques: BET, XRD, UV-vis and dark-field TEM. It was found that this synthesis method yields a high dispersion of gold nanoparticles on TiO{sub 2} (4.4-6.7 nm), and the obtained structure leads to a band gap energy lower than that observed for undoped TiO{sub 2}. A NO + O{sub 2} mixture (150 ppm) was used to evaluate the photocatalytic activity in situ, at room temperature and under atmospheric pressure, with a UV lamp as the radiation source. The photocatalytic conversion of nitrogen monoxide (NO), followed by FTIR, reached 96% in 60 min. The Au/TiO{sub 2} materials showed an enhanced photocatalytic activity when compared with the reference TiO{sub 2}.
An object-oriented decomposition of the adaptive-hp finite element method
Energy Technology Data Exchange (ETDEWEB)
Wiley, J.C.
1994-12-13
Adaptive-hp methods are those which use a refinement control strategy, driven by a local error estimate, to locally modify the element size, h, and polynomial order, p. The result is an unstructured mesh in which each node may be associated with a different polynomial order and which generally requires complex data structures to implement. Object-oriented design strategies, and languages which support them, e.g., C++, help control the complexity of these methods. Here an overview of the major classes and class structure of an adaptive-hp finite element code is described. The essential finite element structure is described in terms of four areas of computation, each with its own dynamic characteristics. Implications of converting the code for a distributed-memory parallel environment are also discussed.
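A toy sketch of the kind of class decomposition such a code uses (illustrative names and refinement rules, not the structure of the actual C++ code described in the report):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Element:
    """One finite element carrying its own size h and polynomial order p,
    as in an adaptive-hp mesh where both may differ between elements."""
    h: float
    p: int
    error: float = 0.0

class AdaptiveMesh:
    """Toy refinement controller: elements whose local error estimate exceeds
    the tolerance are refined; where the solution is locally smooth the order
    is raised (p-refinement), otherwise the element is split (h-refinement)."""
    def __init__(self, elements: List[Element], tol: float):
        self.elements = elements
        self.tol = tol

    def refine(self, smooth: Callable[[Element], bool]) -> None:
        new_elements: List[Element] = []
        for e in self.elements:
            if e.error <= self.tol:
                new_elements.append(e)              # accurate enough: keep
            elif smooth(e):
                new_elements.append(Element(e.h, e.p + 1, e.error / 2))
            else:
                new_elements.append(Element(e.h / 2, e.p, e.error / 2))
                new_elements.append(Element(e.h / 2, e.p, e.error / 2))
        self.elements = new_elements
```

Encapsulating h, p, and the error estimate per element is what lets the unstructured, mixed-order mesh stay manageable, which is the point the report makes about object-oriented design.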
Directory of Open Access Journals (Sweden)
Yesylevskyy S. O.
2010-04-01
Full Text Available Aim. Despite a large number of existing domain identification techniques, there is no universally accepted method that identifies the hierarchy of dynamic domains using data from molecular dynamics (MD) simulations. The goal of this work is to develop such a technique. Methods. The dynamic domains are identified by recursively eliminating systematic motions from MD trajectories in a model-free manner. Results. A technique called Hierarchical Domain-Wise Alignment (HDWA), which identifies hierarchically organized dynamic domains in proteins from MD trajectories, has been developed. Conclusion. A new method of domain identification in proteins is proposed.
Directory of Open Access Journals (Sweden)
Guimarães Katia S
2006-04-01
Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such representation. We illustrate the effectiveness of our method by applying it to TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.
OpenPSTD : The open source implementation of the pseudospectral time-domain method
Krijnen, T.; Hornikx, M.C.J.; Borkowski, B.
2014-01-01
An open source implementation of the pseudospectral time-domain method for the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory
OpenPSTD : The open source pseudospectral time-domain method for acoustic propagation
Hornikx, M.C.J.; Krijnen, T.F.; van Harten, L.
2016-01-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory
Energy Technology Data Exchange (ETDEWEB)
Gao Ruorui [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Zhang Yue, E-mail: yue-zhang@mail.hust.edu.cn [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Department of Electric Science and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); Yu Wei [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Xiong Rui [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Key Laboratory for the Green Preparation and Application of Functional Materials of Ministry of Education, Hubei University, Wuhan 430062 (China); Shi Jing [Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Key Laboratory for the Green Preparation and Application of Functional Materials of Ministry of Education, Hubei University, Wuhan 430062 (China); International Center for Material Physics, Shen Yang 110015 (China)
2012-08-15
MnFe₂O₄ nanoparticles with an average size of about 7 nm were synthesized by the thermal decomposition method. From the magnetic hysteresis loops measured at different temperatures, the temperature-dependent saturation magnetization (M_S) and coercivity (H_C) are determined. It is shown that above 20 K the temperature dependence of M_S and H_C indicates single-domain magnetic behavior of the nanoparticles, while below 20 K the change of M_S and H_C indicates freezing of the spin-glass-like state on the particle surfaces. Superparamagnetic behavior is also studied by measuring magnetization-temperature (M-T) curves under zero-field-cooling (ZFC) and field-cooling procedures at different applied fields. Even though peaks can be observed below 160 K in the ZFC M-T curves, superparamagnetism does not appear until the temperature rises above 300 K, which is related to the strong inter-particle interaction. - Highlights: • MnFe₂O₄ nanoparticles with a size of 7 nm were prepared. • The surface spin-glass-like state is frozen below 20 K. • The peaks in ZFC magnetization-temperature curves are observed below 160 K. • The inter-particle interaction inhibits superparamagnetism at room temperature.
Directory of Open Access Journals (Sweden)
Kyunam Kim
2017-10-01
Full Text Available Recently, several studies using various analysis methods have tried to evaluate factors affecting knowledge creation activity, but few analyses quantitatively account for the impact of economic determinants on them. This paper introduces a non-parametric method to structurally analyze changes in information and communication technology (ICT) patenting trends, as representative outcomes of knowledge creation activity, together with economic indicators. For this, the authors established a symmetric model that decomposes several economic contributors from the perspective of ICT research and development (R&D) performance, industrial change, and overall manufacturing growth. Additionally, an empirical analysis of selected countries from 2001 to 2009 was conducted with this model. This paper found that all countries except the United States experienced an increase of 10.5–267.4% in ICT patent applications, despite fluctuations in the time series. Interestingly, the changes in ICT patenting of each country generally have a negative relationship with the intensity of each country's patent protection system. Positive determinants include ICT R&D productivity and overall manufacturing growth, while ICT industrial change is a negative determinant in almost all countries. This paper emphasizes that each country needs to design strategic plans for effective ICT innovation. In particular, ICT innovation activities need to be promoted by increasing ICT R&D investment and developing the ICT industry, since ICT R&D intensity and ICT industrial change generally have a low contribution to ICT patenting.
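The multiplicative decomposition described in this abstract can be sketched as follows. The factor names here (R&D productivity, R&D intensity, industrial share, manufacturing scale) are illustrative assumptions, since the abstract does not spell out the exact model:

```python
def decompose_change(base, comparison):
    """Hedged sketch of a multiplicative decomposition: write patents P as
    (P/R) * (R/I) * (I/M) * M, i.e. R&D productivity, R&D intensity of the
    ICT industry, ICT industrial share, and manufacturing scale (hypothetical
    factor names; the paper's model may define its factors differently).
    The change ratio P1/P0 then factors into one ratio per component."""
    def factors(d):
        P, R, I, M = d["patents"], d["rnd"], d["industry"], d["manufacturing"]
        return [P / R, R / I, I / M, M]
    f0, f1 = factors(base), factors(comparison)
    return [b / a for a, b in zip(f0, f1)]

base = {"patents": 100.0, "rnd": 50.0, "industry": 200.0, "manufacturing": 1000.0}
comp = {"patents": 150.0, "rnd": 60.0, "industry": 220.0, "manufacturing": 1100.0}
ratios = decompose_change(base, comp)
```

By construction the product of the four factor ratios reproduces the overall change in patent counts, which is the defining identity of this kind of decomposition.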
Perfectly Matched Layer for the Wave Equation Finite Difference Time Domain Method
Miyazaki, Yutaka; Tsuchiya, Takao
2012-07-01
The perfectly matched layer (PML) is introduced into the wave equation finite difference time domain (WE-FDTD) method. The WE-FDTD method is a finite difference method in which the wave equation is directly discretized on the basis of the central differences. The required memory of the WE-FDTD method is less than that of the standard FDTD method because no particle velocity is stored in the memory. In this study, the WE-FDTD method is first combined with the standard FDTD method. Then, Berenger's PML is combined with the WE-FDTD method. Some numerical demonstrations are given for the two- and three-dimensional sound fields.
Directory of Open Access Journals (Sweden)
Tsugio Fukuchi
2014-06-01
Full Text Available The finite difference method (FDM) based on Cartesian coordinate systems can be applied to numerical analyses over any complex domain. A complex domain is usually taken to mean that the geometry of an immersed body in a fluid is complex; here, it means simply an analytical domain of arbitrary configuration. In such an approach, we do not need to treat the outer and inner boundaries differently in numerical calculations; both are treated in the same way. Using a method that adopts algebraic polynomial interpolations in the calculation around near-wall elements, all the calculations over irregular domains reduce to those over regular domains. Discretization of the space differential in the FDM is usually derived using the Taylor series expansion; however, if we use the polynomial interpolation systematically, exceptional advantages are gained in deriving high-order differences. Using the polynomial interpolations, we can numerically solve the Poisson equation freely over any complex domain. Only a particular type of partial differential equation, the Poisson equation, is treated; however, the arguments put forward have wider generality in numerical calculations using the FDM.
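A minimal sketch of solving the Poisson equation with the FDM on a regular Cartesian grid: a plain second-order 5-point stencil with Jacobi iteration and zero Dirichlet boundaries. The paper's polynomial-interpolation treatment of irregular near-wall cells is not reproduced here:

```python
import numpy as np

def solve_poisson(f, h=1.0, iters=5000):
    """Jacobi iteration for -(u_xx + u_yy) = f with u = 0 on the boundary.
    Each sweep replaces an interior value by the average of its four
    neighbours plus the scaled source term (5-point stencil)."""
    u = np.zeros_like(f, dtype=float)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                + h * h * f[1:-1, 1:-1])
    return u

f = np.ones((20, 20))
u = solve_poisson(f, h=1.0, iters=2000)
```

Because the right-hand side of the update is evaluated before assignment, this is a true Jacobi sweep; swapping in Gauss-Seidel or SOR would only change the iteration count, not the converged solution.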
Fabrication of porous aluminium with directional pores through thermal decomposition method
International Nuclear Information System (INIS)
Nakajima, H; Kim, S Y; Park, J S
2009-01-01
Lotus-type porous metals were fabricated by unidirectional solidification in a pressurized gas atmosphere. The elongated pores evolve from insoluble gas resulting from the solubility gap between liquid and solid when the melt is solidified. Recently we developed a novel fabrication technique in which gas compounds are used as the source of dissolved gas instead of high pressure. In the present work this gas-compound method was applied to the fabrication of lotus aluminium. Hydrogen decomposed from calcium hydroxide, sodium bicarbonate and titanium hydride evolves cylindrical pores in aluminium. The porosity is about 20%. The pore size decreases and the pore number density increases with increasing amount of calcium hydroxide, which is explained by an increase in pore nucleation sites.
A New Method for Determining Structure Ensemble: Application to a RNA Binding Di-Domain Protein.
Liu, Wei; Zhang, Jingfeng; Fan, Jing-Song; Tria, Giancarlo; Grüber, Gerhard; Yang, Daiwen
2016-05-10
Structure ensemble determination is the basis of understanding the structure-function relationship of a multidomain protein with weak domain-domain interactions. Paramagnetic relaxation enhancement has been proven a powerful tool in the study of structure ensembles, but there exist a number of challenges such as spin-label flexibility, domain dynamics, and overfitting. Here we propose a new (to our knowledge) method to describe structure ensembles using a minimal number of conformers. In this method, individual domains are considered rigid; the position of each spin-label conformer and the structure of each protein conformer are defined by three and six orthogonal parameters, respectively. First, the spin-label ensemble is determined by optimizing the positions and populations of spin-label conformers against intradomain paramagnetic relaxation enhancements with a genetic algorithm. Subsequently, the protein structure ensemble is optimized using a more efficient genetic algorithm-based approach and an overfitting indicator, both of which were established in this work. The method was validated using a reference ensemble with a set of conformers whose populations and structures are known. This method was also applied to study the structure ensemble of the tandem di-domain of a poly (U) binding protein. The determined ensemble was supported by small-angle x-ray scattering and nuclear magnetic resonance relaxation data. The ensemble obtained suggests an induced fit mechanism for recognition of target RNA by the protein. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
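The genetic-algorithm optimization step mentioned above can be illustrated with a toy real-coded GA. This is a generic sketch that recovers a known parameter vector from a squared-error score, not the authors' PRE-fitting procedure; population size, operators and bounds are all illustrative assumptions:

```python
import random

def genetic_minimize(fitness, n_params, pop_size=40, generations=60,
                     mutation=0.1, lo=-1.0, hi=1.0, seed=0):
    """Toy real-coded GA: keep the best half as elites, breed children by
    uniform crossover between two elites, then apply Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)          # ascending: minimize
        elite = scored[: pop_size // 2]            # elites survive unmutated
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(hi, max(lo, x + rng.gauss(0, mutation))) for x in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# e.g. recover a hypothetical "conformer position" from a squared-error score
target = (0.3, -0.2, 0.5)
best = genetic_minimize(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)), 3)
```

Keeping elites unmutated guarantees the best score never worsens between generations, which is the simplest way to make such a search monotone.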
DEFF Research Database (Denmark)
Hesthaven, Jan
1997-01-01
This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given in conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...
Directory of Open Access Journals (Sweden)
Hongtao Gao
2012-01-01
Full Text Available N, Cd-codoped TiO2 has been synthesized by the thermal decomposition method. The products were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-visible diffuse reflectance spectroscopy (DRS), X-ray photoelectron spectroscopy (XPS), and Brunauer-Emmett-Teller (BET) specific surface area analysis, respectively. The products showed good performance in the photocatalytic degradation of methyl orange. The effect of the incorporation of N and Cd on the electronic structure and optical properties of TiO2 was studied by first-principles calculations on the basis of density functional theory (DFT). The impurity states, introduced by N 2p or Cd 5d, lay between the valence band and the conduction band. Due to the dopants, the band gap of N, Cd-codoped TiO2 narrowed, and the electronic transition from the valence band to the conduction band became easier, which could account for the observed photocatalytic performance of N, Cd-codoped TiO2. The theoretical analysis may provide a useful reference for the synthesis of element-doped TiO2.
International Nuclear Information System (INIS)
Toraya, H.; Tusaka, S.
1995-01-01
A new procedure for quantitative phase analysis using the whole-powder-pattern decomposition method is proposed. The procedure consists of two steps. In the first step, the whole powder patterns of single-component materials are decomposed separately. The refined parameters of integrated intensity, unit cell and profile shape for the respective phases are stored in computer data files. In the second step, the whole powder pattern of a mixture sample is fitted, where the parameters refined in the previous step are used to calculate the profile intensity. The integrated intensity parameters are, however, not varied during the least-squares fitting; instead, the scale factors for the profile intensities of the individual phases are adjusted. Weight fractions are obtained by solving simultaneous equations whose coefficients include the scale factors and the mass-absorption coefficients calculated from the chemical formulas of the respective phases. The procedure can be applied to all mixture samples, including those containing an amorphous material, if single-component samples with known chemical compositions and their approximate unit-cell parameters are provided. The procedure has been tested using two- to five-component samples, giving average deviations of 1 to 1.5%. Optimum refinement conditions are discussed in connection with the accuracy of the procedure. (orig.)
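A hedged sketch of the final weight-fraction step: here we assume fractions proportional to the product of each phase's refined scale factor and mass-absorption coefficient, normalized to one. The paper's actual simultaneous equations may use different coefficients, so this is illustrative only:

```python
def weight_fractions(scale_factors, mass_absorption):
    """Hypothetical form of the weight-fraction step: fraction of phase i
    proportional to (scale factor_i * mass-absorption coefficient_i),
    normalized so the fractions sum to one."""
    raw = [s * mu for s, mu in zip(scale_factors, mass_absorption)]
    total = sum(raw)
    return [r / total for r in raw]

# two phases whose raw weights happen to be equal (2*0.5 == 1*1.0)
w = weight_fractions([2.0, 1.0], [0.5, 1.0])
```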
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations like index reduction before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, first, it avoids complex transformations like index reductions and leads to a simple general algorithm. Second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
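The flavor of the ADM recursion can be shown on a deliberately simple linear IVP, y' = y with y(0) = 1, where each correction term is the integral of the previous one and the partial sums build the Taylor series of exp(t). This stand-in is far simpler than the paper's Hessenberg DAE systems:

```python
def integrate_poly(coeffs):
    # antiderivative of sum(c_k * t**k) with zero constant of integration
    return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def adm_solve(n_terms=12):
    """ADM for y' = y, y(0) = 1: y_0 = 1 comes from the initial condition,
    and each correction is y_{k+1}(t) = integral from 0 to t of y_k(s) ds."""
    terms = [[1.0]]                      # y_0 = 1
    for _ in range(n_terms - 1):
        terms.append(integrate_poly(terms[-1]))
    solution = [0.0] * n_terms           # sum the series into one polynomial
    for term in terms:
        for k, c in enumerate(term):
            solution[k] += c
    return solution                      # coefficients of 1 + t + t^2/2! + ...

def eval_poly(coeffs, t):
    return sum(c * t ** k for k, c in enumerate(coeffs))
```

With 12 terms the partial sum at t = 1 already matches e to about nine digits, illustrating why the ADM series converges quickly for mildly nonlinear problems.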
Directory of Open Access Journals (Sweden)
Can Chen
2013-01-01
Full Text Available Straw retention has been shown to reduce carbon dioxide (CO2) emission from agricultural soils, but it remains a big challenge for models to effectively predict CO2 emission fluxes under different straw retention methods. We used maize season data from the Griffith region, Australia, to test whether the denitrification-decomposition (DNDC) model could simulate annual CO2 emission. We also identified driving factors of CO2 emission by correlation analysis and path analysis. We show that the DNDC model was able to simulate CO2 emission under alternative straw retention scenarios. The correlation coefficients between simulated and observed daily values for the straw burn and straw incorporation treatments were 0.74 and 0.82, respectively, in the straw retention period and 0.72 and 0.83, respectively, in the crop growth period. The results also show that simulated values of annual CO2 emission for straw burn and straw incorporation were 3.45 t C ha−1 y−1 and 2.13 t C ha−1 y−1, respectively. In addition, the DNDC model was found to be more suitable for simulating CO2 emission fluxes under straw incorporation. Finally, a standard multiple regression describing the relationship between CO2 emissions and the candidate factors found that soil mean temperature (SMT), daily mean temperature (Tmean), and water-filled pore space (WFPS) were significant.
Talagala, P.; Marko, X.; Padmanabhan, K. R.; Naik, R.; Rodak, D.; Cheng, Y. T.
2006-03-01
We have synthesized pure and transition-element (Fe, Co and V) doped titanium oxide thin films of thickness ~350 nm on sapphire, Si, and stainless steel substrates by the metalorganic decomposition (MOD) method. The films were subsequently annealed at appropriate temperatures (500-750 °C) to obtain either the anatase or the rutile phase of TiO2. Analysis of the composition of the films was performed by energy-dispersive X-ray spectroscopy (EDAX) and Rutherford backscattering spectrometry (RBS). Ion channeling was used to identify possible epitaxial growth of the films on sapphire. Both XRD and Raman spectra show that the films annealed at 550 °C are of the anatase phase, while those annealed at 700 °C seem to prefer the rutile structure. Water contact angle measurements of the films before and after photoactivation demonstrate a significant reduction in the contact angle for the anatase phase. However, variation in contact angle was observed between films exposed to UV (<10°-30°) and kept in the dark (25°-50°). Films doped with Fe show a trend towards lower contact angles than those doped with Co. Results for films doped with V will also be included.
A time domain inverse dynamic method for the end point tracking control of a flexible manipulator
Kwon, Dong-Soo; Book, Wayne J.
1991-01-01
The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.
Texture of lipid bilayer domains
DEFF Research Database (Denmark)
Jensen, Uffe Bernchou; Brewer, Jonathan R.; Midtiby, Henrik Skov
2009-01-01
We investigate the texture of gel (g) domains in binary lipid membranes composed of the phospholipids DPPC and DOPC. Lateral organization of lipid bilayer membranes is a topic of fundamental and biological importance. Whereas questions related to the size and composition of fluid membrane domains are well studied, the possibility of texture in gel domains has so far not been examined. When using polarized light for two-photon excitation of the fluorescent lipid probe Laurdan, the emission intensity is highly sensitive to the angle between the polarization and the tilt orientation of the lipid acyl chains. By imaging the intensity variations as a function of the polarization angle, we map the lateral variations of the lipid tilt within domains. Results reveal that gel domains are composed of subdomains with different lipid tilt directions. We have applied a Fourier decomposition method...
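The Fourier decomposition step can be sketched as projecting the measured intensity onto the second harmonic of the polarization angle and reading the tilt direction from its phase. The order-2 assumption (a pi-periodic intensity response) is ours for this illustration, not a detail stated in the abstract:

```python
import numpy as np

def tilt_from_polarization(angles, intensities, order=2):
    """Project I(theta) onto cos/sin of (order * theta) and recover the
    dominant tilt direction from the phase of that Fourier component.
    Assumes uniformly spaced angles covering a full period."""
    a = np.mean(intensities * np.cos(order * angles))
    b = np.mean(intensities * np.sin(order * angles))
    return (0.5 * np.arctan2(b, a)) % np.pi

# synthetic pixel: cos^2 response with a tilt direction of 0.7 rad
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
intensities = np.cos(angles - 0.7) ** 2
phi = tilt_from_polarization(angles, intensities)
```

Applying this per pixel over a polarization-angle image stack yields a tilt-direction map, which is the kind of output needed to reveal subdomains with different tilt.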
Mainali, Laxman; Camenisch, Theodore G; Hyde, James S; Subczynski, Witold K
2017-12-01
The presence of integral membrane proteins induces the formation of distinct domains in the lipid bilayer portion of biological membranes. Qualitative application of both continuous wave (CW) and saturation recovery (SR) electron paramagnetic resonance (EPR) spin-labeling methods allowed discrimination of the bulk, boundary, and trapped lipid domains. A recently developed method, which is based on the CW EPR spectra of phospholipid (PL) and cholesterol (Chol) analog spin labels, allows evaluation of the relative amount of PLs (% of total PLs) in the boundary plus trapped lipid domain and the relative amount of Chol (% of total Chol) in the trapped lipid domain [M. Raguz, L. Mainali, W. J. O'Brien, and W. K. Subczynski (2015), Exp. Eye Res., 140:179-186]. Here, a new method is presented that, based on SR EPR spin-labeling, allows quantitative evaluation of the relative amounts of PLs and Chol in the trapped lipid domain of intact membranes. This new method complements the existing one, allowing acquisition of more detailed information about the distribution of lipids between domains in intact membranes. The methodological transition of the SR EPR spin-labeling approach from qualitative to quantitative is demonstrated. The abilities of this method are illustrated for intact cortical and nuclear fiber cell plasma membranes from porcine eye lenses. Statistical analysis (Student's t-test) of the data allowed determination of the separations of mean values above which differences can be treated as statistically significant (P ≤ 0.05) and can be attributed to sources other than preparation/technique.
Application of Pareto optimization method for ontology matching in nuclear reactor domain
International Nuclear Information System (INIS)
Meenachi, N. Madurai; Baba, M. Sai
2017-01-01
This article describes the need for ontology matching and the methods to achieve it. Efforts were put into the implementation of a semantic-web-based knowledge management system for the nuclear domain, which necessitated the development of ontology matching methods. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as Jaro-Winkler distance, the Needleman-Wunsch algorithm, bigram matching, Kullback-Leibler divergence and cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyze ontology matching across the diversity of the nuclear reactor domain, and the same is illustrated.
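One of the simple lexical measures listed, bigram matching, can be sketched as a Dice coefficient over character bigrams of two ontology labels (a standard formulation; the article may use a variant):

```python
def bigrams(s):
    """Set of adjacent character pairs of a lowercased string."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bigram_similarity(a, b):
    """Dice coefficient over character bigrams: 2|A∩B| / (|A| + |B|),
    giving 1.0 for identical labels and 0.0 for labels sharing no bigram."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2.0 * len(ba & bb) / (len(ba) + len(bb))
```

Such character-level scores are cheap to compute over all label pairs, which is why they often serve as one objective in multi-objective (e.g. Pareto-based) matchers alongside structural measures.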
A three-dimensional polarization domain retrieval method from electron diffraction data
International Nuclear Information System (INIS)
Pennington, Robert S.; Koch, Christoph T.
2015-01-01
We present an algorithm for retrieving three-dimensional domains of picometer-scale shifts in atomic positions from electron diffraction data, and apply it to simulations of ferroelectric polarization in BaTiO3. Our algorithm successfully and correctly retrieves polarization domains in which the Ti atom positions differ by less than 3 pm (0.4% of the unit cell diagonal distance) with 5 and 10 nm depth resolution along the beam direction, and we also retrieve unit cell strain, corresponding to tetragonal-to-cubic unit cell distortions, for 10 nm domains. Experimental applicability is also discussed. - Highlights: • We show a retrieval method for ferroelectric polarization from TEM diffraction data. • Simulated strain and polarization variations along the beam direction are retrieved. • This method can be used for 3D strain and polarization mapping without specimen tilt
Stable multi-domain spectral penalty methods for fractional partial differential equations
Xu, Qinwu; Hesthaven, Jan S.
2014-01-01
We propose stable multi-domain spectral penalty methods suitable for solving fractional partial differential equations with fractional derivatives of any order. First, a high order discretization is proposed to approximate fractional derivatives of any order on any given grids based on orthogonal polynomials. The approximation order is analyzed and verified through numerical examples. Based on the discrete fractional derivative, we introduce stable multi-domain spectral penalty methods for solving fractional advection and diffusion equations. The equations are discretized in each sub-domain separately and the global schemes are obtained by weakly imposing boundary and interface conditions through a penalty term. Stability of the schemes is analyzed, and numerical examples based on both uniform and nonuniform grids are considered to highlight the flexibility and high accuracy of the proposed schemes.
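For context, the classic first-order Grünwald-Letnikov approximation of a fractional derivative on a uniform grid can be sketched as below; the paper's orthogonal-polynomial discretization is higher order and works on arbitrary grids, so this is background rather than the proposed scheme:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), generated by the
    recurrence w_k = w_{k-1} * (k - 1 - alpha) / k, starting from w_0 = 1."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_fractional_derivative(f_vals, alpha, h):
    """First-order GL approximation of the alpha-order derivative of samples
    f_vals on a uniform grid with spacing h:
    D^alpha f(t_j) ~ h^(-alpha) * sum_k w_k * f(t_{j-k})."""
    n = len(f_vals)
    w = gl_weights(alpha, n)
    return [sum(w[k] * f_vals[j - k] for k in range(j + 1)) / h ** alpha
            for j in range(n)]
```

For alpha = 1 the weights collapse to {1, -1, 0, ...}, so the formula reduces to the familiar backward difference, a handy sanity check on any fractional-derivative code.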
Application of Pareto optimization method for ontology matching in nuclear reactor domain
Energy Technology Data Exchange (ETDEWEB)
Meenachi, N. Madurai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Planning and Human Resource Management Div.; Baba, M. Sai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Resources Management Group
2017-12-15
This article describes the need for ontology matching and the methods to achieve it. Efforts were put into the implementation of a semantic-web-based knowledge management system for the nuclear domain, which necessitated the development of ontology matching methods. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching ontologies are also discussed. A Pareto-based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms such as Jaro-Winkler distance, the Needleman-Wunsch algorithm, bigram matching, Kullback-Leibler divergence and cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyze ontology matching across the diversity of the nuclear reactor domain, and the same is illustrated.
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Full Text Available Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the sense of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented herein. Comparisons have been made between the results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data, and good agreement has been observed for all studied cases.
Zhang, Zhendong
2017-07-11
Full waveform inversion for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge to what we refer to as local minima of the objective function. In our approach, we consider mild lateral variation in the model and, thus, use a gradient given by the oriented time-domain imaging method. Specifically, we apply oriented time-domain imaging to the data residual to obtain the geometrical features of the velocity perturbation. After updating the model in the time domain, we convert the perturbation from the time domain to depth using the average velocity. Considering density to be constant, we can expand the conventional 1D impedance inversion method to 2D or 3D velocity inversion within the process of full waveform inversion. This method is not only capable of inverting for velocity, but is also capable of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, we utilize what we consider to be an optimal parametrization for this step. To do so, we extend the prestack time-domain migration image in the incident-angle dimension to incorporate the angular dependence needed by the multiparameter inversion. For simple models, this approach provides an efficient and stable way to do full waveform inversion or modified seismic inversion and makes anisotropic inversion more practicable. The proposed method still needs kinematically accurate initial models since it only recovers the high-wavenumber part, as the conventional full waveform inversion method does. Results on synthetic data for isotropic and anisotropic cases illustrate the benefits and limitations of this method.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution
Pagan Munoz, R.; Hornikx, M.C.J.
The wave-based Fourier Pseudospectral time-domain (Fourier-PSTD) method was shown to be an effective way of modeling outdoor acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly
The finite-difference time-domain method for electromagnetics with Matlab simulations
Elsherbeni, Atef Z
2016-01-01
This book introduces the powerful Finite-Difference Time-Domain method to students and interested researchers and readers. An effective introduction is accomplished using a step-by-step process that builds competence and confidence in developing complete working codes for the design and analysis of various antennas and microwave devices.
Open-geometry Fourier modal method: modeling nanophotonic structures in infinite domains
DEFF Research Database (Denmark)
Häyrynen, Teppo; de Lasson, Jakob Rosenkrantz; Gregersen, Niels
2016-01-01
We present an open-geometry Fourier modal method based on a new combination of open boundary conditions and an efficient k-space discretization. The open boundary of the computational domain is obtained using basis functions that expand the whole space, and the integrals subsequently appearing due...
Modeling of Nanophotonic Resonators with the Finite-Difference Frequency-Domain Method
DEFF Research Database (Denmark)
Ivinskaya, Aliaksandra; Lavrinenko, Andrei; Shyroki, Dzmitry
2011-01-01
Finite-difference frequency-domain method with perfectly matched layers and free-space squeezing is applied to model open photonic resonators of arbitrary morphology in three dimensions. Treating each spatial dimension independently, nonuniform mesh of continuously varying density can be built ea...
Nonlinear system identification NARMAX methods in the time, frequency, and spatio-temporal domains
Billings, Stephen A
2013-01-01
Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains describes a comprehensive framework for the identification and analysis of nonlinear dynamic systems in the time, frequency, and spatio-temporal domains. This book is written with an emphasis on making the algorithms accessible so that they can be applied and used in practice. Includes coverage of: the NARMAX (nonlinear autoregressive moving average with exogenous inputs) model; the orthogonal least squares algorithm that allows models to be built term by
Soil-structure interaction analysis of NPP containments: substructure and frequency domain methods
International Nuclear Information System (INIS)
Venancio-Filho, F.; Almeida, M.C.F.; Ferreira, W.G.; De Barros, F.C.P.
1997-01-01
Substructure and frequency domain methods for soil-structure interaction are addressed in this paper. After a brief description of mathematical models for the soil and of excitation, the equations for dynamic soil-structure interaction are developed for a rigid surface foundation and for an embedded foundation. The equations for the frequency domain analysis of MDOF systems are provided. An example of soil-structure interaction analysis with frequency-dependent soil properties is given and examples of identification of foundation impedance functions and soil properties are presented. (orig.)
Onishi, Yuya; Nakamura, Toshihiro; Adachi, Sadao
2017-02-01
Tb3Al5O12:Ce3+ garnet (TAG:Ce3+) phosphor was synthesized by the metal organic decomposition (MOD) method and subsequent calcination at Tc = 800-1200°C for 1 h in air. The effects of Ce3+ concentration on the phosphor properties were investigated in detail using X-ray diffraction (XRD) analysis, photoluminescence (PL) analysis, PL excitation (PLE) spectroscopy, and PL decay measurements. The maximum intensity of the Ce3+ yellow emission was observed at a Ce3+ concentration of ∼0.20%. PLE and PL decay measurements suggested evidence of energy transfer from Tb3+ to Ce3+. The calcination temperature dependence of the XRD and PL intensities yielded an energy of ∼1.5 eV both for TAG formation in the MOD process and for the optical activation of Ce3+ in its lattice sites. Temperature dependences of the PL intensity for the TAG:Ce3+ yellow-emitting and K2SiF6:Mn4+ red-emitting phosphors were also examined for future solid-state lighting applications at T = 20-500 K in 10-K steps. The TAG:Ce3+ data were analyzed using a theoretical model considering a reservoir level of Et ∼9 meV, yielding a quenching energy of Eq ∼0.35 eV, whereas the K2SiF6:Mn4+ red-emitting phosphor data yielded a value of Eq ∼1.0 eV. Schematic energy-level diagrams for Tb3+ and Ce3+ are proposed for a better understanding of these ions in the TAG host.
International Nuclear Information System (INIS)
Khorasani, A.; Mousavi Shalmani, M. A.; Piervali Bieranvand, N.
2011-01-01
Accuracy, precision, speed, and ease of use, as well as the ability to measure at depth, are desirable characteristics of soil moisture measurement methods. To compare two methods (time domain reflectometry and capacitance) with neutron scattering for soil water monitoring, an experiment was carried out in a randomized complete block design (split-split plot) on tomato with three replications on the experimental field of the International Atomic Energy Agency (Seibersdorf, Austria). The treatments consisted of the soil moisture monitoring instruments (main factor): neutron gauge, Diviner 2000, time domain reflectometer and EnviroScan; different irrigation systems (first sub-factor): trickle and furrow irrigation; and different soil depths (second sub-factor): 0-20, 20-40 and 40-60 cm. The results showed that for the neutron gauge and time domain reflectometer the measured soil moisture was the same in both trickle and furrow irrigation, but significant differences were recorded in the Diviner 2000 and EnviroScan measurements. This study showed that the neutron gauge is an acceptable and reliable instrument, with a precision of ±2 mm in 450 mm soil water to a depth of 1.5 meters, and can be considered the most practical method for measuring soil moisture profiles and planning irrigation programs. The time domain reflectometer method performs well for soil moisture and electrical conductivity measurements in most mineral soils, without the need for calibration, with an accuracy of ±0.01 m³ m⁻³. The Diviner 2000 and EnviroScan are not well suited to the above conditions for several reasons, including much higher measured soil moisture, large measurement errors, and sensitivity to soil gaps and to small changes in soil moisture, in comparison with the neutron gauge and time domain reflectometer methods.
International Nuclear Information System (INIS)
Hojjati, M.H.; Jafari, S.
2008-01-01
In this work, two powerful analytical methods, namely the homotopy perturbation method (HPM) and Adomian's decomposition method (ADM), are introduced to obtain distributions of stresses and displacements in rotating annular elastic disks with uniform and variable thicknesses and densities. The results obtained by these methods are then compared with the verified variational iteration method (VIM) solution. He's homotopy perturbation method, which does not require a 'small parameter', has been used, and a homotopy with an embedding parameter p ∈ [0, 1] is constructed. The method takes full advantage of the traditional perturbation methods and the homotopy techniques and yields very rapid convergence of the solution. Adomian's decomposition method is an iterative method which provides analytical approximate solutions in the form of an infinite power series for nonlinear equations without linearization, perturbation or discretization. The variational iteration method, on the other hand, is based on the incorporation of a general Lagrange multiplier in the construction of a correction functional for the equation. This study demonstrates the ability of these methods to solve complicated rotating disk cases with either no exact solution or one that is difficult to find, without the need for commercial finite element analysis software. The comparison among these methods shows that although the numerical results are almost the same, HPM is much easier, more convenient and more efficient than ADM and VIM.
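As a minimal illustration of how ADM builds a series solution (a toy sketch, not the rotating-disk equations of the abstract above; all names are illustrative), consider the linear initial-value problem u′ = u, u(0) = 1. For a linear right-hand side the Adomian polynomials reduce to the components themselves, so each component is simply the integral of the previous one, and the partial sums converge to e^t:

```python
# Sketch: Adomian-style decomposition for the linear IVP u' = u, u(0) = 1.
# For a linear right-hand side the Adomian polynomials reduce to the series
# terms themselves, so each component is the integral of the previous one.
import math

def adomian_components(n_terms, t):
    """Return the first n_terms components u_k(t) of the series solution."""
    components = []
    u_k = 1.0                      # u_0(t) = u(0) = 1
    for k in range(n_terms):
        components.append(u_k)
        u_k = u_k * t / (k + 1)    # u_{k+1}(t) = integral_0^t u_k ds = t^(k+1)/(k+1)!
    return components

def adomian_sum(n_terms, t):
    """Partial sum of the Adomian series at time t."""
    return sum(adomian_components(n_terms, t))

# The partial sums converge rapidly to the exact solution e^t.
approx = adomian_sum(15, 1.0)
```

The same recursive structure carries over to nonlinear problems, except that the nonlinearity is expanded in Adomian polynomials rather than reused verbatim.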
A Cartesian Grid Embedded Boundary Method for Poisson's Equation on Irregular Domains
Johansen, Hans; Colella, Phillip
1998-11-01
We present a numerical method for solving Poisson's equation, with variable coefficients and Dirichlet boundary conditions, on two-dimensional regions. The approach uses a finite-volume discretization, which embeds the domain in a regular Cartesian grid. We treat the solution as a cell-centered quantity, even when those centers are outside the domain. Cells that contain a portion of the domain boundary use conservative differencing of second-order accurate fluxes on each cell volume. The calculation of the boundary flux ensures that the conditioning of the matrix is relatively unaffected by small cell volumes. This allows us to use multigrid iterations with a simple point relaxation strategy. We have combined this with an adaptive mesh refinement (AMR) procedure. We provide evidence that the algorithm is second-order accurate on various exact solutions and compare the adaptive and nonadaptive calculations.
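The point relaxation that the multigrid iterations above accelerate can be sketched on a purely regular grid (no embedded boundary, homogeneous Dirichlet data; the test problem and names below are illustrative, not the authors' discretization):

```python
# Sketch: pointwise Gauss-Seidel relaxation for the 2-D Poisson equation on a
# regular Cartesian grid with homogeneous Dirichlet boundary data.  The paper
# embeds an irregular boundary in such a grid and accelerates this kind of
# relaxation with multigrid; this sketch shows only the regular-grid smoother.
import math

def solve_poisson(n=20, sweeps=500):
    """Solve laplacian(u) = f on the unit square, u = 0 on the boundary,
    with f chosen so that the exact solution is sin(pi x) sin(pi y)."""
    h = 1.0 / n
    f = [[-2.0 * math.pi ** 2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
          for j in range(n + 1)] for i in range(n + 1)]
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):
        for i in range(1, n):
            for j in range(1, n):
                # 5-point stencil: u_ij = (sum of neighbours - h^2 f_ij) / 4
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                  + u[i][j + 1] + u[i][j - 1]
                                  - h * h * f[i][j])
    return u, h

u, h = solve_poisson()
# The 5-point stencil is second-order accurate, so the max error is O(h^2).
err = max(abs(u[i][j] - math.sin(math.pi * i * h) * math.sin(math.pi * j * h))
          for i in range(21) for j in range(21))
```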
Analysis of noise in energy-dispersive spectrometers using time-domain methods
Goulding, F S
2002-01-01
This paper presents an integrated time domain approach to the optimization of the signal-to-noise ratio in all spectrometer systems that contain a detector that converts incoming quanta of radiation into electrical pulse signals that are amplified and shaped by an electronic pulse shaper. It allows analysis of normal passive pulse shapers as well as time-variant systems where switching of shaping elements occurs in synchronism with the signal. It also deals comfortably with microcalorimeters (sometimes referred to as bolometers), where noise-determining elements, such as the temperature-sensing element's resistance and temperature, change with time in the presence of a signal. As part of the purely time-domain approach, a new method of calculating the Johnson noise in resistors using only the statistics of electron motion is presented. The result is a time-domain analog of the Nyquist formula.
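The Nyquist formula that the paper's time-domain electron-statistics derivation reproduces relates the mean-square Johnson noise voltage of a resistor to its temperature and resistance; in standard form,

```latex
% Nyquist's formula for the mean-square Johnson noise voltage of a resistor
% R at absolute temperature T, measured in a bandwidth \Delta f:
\overline{v_n^2} = 4 k_B T R \,\Delta f
% equivalently, the one-sided voltage spectral density is S_v(f) = 4 k_B T R.
```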
Jang, Hae-Won; Ih, Jeong-Guon
2012-04-01
The time domain boundary element method (BEM) is associated with numerical instability that typically stems from the time marching scheme. In this work, a formulation of time domain BEM is derived to deal with all types of boundary conditions adopting a multi-input, multi-output, infinite impulse response structure. The fitted frequency domain impedance data are converted into a time domain expression as a form of an infinite impulse response filter, which can also invoke a modeling error. In the calculation, the response at each time step is projected onto the wave vector space of natural radiation modes, which can be obtained from the eigensolutions of the single iterative matrix. To stabilize the computation, unstable oscillatory modes are nullified, and the same decay rate is used for two nonoscillatory modes. As a test example, a transient sound field within a partially lined, parallelepiped box is used, within which a point source is excited by an octave band impulse. In comparison with the results of the inverse Fourier transform of a frequency domain BEM, the average of relative difference norm in the stabilized time response is found to be 4.4%.
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
Jia, Shouqing; La, Dongsheng; Ma, Xuelian
2018-04-01
The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
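For reference, the flat-space-time scheme that the equivalent-medium formulation reduces to in vacuum is the standard 1-D Yee update; the sketch below uses normalized units with the "magic" time step c·dt = dx (the field names and Gaussian source are illustrative, not taken from the paper):

```python
# Sketch: standard 1-D FDTD (Yee) update in vacuum, normalized units with
# c = 1 and dt = dx (the "magic time step", which propagates pulses without
# numerical dispersion).  A soft Gaussian source is injected at one cell.
import math

def fdtd_1d(n_cells=200, n_steps=150, src=20):
    ez = [0.0] * n_cells          # electric field samples
    hy = [0.0] * n_cells          # magnetic field, staggered half a cell
    for t in range(n_steps):
        for k in range(1, n_cells):
            ez[k] += hy[k] - hy[k - 1]                   # E from curl of H
        ez[src] += math.exp(-((t - 30.0) / 8.0) ** 2)    # soft Gaussian source
        for k in range(n_cells - 1):
            hy[k] += ez[k + 1] - ez[k]                   # H from curl of E
    return ez

ez = fdtd_1d()
```

The source splits into left- and right-going pulses of roughly half the injected amplitude; with the magic time step they travel exactly one cell per step, so at step 150 the right-going pulse sits near cell 140 and the region ahead of it remains quiet.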
Directory of Open Access Journals (Sweden)
V. Dron'
2014-04-01
Full Text Available In this work, an algorithm for establishing the existence of relationships between arbitrary socio-economic variables is given. The algorithm is based on the condition-consequence decomposition of events. It involves the construction of an event model and the use of two classifications: types of interdependencies between events and types of relationships between their attributes.
Czech Academy of Sciences Publication Activity Database
Blaheta, Radim
2002-01-01
Roč. 9, 6/7 (2002), s. 525-550 ISSN 1070-5325 Grant - others:INCO Copernicus(XE) KIT977006 Institutional research plan: CEZ:AV0Z3086906 Keywords : elasticity * displacement decomposition Subject RIV: BA - General Mathematics Impact factor: 0.706, year: 2002
Energy Technology Data Exchange (ETDEWEB)
Darbar, Devendrasinh [School of Mechanical and Building Science, Vellore Institute of Technology (VIT), Vellore 632014, Tamil Nadu (India); Department of Mechanical Engineering, National University of Singapore, 117576 (Singapore); Department of Physics, National University of Singapore, 117542 (Singapore); Reddy, M.V., E-mail: phymvvr@nus.edu.sg [Department of Physics, National University of Singapore, 117542 (Singapore); Department of Materials Science and Engineering, National University of Singapore, 117546 (Singapore); Sundarrajan, S. [Department of Mechanical Engineering, National University of Singapore, 117576 (Singapore); Pattabiraman, R. [School of Mechanical and Building Science, Vellore Institute of Technology (VIT), Vellore 632014, Tamil Nadu (India); Ramakrishna, S. [Department of Mechanical Engineering, National University of Singapore, 117576 (Singapore); Chowdari, B.V.R. [Department of Physics, National University of Singapore, 117542 (Singapore)
2016-01-15
Highlights: • MgCo{sub 2}O{sub 4} was prepared by the oxalate decomposition method and the electrospinning technique. • Electrospun and oxalate-decomposition MgCo{sub 2}O{sub 4} show reversible capacities of 795 and 227 mAh g{sup −1}, respectively, after 50 cycles. • Electrospun MgCo{sub 2}O{sub 4} shows good cycling stability and electrochemical performance. - Abstract: Magnesium cobalt oxide, MgCo{sub 2}O{sub 4}, was synthesized by the oxalate decomposition method and the electrospinning technique. The electrochemical performances, structures, phase formation and morphology of MgCo{sub 2}O{sub 4} synthesized by the two methods are compared. Scanning electron microscope (SEM) studies show spherical and fiber type morphology, respectively, for the oxalate decomposition and electrospinning methods. The electrospun nanofibers of MgCo{sub 2}O{sub 4} calcined at 650 °C showed a very good reversible capacity of 795 mAh g{sup −1} after 50 cycles, compared to the bulk material capacity of 227 mAh g{sup −1} at a current rate of 60 mA g{sup −1}. MgCo{sub 2}O{sub 4} nanofiber showed a reversible capacity of 411 mAh g{sup −1} (at cycle) at a current density of 240 mA g{sup −1}. The improved performance was attributed to the improved conductivity of MgO, which may act as a buffer layer leading to improved cycling stability. Cyclic voltammetry studies at a scan rate of 0.058 mV/s show main cathodic peaks at around 1.0 V and anodic peaks at 2.1 V vs. Li.
International Nuclear Information System (INIS)
Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo
2007-01-01
To address the non-stationary characteristics of differential pressure fluctuation signals in gas-liquid two-phase flow, as well as the slow learning convergence of BP neural networks and their liability to fall into local minima, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Squares Support Vector Machine (LS-SVM) is presented. First, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector input to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
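The singular-value feature extraction step can be sketched for a tiny two-row feature matrix using the closed form for the eigenvalues of its 2 × 2 Gram matrix (the numbers below are illustrative stand-ins for IMF feature vectors, not data from the paper):

```python
# Sketch: singular values of a small feature matrix, computed from the
# eigenvalues of A A^T (closed form when A has two rows).  In the paper each
# row would hold an intrinsic-mode-function feature vector from EMD.
import math

def singular_values_2rows(a, b):
    """Singular values of the 2 x n matrix whose rows are a and b."""
    g11 = sum(x * x for x in a)              # Gram matrix G = A A^T
    g22 = sum(x * x for x in b)
    g12 = sum(x * y for x, y in zip(a, b))
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    eig_hi, eig_lo = (tr + disc) / 2.0, (tr - disc) / 2.0
    return math.sqrt(eig_hi), math.sqrt(max(eig_lo, 0.0))

# Two "IMF feature" rows; the singular values form a compact signature that
# could then be fed to a classifier such as LS-SVM.
s1, s2 = singular_values_2rows([1.0, 1.0, 0.0], [0.0, 1.0, 1.0])
```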
The PolyMAX Frequency-Domain Method: A New Standard for Modal Parameter Estimation?
Directory of Open Access Journals (Sweden)
Bart Peeters
2004-01-01
Full Text Available Recently, a new non-iterative frequency-domain parameter estimation method was proposed. It is based on a (weighted least-squares approach and uses multiple-input-multiple-output frequency response functions as primary data. This so-called “PolyMAX” or polyreference least-squares complex frequency-domain method can be implemented in a very similar way as the industry standard polyreference (time-domain least-squares complex exponential method: in a first step a stabilisation diagram is constructed containing frequency, damping and participation information. Next, the mode shapes are found in a second least-squares step, based on the user selection of stable poles. One of the specific advantages of the technique lies in the very stable identification of the system poles and participation factors as a function of the specified system order, leading to easy-to-interpret stabilisation diagrams. This implies a potential for automating the method and to apply it to “difficult” estimation cases such as high-order and/or highly damped systems with large modal overlap. Some real-life automotive and aerospace case studies are discussed. PolyMAX is compared with classical methods concerning stability, accuracy of the estimated modal parameters and quality of the frequency response function synthesis.
Seismic response of three-dimensional topographies using a time-domain boundary element method
Janod, François; Coutant, Olivier
2000-08-01
We present a time-domain implementation for a boundary element method (BEM) to compute the diffraction of seismic waves by 3-D topographies overlying a homogeneous half-space. This implementation is chosen to overcome the memory limitations arising when solving the boundary conditions with a frequency-domain approach. This formulation is flexible because it allows one to make an adaptive use of the Green's function time translation properties: the boundary conditions solving scheme can be chosen as a trade-off between memory and cpu requirements. We explore here an explicit method of solution that requires little memory but a high cpu cost in order to run on a workstation computer. We obtain good results with four points per minimum wavelength discretization for various topographies and plane wave excitations. This implementation can be used for two different aims: the time-domain approach allows an easier implementation of the BEM in hybrid methods (e.g. coupling with finite differences), and it also allows one to run simple BEM models with reasonable computer requirements. In order to keep reasonable computation times, we do not introduce any interface and we only consider homogeneous models. Results are shown for different configurations: an explosion near a flat free surface, a plane wave vertically incident on a Gaussian hill and on a hemispherical cavity, and an explosion point below the surface of a Gaussian hill. Comparison is made with other numerical methods, such as finite difference methods (FDMs) and spectral elements.
Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.
Liu, Siwei; Molenaar, Peter
2016-01-01
This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
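The core phase-resampling recipe, a surrogate that preserves the amplitude spectrum while randomizing the phases, can be sketched as follows (this is the generic surrogate-data construction; the article's Granger-causality statistics are not reproduced here):

```python
# Sketch: a phase-resampling (phase-randomised) surrogate of a real series.
# DFT amplitudes are kept, phases are drawn uniformly at random, and
# conjugate symmetry is enforced so the surrogate is real-valued.
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_surrogate(x, rng):
    n = len(x)                                 # assume n even for simplicity
    X = dft(x)
    S = [complex(0.0)] * n
    S[0], S[n // 2] = X[0], X[n // 2]          # keep the purely real bins
    for k in range(1, n // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)  # random phase
        S[k] = abs(X[k]) * cmath.exp(1j * phi)
        S[n - k] = S[k].conjugate()            # conjugate symmetry
    return [v.real for v in idft(S)]

rng = random.Random(7)
x = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
y = phase_surrogate(x, rng)
```

By construction the surrogate shares the original's mean, variance, and power spectrum, so any frequency-domain causality measure computed on many surrogates yields a null distribution against which the observed value can be tested.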
Wu, Binlin
New near-infrared (NIR) diffuse optical tomography (DOT) approaches were developed to detect, locate, and image small targets embedded in highly scattering turbid media. The first approach, referred to as time reversal optical tomography (TROT), is based on time reversal (TR) imaging and multiple signal classification (MUSIC). The second approach uses the decomposition methods of non-negative matrix factorization (NMF) and principal component analysis (PCA) commonly used in blind source separation (BSS) problems, and compares the outcomes with those of optical imaging using independent component analysis (OPTICA). The goal is to develop a safe, affordable, noninvasive imaging modality for the detection and characterization of breast tumors in early growth stages, when they are more amenable to treatment. The efficacy of the approaches was tested using simulated data and experiments involving model media with absorptive, scattering, and fluorescent targets, as well as a "realistic human breast model" composed of ex vivo breast tissues with embedded tumors. The experimental arrangements realized continuous wave (CW) multi-source probing of samples and multi-detector acquisition of the diffusely transmitted signal in a rectangular slab geometry. A data matrix was generated using the perturbation in the transmitted light intensity distribution due to the presence of absorptive or scattering targets. For fluorescent targets, the data matrix was generated using the diffusely transmitted fluorescence signal distribution from the targets. The data matrix was analyzed using the different approaches to detect and characterize the targets. The salient features of the approaches include the ability to: (a) detect small targets; (b) provide the three-dimensional location of the targets with high accuracy (within a millimeter or two); and (c) assess the optical strength of the targets. The approaches are less computation intensive and consequently faster than other inverse image reconstruction methods that
Directory of Open Access Journals (Sweden)
SW Kang
2015-02-01
Full Text Available This article introduces an improved non-dimensional dynamic influence function (NDIF) method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The NDIF method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities compared with the finite element method. However, it requires the inefficient procedure of calculating the singularity of a system matrix in the frequency range of interest to extract eigenvalues and mode shapes. To overcome this inefficient procedure, this article proposes a practical approach that casts the system matrix equation of the concave membrane of interest into the form of an algebraic eigenvalue problem. Several case studies show that the proposed method has good convergence characteristics and yields very accurate eigenvalues compared with an exact method and the finite element method (ANSYS).
Frequency domain optical tomography using a conjugate gradient method without line search
International Nuclear Information System (INIS)
Kim, Hyun Keol; Charette, Andre
2007-01-01
A conjugate gradient method without line search (CGMWLS) is presented. This method is used to retrieve the local maps of absorption and scattering coefficients inside the tissue-like test medium, with the synthetic data. The forward problem is solved with a discrete-ordinates finite-difference method based on the frequency domain formulation of radiative transfer equation. The inversion results demonstrate that the CGMWLS can retrieve simultaneously the spatial distributions of optical properties inside the medium within a reasonable accuracy, by reducing cross-talk between absorption and scattering coefficients
Frequency domain method for the stack of seismic and radar data
Energy Technology Data Exchange (ETDEWEB)
Zhou, H; Sato, M [Tohoku University, Sendai (Japan); Xu, S
1997-10-22
For the stacking of seismic (elastic wave) and radar data, a frequency domain stacking method using the Fourier transform was proposed as a way of automatically removing time-correction errors while retaining the advantages of the conventional horizontal stacking method. Analysis of an example of waveforms with the same wave form but a time difference showed that this method not only retains the suppression of random and regular noise achieved by conventional horizontal stacking, but also preserves the resolution of the original waveform data. In the example, the noise amplitude was half that of the waveform signal; however, if it exceeds 0.85 times the signal, favorable results cannot be obtained with this method. In areas where time correction is very difficult and cannot be carried out completely, the method is also useful for acquiring high-resolution elastic wave and radar data in time domain stacking. 4 refs., 2 figs.
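The underlying idea, using correlation computed in the frequency domain to estimate and remove a residual time shift before summation, can be sketched as follows (a generic construction, not the authors' exact formulation; all names are illustrative):

```python
# Sketch: stack two traces after estimating their relative lag by circular
# cross-correlation evaluated through the DFT (correlation in time equals a
# conjugate product in frequency).
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def aligned_stack(ref, trace):
    """Shift `trace` onto `ref` by the lag maximising their circular
    cross-correlation, then average the two traces."""
    n = len(ref)
    R, T = dft(ref), dft(trace)
    xcorr = idft([R[k] * T[k].conjugate() for k in range(n)])
    lag = max(range(n), key=lambda t: xcorr[t].real)
    shifted = [trace[(t - lag) % n] for t in range(n)]
    return [(ref[t] + shifted[t]) / 2.0 for t in range(n)]

ref = [0.0] * 16
ref[5] = 1.0                       # spike at sample 5
trace = [0.0] * 16
trace[8] = 1.0                     # same spike, delayed by 3 samples
stacked = aligned_stack(ref, trace)
```

Because the mis-timed trace is re-aligned before summation, the stacked spike stays sharp instead of being smeared as it would be by a plain horizontal stack.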
Directory of Open Access Journals (Sweden)
Beidou Xi
2012-06-01
Full Text Available China’s industry accounts for 46.8% of the national Gross Domestic Product (GDP) and plays an important strategic role in its economic growth. On the other hand, industrial wastewater is also the major source of water pollution. In order to examine the relationship between the underlying driving forces and various environmental indicators, values of two critical industrial wastewater pollutant discharge parameters (Chemical Oxygen Demand (COD) and ammonia nitrogen (NH_{4}-N)), between 2001 and 2009, were decomposed into three factors: i.e., production effects (caused by change in the scale of economic activity), structure effects (caused by change in economic structure) and intensity effects (caused by change in technological level of each sector), using the additive version of the Logarithmic Mean Divisia Index (LMDI I) decomposition method. Results showed that: (1) the average annual effect of COD discharges in China was −2.99%, whereas the production effect, the structure effect, and the intensity effect were 14.64%, −1.39%, and −16.24%, respectively. Similarly, the average effect of NH_{4}-N discharges was −4.03%, while the production effect, the structure effect, and the intensity effect were 16.18%, −2.88%, and −17.33%, respectively; (2) the production effect was the major factor responsible for the increase in COD and NH_{4}-N discharges, accounting for 45% and 44% of the total contribution, respectively; (3) the intensity effect, which accounted for 50% and 48% of the total contribution, respectively, exerted a dominant decremental effect on COD and NH_{4}-N discharges; the intensity effect was further decomposed into a cleaner production effect and a pollution abatement effect, with the cleaner production effect accounting for 60% and 55% of the reduction of COD and NH_{4}-N, respectively; (4) the major contributors to incremental COD and NH_{4}-N discharges were divided among industrial sub
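The additive LMDI-I decomposition itself is compact enough to sketch. With total discharge E = Σᵢ Q·Sᵢ·Iᵢ (output scale Q, sector shares Sᵢ, sector intensities Iᵢ), each effect weights the log-change of one factor by the logarithmic mean of sector emissions, and the three effects sum exactly to ΔE. The two-sector numbers below are invented for illustration; the exact additivity is a property of the method:

```python
# Sketch: additive LMDI-I decomposition of a change in total emissions
# E = sum_i Q * S_i * I_i into production (Q), structure (S) and intensity
# (I) effects.  The identity delta_E = prod + struct + inten holds exactly.
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(Q0, S0, I0, QT, ST, IT):
    prod = struct = inten = 0.0
    for i in range(len(S0)):
        w = logmean(QT * ST[i] * IT[i], Q0 * S0[i] * I0[i])
        prod += w * math.log(QT / Q0)        # production (scale) effect
        struct += w * math.log(ST[i] / S0[i])  # structure effect
        inten += w * math.log(IT[i] / I0[i])   # intensity effect
    return prod, struct, inten

# Base year vs. end year: output Q, sector shares S, emission intensities I
# (made-up two-sector example).
Q0, S0, I0 = 100.0, [0.6, 0.4], [2.0, 1.0]
QT, ST, IT = 150.0, [0.5, 0.5], [1.5, 0.9]
prod, struct, inten = lmdi_additive(Q0, S0, I0, QT, ST, IT)
delta = (QT * sum(s * i for s, i in zip(ST, IT))
         - Q0 * sum(s * i for s, i in zip(S0, I0)))
```

In this example output growth pushes emissions up while the structure and intensity effects pull them down, mirroring the pattern reported in the abstract.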
2013-01-01
This Handbook provides comprehensive coverage of laser and coherent-domain methods as applied to biomedicine, environmental monitoring, and materials science. Worldwide leaders in these fields describe the fundamentals of light interaction with random media and present an overview of basic research. The latest results on coherent and polarization properties of light scattered by random media, including tissues and blood, speckles formation in multiple scattering media, and other non-destructive interactions of coherent light with rough surfaces and tissues, allow the reader to understand the principles and applications of coherent diagnostic techniques. The expanded second edition has been thoroughly updated with particular emphasis on novel coherent-domain techniques and their applications in medicine and environmental science. Volume 1 describes state-of-the-art methods of coherent and polarization optical imaging, tomography and spectroscopy; diffusion wave spectroscopy; elastic, quasi-elastic and inelasti...
2.5-D frequency-domain viscoelastic wave modelling using finite-element method
Zhao, Jian-guo; Huang, Xing-xing; Liu, Wei-fang; Zhao, Wei-jun; Song, Jian-yong; Xiong, Bin; Wang, Shang-xu
2017-10-01
2-D seismic modelling has notable dynamic information discrepancies with field data because of the implicit line-source assumption, whereas 3-D modelling suffers from a huge computational burden. The 2.5-D approach is able to overcome both of the aforementioned limitations. In general, the earth model is treated as an elastic material, but the real media is viscous. In this study, we develop an accurate and efficient frequency-domain finite-element method (FEM) for modelling 2.5-D viscoelastic wave propagation. To perform the 2.5-D approach, we assume that the 2-D viscoelastic media are based on the Kelvin-Voigt rheological model and a 3-D point source. The viscoelastic wave equation is temporally and spatially Fourier transformed into the frequency-wavenumber domain. Then, we systematically derive the weak form and its spatial discretization of 2.5-D viscoelastic wave equations in the frequency-wavenumber domain through the Galerkin weighted residual method for FEM. Fixing a frequency, the 2-D problem for each wavenumber is solved by FEM. Subsequently, a composite Simpson formula is adopted to estimate the inverse Fourier integration to obtain the 3-D wavefield. We implement the stiffness reduction method (SRM) to suppress artificial boundary reflections. The results show that this absorbing boundary condition is valid and efficient in the frequency-wavenumber domain. Finally, three numerical models, an unbounded homogeneous medium, a half-space layered medium and an undulating topography medium, are established. Numerical results validate the accuracy and stability of 2.5-D solutions and present the adaptability of finite-element method to complicated geographic conditions. The proposed 2.5-D modelling strategy has the potential to address modelling studies on wave propagation in real earth media in an accurate and efficient way.
Frequency domain fatigue damage estimation methods suitable for deterministic load spectra
Energy Technology Data Exchange (ETDEWEB)
Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)
2000-07-01
The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex problem, but one that offers significant savings in computation time. Various formulae have been suggested, but they usually relate to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
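The Dirlik method referred to above expresses the rainflow-amplitude distribution directly in terms of the spectral moments of the stress PSD. A sketch of the commonly published form (normalized amplitude Z = S/(2*sqrt(m0)); this is the standard formula, not the paper's deterministic-spectrum variants):

```python
# Dirlik's empirical rainflow-amplitude PDF from the spectral moments
# m0, m1, m2, m4 of a stress PSD. Coefficients follow the standard
# published form; valid for typical broadband moment combinations.
import math

def dirlik_pdf(Z, m0, m1, m2, m4):
    Xm = (m1 / m0) * math.sqrt(m2 / m4)
    gam = m2 / math.sqrt(m0 * m4)  # irregularity factor
    D1 = 2.0 * (Xm - gam * gam) / (1.0 + gam * gam)
    R = (gam - Xm - D1 * D1) / (1.0 - gam - D1 + D1 * D1)
    D2 = (1.0 - gam - D1 + D1 * D1) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (gam - D3 - D2 * R) / D1
    return (D1 / Q * math.exp(-Z / Q)
            + D2 * Z / (R * R) * math.exp(-Z * Z / (2 * R * R))
            + D3 * Z * math.exp(-Z * Z / 2))
```

By construction the three terms integrate to D1 + D2 + D3 = 1, so the density is properly normalized whatever the moments, which is a convenient sanity check.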
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion easily fall into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in real seismic fields, which makes inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of the inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, very low frequencies are usually absent from field data. To address this, we propose a joint method of envelope inversion combined with hybrid-domain FWI, coupling envelope inversion in the time domain with inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation substantially improves computational speed.
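Envelope extraction, one of the per-node GPU tasks listed above, is conventionally done through the analytic signal. A minimal pure-Python sketch using a DFT-based Hilbert transform (an O(N^2) illustration for clarity; a production code would use an FFT, and this is a generic construction rather than the authors' kernel):

```python
# Signal envelope |x + i*H[x]| via a DFT-based Hilbert transform.
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def envelope(x):
    """Instantaneous amplitude of a real signal via its analytic signal."""
    N = len(x)
    X = dft(x)
    h = [0.0] * N          # spectral mask: keep DC/Nyquist, double positive freqs
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        for k in range(1, N // 2):
            h[k] = 2.0
    else:
        for k in range(1, (N + 1) // 2):
            h[k] = 2.0
    return [abs(z) for z in idft([Xk * hk for Xk, hk in zip(X, h)])]
```

For a pure cosine with an integer number of cycles per window, the recovered envelope is exactly unity, which makes a convenient unit test.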
Directory of Open Access Journals (Sweden)
Kyoung-Rok Lee
2013-12-01
Full Text Available A floating Oscillating Water Column (OWC) wave energy converter, a Backward Bent Duct Buoy (BBDB), was simulated using a state-of-the-art, two-dimensional, fully-nonlinear Numerical Wave Tank (NWT) technique. The hydrodynamic performance of the floating OWC device was evaluated in the time domain. The acceleration potential method, with a fully-updated kernel matrix calculation associated with a mode decomposition scheme, was implemented to obtain accurate estimates of the hydrodynamic force and displacement of a freely floating BBDB. The developed NWT was based on potential theory and the boundary element method with constant panels on the boundaries. The mixed Eulerian-Lagrangian (MEL) approach was employed to capture the nonlinear free surfaces inside the chamber that interacted with a pneumatic pressure, induced by the time-varying airflow velocity at the air duct. A special viscous damping was applied to the chamber free surface to represent the viscous energy loss due to the BBDB's shape and motions. The viscous damping coefficient was selected by comparison with experimental data. The calculated surface elevation, inside and outside the chamber, with the tuned viscous damping correlated reasonably well with the experimental data for various incident wave conditions. The conservation of the total wave energy in the computational domain was confirmed over the entire range of wave frequencies.
International Nuclear Information System (INIS)
Li, Ping; Zhou, Zhen; Xu, Hongbin; Zhang, Yi
2012-01-01
Highlights: ► Synthesis of Cr(OH)3 nanoparticles in Cr3+–F− aqueous solution. ► The F− ion tailors coagulated materials; Cr(OH)3 nanoparticles are obtained. ► Adding nanosized Cr(OH)3, the AP thermal decomposition temperature decreases to 200 °C. ► The nanosized Cr(OH)3 catalyzes NH3 oxidation, accelerating AP thermal decomposition. - Abstract: A procedure for the preparation of spherical Cr(OH)3 nanoparticles was developed based on the aging of chromium nitrate aqueous solutions in the presence of sodium fluoride, urea, and polyvinylpyrrolidone. Using scanning electron microscopy, transmission electron microscopy, and energy dispersive spectroscopy, it was shown that the morphological characteristics of Cr(OH)3 could be controlled by altering the molar ratio of fluoride ion to chromium ion, as well as the initial pH and chromium ion concentration. As a catalyst, the prepared nanosized Cr(OH)3 decreased the temperature required to decompose ammonium perchlorate from 450 °C to about 250 °C. The possible catalytic mechanism of the thermal decomposition of ammonium perchlorate is also discussed.
Czech Academy of Sciences Publication Activity Database
Matoušek, Petr; Hodný, Zdeněk; Švandová, I.; Svoboda, Petr
2003-01-01
Roč. 81, č. 6 (2003), s. 365-372 ISSN 0829-8211 R&D Projects: GA MŠk LN00A026 Grant - others:Wellcome Trust(GB) xx Institutional research plan: CEZ:AV0Z5011922; CEZ:MSM 113100003; CEZ:AV0Z5039906 Keywords: membrane domain * G protein * two-dimensional electrophoresis * GPI-anchored proteins Subject RIV: CE - Biochemistry Impact factor: 2.456, year: 2003
A Compact Unconditionally Stable Method for Time-Domain Maxwell's Equations
Directory of Open Access Journals (Sweden)
Zhuo Su
2013-01-01
Full Text Available Higher-order unconditionally stable methods are effective for simulating field behaviors of electromagnetic problems since they are free of Courant-Friedrichs-Lewy conditions. The development of accurate schemes with less computational expenditure is desirable. A compact fourth-order split-step unconditionally-stable finite-difference time-domain method (C4OSS-FDTD) is proposed in this paper. This method is based on a four-step splitting form in time, constructed using symmetric operators and uniform splitting. The introduction of a spatial compact operator further improves its performance. Analyses of stability and numerical dispersion are carried out. Compared with its noncompact counterpart, the proposed method has reduced computational expenditure while keeping the same level of accuracy. Comparisons with other compact unconditionally-stable methods are provided. Numerical dispersion and anisotropy errors are shown to be lower than those of previous compact unconditionally-stable methods.
White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki
2017-01-01
The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has recently been implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented-stencil, weighted-linear-least-squares, cell-average gradient method; a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method; and a cell-face gradient method that contains a cell-skewness-sensitive damping term derived using hyperbolic-diffusion-based concepts. A data-parallel, matrix-based, symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law of the wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry.
Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire
2017-02-01
In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with high electronic noise. In order to improve the image quality, Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before the images are calculated. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.
DEFF Research Database (Denmark)
Santillan, Arturo Orozco
2011-01-01
The aim of the work described in this paper has been to investigate the use of the finite-difference time-domain method to describe the interactions between a moving object and a sound field. The main objective was to simulate oscillational instabilities that appear in single-axis acoustic levitation devices and to describe their evolution in time to further understand the physical mechanism involved. The study shows that the method gives accurate results for steady-state conditions, and that it is a promising tool for simulations with a moving object.
A wavelet domain adaptive image watermarking method based on chaotic encryption
Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun
2009-10-01
A digital watermarking technique is a specific branch of steganography that can be used in various applications and provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image digital watermarking method using chaotic stream encryption and the visual properties of the human eye. The secret information, which can be seen as a watermark, is hidden in a host image that can be publicly accessed, so the transportation of the secret information will not attract the attention of an illegal receiver. The experimental results show that the method is invisible and robust against some image processing operations.
Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)
2016-07-07
For problems involving a large material deformation rate, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems, it is difficult to obtain a constitutive relation, and history dependency becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method. Molecular dynamics (MD) simulation is performed to calculate stress. Since communication among material points is not necessary, the computation can be done in an embarrassingly parallel manner on a CPU-GPU platform.
A method of producing garnet materials for use in circular magnetic domain devices
International Nuclear Information System (INIS)
Gill, G.P.
1976-01-01
A method is described for producing iron garnet materials for use in circular magnetic domain devices. It comprises providing material having complex domain wall behaviour, and implanting ions having an atomic number of at least 15 into the material. The energy and dose of the ions are such that the lattice is expanded and its crystallinity preserved, and the lattice expansion is such that the complex domain wall behaviour is substantially eliminated. The ions should have an energy in the range 100 to 500 keV and the dose should be in the range 10^12 to 10^14 ions/cm^2. The implanted ions may be Ar, Sm, Te, or Lu. It is thought that the use of rare earth ions allows the magnetostriction constant of the implanted ion to operate in addition to that of the implanted garnet. An advantage of the method is that the doses used for implantation with Ar or rare earth ions are lower than for implantation with lighter ions, thereby allowing implantations to be performed in a shorter time for the same beam current density. (UK)
2004-01-01
For the first time in one set of books, coherent-domain optical methods are discussed in the framework of various applications, which are characterized by strong light scattering. A few chapters describe basic research containing updated results on coherent and polarized light non-destructive interactions with a scattering medium, in particular diffraction, interference, and speckle formation under multiple scattering. These chapters allow for understanding the coherent-domain diagnostic techniques presented in later chapters. A large portion of Volume I is dedicated to analysis of various aspects of optical coherence tomography (OCT) - a very new and growing field of coherent optics. Two chapters on laser scanning confocal microscopy give insight into recent extraordinary results on in vivo imaging and compare the possibilities and achievements of confocal, multiphoton excitation, and OCT microscopy. This two-volume reference contains descriptions of holography, interferometry and optical heterodyning techniques...
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Use of the finite-difference time-domain method in electromagnetic dosimetry
International Nuclear Information System (INIS)
Sullivan, D.M.
1987-01-01
Although there are acceptable methods for calculating whole-body electromagnetic absorption, no completely acceptable method for calculating the local specific absorption rate (SAR) at points within the body has been developed. Frequency-domain methods, such as the method of moments (MoM), have achieved some success; however, the MoM requires computer storage on the order of (3N)^2 and computation time on the order of (3N)^3, where N is the number of cells. The finite-difference time-domain (FDTD) method has been employed extensively in calculating the scattering from metallic objects, and recently is seeing some use in calculating the interaction of EM fields with complex, lossy dielectric bodies. Since the FDTD method has storage and time requirements proportional to N, it presents an attractive alternative for calculating SAR distribution in large bodies. This dissertation describes the FDTD method and evaluates it by comparing its results with analytic solutions in 2 and 3 dimensions. The results obtained demonstrate that the FDTD method is capable of calculating internal SAR distribution with acceptable accuracy. The construction of a data base to provide detailed, inhomogeneous man models for use with the FDTD method is described. Using this construction method, a model of 40,000 1.31 cm cells is developed for use at 350 MHz, and another model consisting of 5000 2.62 cm cells is developed for use at 100 MHz. To add more realism to the problem, a ground plane is added to the FDTD software. The needed changes to the software are described, along with a test which confirms its accuracy. Using the CRAY II supercomputer, SAR distributions in human models are calculated using incident frequencies of 100 MHz and 350 MHz for three different cases: (1) a homogeneous man model in free space, (2) an inhomogeneous man model in free space, and (3) an inhomogeneous man model standing on a ground plane
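The storage argument above (O(N) for FDTD versus O((3N)^2) for the MoM) is visible even in a minimal 1-D Yee update: only two field arrays of length N are ever held. The following sketch uses normalized units, a Courant number of 0.5, and an illustrative hard Gaussian source; it is a generic textbook scheme, not the dosimetry models described:

```python
# Minimal 1-D free-space FDTD (Yee) update. Normalized units with
# Courant number 0.5; grid size, source position, and pulse width
# are illustrative choices. Storage is O(cells), as noted above.
import math

def fdtd_1d(cells=200, steps=100, kc=100, t0=40.0, spread=12.0):
    ex = [0.0] * cells   # electric field
    hy = [0.0] * cells   # magnetic field (staggered half a cell)
    for n in range(steps):
        for k in range(1, cells):
            ex[k] += 0.5 * (hy[k - 1] - hy[k])
        ex[kc] = math.exp(-0.5 * ((t0 - n) / spread) ** 2)  # hard Gaussian source
        for k in range(cells - 1):
            hy[k] += 0.5 * (ex[k] - ex[k + 1])
    return ex
```

After 100 steps the Gaussian pulse has split and propagated symmetrically away from the source, while cells far from it remain undisturbed.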
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation.
International Nuclear Information System (INIS)
Boergers, C.; Peskin, C.S.
1987-01-01
In the Lagrangian fractional step method introduced in this paper, the fluid velocity and pressure are defined on a collection of N fluid markers. At each time step, these markers are used to generate a Voronoi diagram, and this diagram is used to construct finite-difference operators corresponding to the divergence, gradient, and Laplacian. The splitting of the Navier-Stokes equations leads to discrete Helmholtz and Poisson problems, which we solve using a two-grid method. The nonlinear convection terms are modeled simply by the displacement of the fluid markers. We have implemented this method on a periodic domain in the plane. We describe an efficient algorithm for the numerical construction of periodic Voronoi diagrams, and we report on numerical results which indicate that the fractional step method is convergent of first order. The overall work per time step is proportional to N log N. copyright 1987 Academic Press, Inc
A new Monte Carlo method for neutron noise calculations in the frequency domain
International Nuclear Information System (INIS)
Rouchon, Amélie; Zoia, Andrea; Sanchez, Richard
2017-01-01
Neutron noise equations, which are obtained by assuming small perturbations of macroscopic cross sections around a steady-state neutron field and by subsequently taking the Fourier transform in the frequency domain, have usually been solved by analytical techniques or by resorting to diffusion theory. A stochastic approach has recently been proposed in the literature that uses particles with complex-valued weights together with a weight-cancellation technique. We develop a new Monte Carlo algorithm that solves the transport neutron noise equations in the frequency domain. The stochastic method presented here relies on a modified collision operator and does not need any weight-cancellation technique. In this paper, both Monte Carlo methods are compared with deterministic methods (diffusion in a slab geometry and transport in a simplified rod model) for several noise frequencies and for isotropic and anisotropic noise sources. Our stochastic method shows better performance in the frequency region of interest and is easier to implement because it relies upon the conventional algorithm for fixed-source problems.
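The complex-valued-weight device mentioned above can be illustrated outside transport theory with a toy estimator: a frequency-domain expectation E[exp(i*w*X)] is estimated by attaching a complex weight to each sampled particle. This is only an illustration of the weighting idea, not either paper's noise algorithm:

```python
# Toy Monte Carlo estimator with complex-valued weights: estimates the
# characteristic function E[exp(i*w*X)] for X ~ Exp(1), whose exact
# value is 1/(1 - i*w). Sample count and seed are illustrative.
import cmath, random

def complex_weight_mc(w, n=200_000, seed=0):
    rng = random.Random(seed)
    acc = 0j
    for _ in range(n):
        x = rng.expovariate(1.0)
        acc += cmath.exp(1j * w * x)  # complex weight carried by the sample
    return acc / n
```

Because each weight has unit modulus, the estimator's variance is bounded per component and the usual 1/sqrt(n) convergence applies to the real and imaginary parts separately.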
International Nuclear Information System (INIS)
Saravanan, R.; Santhi, Kalavathy; Sivakumar, N.; Narayanan, V.; Stephen, A.
2012-01-01
Zinc oxide nanorods and diluted magnetic semiconducting Ni-doped ZnO nanorods were prepared by the thermal decomposition method. This method is simple and cost-effective. The decomposition temperature of the acetate and the formation of the oxide were determined by TGA before the actual synthesis process. The X-ray diffraction results indicate the single-phase hexagonal structure of zinc oxide. The transmission electron microscopy and scanning electron microscopy images show the rod-like structure of the ZnO and Ni-doped ZnO samples, with a diameter of ∼ 35 nm and lengths of a few micrometers. The surface analysis was performed using X-ray photoelectron spectroscopy. The Ni-doped ZnO exhibits room-temperature ferromagnetism. These diluted magnetic semiconducting Ni-doped ZnO nanorods find application in spintronics. - Highlights: ► The method used is very simple and cost-effective compared to other methods for the preparation of DMS materials. ► ZnO and Ni-doped ZnO nanorods. ► Ferromagnetism at room temperature
Hybrid Fourier pseudospectral/discontinuous Galerkin time-domain method for wave propagation
Pagán Muñoz, Raúl; Hornikx, Maarten
2017-11-01
The Fourier Pseudospectral time-domain (Fourier PSTD) method was shown to be an efficient way of modelling acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly staircase-like boundary shapes. This paper presents a hybrid approach to solve the LEE, coupling Fourier PSTD with a nodal Discontinuous Galerkin (DG) method. DG exhibits almost no restrictions with respect to geometrical complexity or boundary conditions. The aim of this novel method is to allow the computation of complex geometries and to be a step towards the implementation of frequency dependent boundary conditions by using the benefits of DG at the boundaries, while keeping the efficient Fourier PSTD in the bulk of the domain. The hybridization approach is based on conformal meshes to avoid spatial interpolation of the DG solutions when transferring values from DG to Fourier PSTD, while the data transfer from Fourier PSTD to DG is done utilizing spectral interpolation of the Fourier PSTD solutions. The accuracy of the hybrid approach is presented for one- and two-dimensional acoustic problems and the main sources of error are investigated. It is concluded that the hybrid methodology does not introduce significant errors compared to the Fourier PSTD stand-alone solver. An example of a cylinder scattering problem is presented and accurate results have been obtained when using the proposed approach. Finally, no instabilities were found during long-time calculation using the current hybrid methodology on a two-dimensional domain.
A method for calculation of finite fatigue life under multiaxial loading in high-cycle domain
Directory of Open Access Journals (Sweden)
M. Malnati
2014-04-01
Full Text Available A method for fatigue life assessment in the high-cycle domain under multiaxial loading is presented in this paper. This approach allows fatigue assessment under any kind of load history, without limitations. The methodology relies on the construction, at a macroscopic level, of an "indicator" in the form of a set of cycles representing the plasticity that can arise at the mesoscopic level throughout the fatigue process. As the loading history advances, new cycles are created and a continuous evaluation of the damage is made.
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
Energy Technology Data Exchange (ETDEWEB)
Patel, Saumil S. [Argonne National Lab. (ANL), Argonne, IL (United States); Fischer, Paul F. [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Illinois, Urbana-Champaign, IL (United States); Min, Misun [Argonne National Lab. (ANL), Argonne, IL (United States); Tomboulides, Ananias G [Argonne National Lab. (ANL), Argonne, IL (United States); Aristotle Univ., Thessaloniki (Greece)
2017-10-21
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
Merrikh-Bayat, Farshad
2011-04-01
One main approach for time-domain simulation of the linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with the delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to the unit step command or the unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
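The Oustaloup continuous approximation used as the comparison baseline above replaces the fractional operator s^alpha with a band-limited chain of stable poles and zeros. A sketch of the classical recursive form (band edges, order, and the single-frequency check are illustrative; the paper's stability-preserving construction differs):

```python
# Classical Oustaloup recursive approximation of s**alpha on [wb, wh]:
# s**alpha ~ K * prod_k (s + z_k) / (s + p_k), with geometrically
# spaced zeros/poles and gain K = wh**alpha. Band and order assumed.
import cmath, math

def oustaloup_zpk(alpha, wb=0.01, wh=100.0, N=4):
    """Zeros, poles, and gain approximating s**alpha over [wb, wh]."""
    zeros, poles = [], []
    for k in range(-N, N + 1):
        zeros.append(wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1)))
        poles.append(wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1)))
    return zeros, poles, wh ** alpha

def freq_response(alpha, w, **kw):
    """Evaluate the approximation at s = j*w."""
    z, p, K = oustaloup_zpk(alpha, **kw)
    s = 1j * w
    G = K
    for zk, pk in zip(z, p):
        G *= (s + zk) / (s + pk)
    return G
```

At the geometric center of the band the approximation should reproduce the exact fractional response |(jw)^alpha| = w^alpha with phase alpha*pi/2, which gives a simple correctness check.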
International Nuclear Information System (INIS)
Evett, S.R.
2000-01-01
Soil-water measurements encounter particular problems related to the physics of the method used. For time domain reflectometry (TDR), these relate to wave form shape changes caused by soil, soil water, and TDR probe properties. Methods of wave form interpretation that overcome these problems are discussed and specific computer algorithms are presented. Neutron scattering is well understood, but calibration methods remain critical to accuracy and precision, and are discussed with recommendations for field calibration and use. Capacitance probes tend to exhibit very small radii of influence, thus are sensitive to small-scale changes in soil properties, and are difficult or impossible to field calibrate. Field comparisons of neutron and capacitance probes are presented. (author)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
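The CMIF mentioned above is, in practice, the set of singular values of the multi-reference FRF matrix at each spectral line; modes appear as peaks of the largest singular value. A minimal sketch for the two-reference, two-output case using a closed-form Hermitian eigensolve (a general implementation would call an SVD routine; the 2x2 restriction is an illustrative assumption):

```python
# Complex Mode Indicator Function building block: singular values of a
# 2x2 complex FRF matrix H(w), computed from the Hermitian matrix H^H H.
import math

def cmif_2x2(H):
    """Singular values (descending) of a 2x2 complex FRF matrix H."""
    # Entries of A = H^H H (Hermitian, positive semi-definite)
    a = sum(abs(H[i][0]) ** 2 for i in range(2))
    d = sum(abs(H[i][1]) ** 2 for i in range(2))
    b = sum(H[i][0].conjugate() * H[i][1] for i in range(2))
    tr, det = a + d, a * d - abs(b) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return [math.sqrt(max(lam1, 0.0)), math.sqrt(max(lam2, 0.0))]
```

Sweeping this over frequency and plotting both singular values separates closely spaced or repeated modes that a single FRF magnitude plot would merge.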
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, the Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation.
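The PR and ER definitions above both require the maximum SRS, i.e. the peak response of a bank of damped single-degree-of-freedom oscillators. A self-contained sketch of one such oscillator response (RK4 integration of the base-excitation equation; all parameters are illustrative, and a production SRS code would use a ramp-invariant digital filter instead):

```python
# Peak absolute-acceleration response of a damped SDOF oscillator
# (natural frequency fn in Hz, damping ratio zeta) to a sampled base
# acceleration a(t) — the per-band building block of an SRS computation.
import math

def sdof_peak_accel(accel, dt, fn, zeta=0.05):
    wn = 2 * math.pi * fn
    z, v = 0.0, 0.0          # relative displacement and velocity
    peak = 0.0

    def deriv(z, v, a):      # z'' + 2*zeta*wn*z' + wn^2*z = -a
        return v, -2 * zeta * wn * v - wn * wn * z - a

    for a in accel:          # RK4 with zero-order-hold forcing per step
        k1z, k1v = deriv(z, v, a)
        k2z, k2v = deriv(z + 0.5 * dt * k1z, v + 0.5 * dt * k1v, a)
        k3z, k3v = deriv(z + 0.5 * dt * k2z, v + 0.5 * dt * k2v, a)
        k4z, k4v = deriv(z + dt * k3z, v + dt * k3v, a)
        z += dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        abs_acc = -(2 * zeta * wn * v + wn * wn * z)  # absolute accel = a + z''
        peak = max(peak, abs(abs_acc))
    return peak
```

A convenient sanity check is a long sine at the oscillator's own natural frequency: the steady-state amplification approaches Q = 1/(2*zeta), about 10 for 5% damping.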
International Nuclear Information System (INIS)
De-León-Prado, Laura Elena; Cortés-Hernández, Dora Alicia; Almanza-Robles, José Manuel; Escobedo-Bocardo, José Concepción; Sánchez, Javier; Reyes-Rdz, Pamela Yajaira; Jasso-Terán, Rosario Argentina; Hurtado-López, Gilberto Francisco
2017-01-01
This work reports the synthesis of Mg x Mn 1−x Fe 2 O 4 (x=0–1) nanoparticles by both sol-gel and thermal decomposition methods. In order to determine the effect of synthesis conditions on the crystal structure and magnetic properties of the ferrites, the synthesis was carried out varying some parameters, including composition. By both methods it was possible to obtain ferrites having a single crystalline phase with cubic inverse spinel structure and a behavior near to that of superparamagnetic materials. Saturation magnetization values were higher for materials synthesized by sol-gel. Furthermore, in both cases particles have a spherical-like morphology and nanometric sizes (11–15 nm). Therefore, these materials can be used as thermoseeds for the treatment of cancer by magnetic hyperthermia. - Highlights: • Mg–Mn ferrites were synthesized by sol-gel and thermal decomposition methods. • Materials showed a single cubic inverse spinel crystalline structure. • Ferrites have a soft ferrimagnetic behavior close to superparamagnetic materials.
Combining biophysical methods to analyze the disulfide bond in SH2 domain of C-terminal Src kinase.
Liu, Dongsheng; Cowburn, David
2016-01-01
The Src Homology 2 (SH2) domain is a structurally conserved protein domain that typically binds to a phosphorylated tyrosine in a peptide motif from the target protein. The SH2 domain of C-terminal Src kinase (Csk) contains a single disulfide bond, which is unusual for most SH2 domains. Although the global motion of SH2 domain regulates Csk function, little is known about the relationship between the disulfide bond and binding of the ligand. In this study, we combined X-ray crystallography, solution NMR, and other biophysical methods to reveal the interaction network in Csk. Denaturation studies have shown that disulfide bond contributes significantly to the stability of SH2 domain, and crystal structures of the oxidized and C122S mutant showed minor conformational changes. We further investigated the binding of SH2 domain to a phosphorylated peptide from Csk-binding protein upon reduction and oxidation using both NMR and fluorescence approaches. This work employed NMR, X-ray cryptography, and other biophysical methods to study a disulfide bond in Csk SH2 domain. In addition, this work provides in-depth understanding of the structural dynamics of Csk SH2 domain.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas; Osher, Stanley; Schö nlieb, Carola-Bibiane
2012-01-01
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems
3D airborne EM modeling based on the spectral-element time-domain (SETD) method
Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.
2017-12-01
In the field of 3D airborne electromagnetic (AEM) modeling, both finite-difference time-domain (FDTD) method and finite-element time-domain (FETD) method have limitations that FDTD method depends too much on the grids and time steps, while FETD requires large number of grids for complex structures. We propose a time-domain spectral-element (SETD) method based on GLL interpolation basis functions for spatial discretization and Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can contribute to an accurate result by increasing the order of polynomials and suppressing spurious solution. BE method is a stable tine discretization technique that has no limitation on time steps and can guarantee a higher accuracy during the iteration process. To minimize the non-zero number of sparse matrix and obtain a diagonal mass matrix, we apply the reduced order integral technique. A direct solver with its speed independent of the condition number is adopted for quickly solving the large-scale sparse linear equations system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time lapse 10-6-10-2s for different physical meshes and SE orders. The results show that the relative errors for magnetic field B and magnetic induction are both around 3-5%. Further we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both 1D and 3D model, we draw the conclusions that: 1) SETD can deliver an accurate results for both dB/dt and B; 2) increasing SE order improves the modeling accuracy for early to middle time channels when the EM field diffuses fast so the high-order SE can model the detailed variation; 3) at very late time channels, increasing SE order has little improvement on modeling accuracy, but the time interval plays
From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy
Laycock, Silas G. T.
2017-07-01
In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biassed view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence, must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron star or black hole accreting from a stellar companion) under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods, Lincoln-Peterson, Schnabel estimators, related techniques, and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced which is demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle, in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within python.
Curvelet-domain multiple matching method combined with cubic B-spline function
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Since the large amount of surface-related multiple existed in the marine data would influence the results of data processing and interpretation seriously, many researchers had attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a little number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function on these basis points to reconstruct the matching array; third, build constraint solving equation based on the relationships of predicted multiple, matching coefficients, and actual data; finally, use the BFGS algorithm to iterate and realize the fast-solving sparse constraint of multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between predicted multiple and original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.
Time-domain least-squares migration using the Gaussian beam summation method
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
The development of efficient numerical time-domain modeling methods for geophysical wave propagation
Zhu, Lieyuan
This Ph.D. dissertation focuses on the numerical simulation of geophysical wave propagation in the time domain including elastic waves in solid media, the acoustic waves in fluid media, and the electromagnetic waves in dielectric media. This thesis shows that a linear system model can describe accurately the physical processes of those geophysical waves' propagation and can be used as a sound basis for modeling geophysical wave propagation phenomena. The generalized stability condition for numerical modeling of wave propagation is therefore discussed in the context of linear system theory. The efficiency of a series of different numerical algorithms in the time-domain for modeling geophysical wave propagation are discussed and compared. These algorithms include the finite-difference time-domain method, pseudospectral time domain method, alternating directional implicit (ADI) finite-difference time domain method. The advantages and disadvantages of these numerical methods are discussed and the specific stability condition for each modeling scheme is carefully derived in the context of the linear system theory. Based on the review and discussion of these existing approaches, the split step, ADI pseudospectral time domain (SS-ADI-PSTD) method is developed and tested for several cases. Moreover, the state-of-the-art stretched-coordinate perfect matched layer (SCPML) has also been implemented in SS-ADI-PSTD algorithm as the absorbing boundary condition for truncating the computational domain and absorbing the artificial reflection from the domain boundaries. After algorithmic development, a few case studies serve as the real-world examples to verify the capacities of the numerical algorithms and understand the capabilities and limitations of geophysical methods for detection of subsurface contamination. The first case is a study using ground penetrating radar (GPR) amplitude variation with offset (AVO) for subsurface non-aqueous-liquid (NAPL) contamination. The
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage as it allows to spatially sample close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation it has been opted to model the physical geometry as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components, Blender, Numpy and Python, and has been published under an open source license itself as well. For accelerating the software, an option has been included to accelerate the calculations by a partial implementation of the code on the Graphical Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
Energy Technology Data Exchange (ETDEWEB)
Okuda, Mitsunobu, E-mail: okuda.m-ky@nhk.or.jp; Miyamoto, Yasuyoshi; Miyashita, Eiichi; Hayashi, Naoto [NHK Science and Technology Research Laboratories, 1-10-11 Kinuta Setagaya, Tokyo 157-8510 (Japan)
2014-05-07
Current-driven magnetic domain wall motions in magnetic nanowires have attracted great interests for physical studies and engineering applications. The magnetic force microscope (MFM) is widely used for indirect verification of domain locations in nanowires, where relative magnetic force between the local domains and the MFM probe is used for detection. However, there is an occasional problem that the magnetic moments of MFM probe influenced and/or rotated the magnetic states in the low-moment nanowires. To solve this issue, the “magnetic domain scope for wide area with nano-order resolution (nano-MDS)” method has been proposed recently that could detect the magnetic flux distribution from the specimen directly by scanning of tunneling magnetoresistive field sensor. In this study, magnetic domain structure in nanowires was investigated by both MFM and nano-MDS, and the leakage magnetic flux density from the nanowires was measured quantitatively by nano-MDS. Specimen nanowires consisted from [Co (0.3)/Pd (1.2)]{sub 21}/Ru(3) films (units in nm) with perpendicular magnetic anisotropy were fabricated onto Si substrates by dual ion beam sputtering and e-beam lithography. The length and the width of the fabricated nanowires are 20 μm and 150 nm. We have succeeded to obtain not only the remanent domain images with the detection of up and down magnetizations as similar as those by MFM but also magnetic flux density distribution from nanowires directly by nano-MDS. The obtained value of maximum leakage magnetic flux by nano-MDS is in good agreement with that of coercivity by magneto-optical Kerr effect microscopy. By changing the protective diamond-like-carbon film thickness on tunneling magnetoresistive sensor, the three-dimensional spatial distribution of leakage magnetic flux could be evaluated.
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
A frequency-domain method for solving linear time delay systems with constant coefficients
Jin, Mengshi; Chen, Wei; Song, Hanwen; Xu, Jian
2018-03-01
In an active control system, time delay will occur due to processes such as signal acquisition and transmission, calculation, and actuation. Time delay systems are usually described by delay differential equations (DDEs). Since it is hard to obtain an analytical solution to a DDE, numerical solution is of necessity. This paper presents a frequency-domain method that uses a truncated transfer function to solve a class of DDEs. The theoretical transfer function is the sum of infinite items expressed in terms of poles and residues. The basic idea is to select the dominant poles and residues to truncate the transfer function, thus ensuring the validity of the solution while improving the efficiency of calculation. Meanwhile, the guideline of selecting these poles and residues is provided. Numerical simulations of both stable and unstable delayed systems are given to verify the proposed method, and the results are presented and analysed in detail.
Kaneko, Hiromasa
2018-02-26
To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.
International Nuclear Information System (INIS)
Wei Biao; Feng Peng; Yang Fan; Ren Yong
2014-01-01
To deal with the disadvantages of the homogeneous signature of the nuclear material identification system (NMIS) and limited methods to extract the characteristic parameters of the nuclear materials, an enhanced method using the combination of the Time-of-Flight (TOF) and the Pulse Shape Discrimination (PSD) was introduced into the traditional characteristic parameters extraction and recognition system of the NMIS. With the help of the PSD, the γ signal and the neutron signal can be discriminated. Further based on the differences of the neutron-γ flight time of the detectors in various positions, a new time-domain signature reflecting the position information of unknown nuclear material was investigated. The simulation result showed that the algorithm is feasible and helpful to identify the relative position of unknown nuclear material. (authors)
A time-domain method to generate artificial time history from a given reference response spectrum
Energy Technology Data Exchange (ETDEWEB)
Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2016-06-15
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
Gonçalves, Rui; Farzamian, Mohammad; Monteiro Santos, Fernando A.; Represas, Patrícia; Mota Gomes, A.; Lobo de Pina, A. F.; Almeida, Eugénio P.
2017-11-01
Santiago Island, the biggest and most populated island of the Cape Verde Republic, is characterised by limited surface waters and strong dependence on groundwater sources as the primary source of natural water supply for extensive agricultural activity and human use. However, as a consequence of the scarce precipitation and high evaporation as well as the intense overexploitation of the groundwater resources, the freshwater management is also in a delicate balance with saltwater at coastal areas. The time-domain electromagnetic (TDEM) method is used to locate the extent of saltwater intrusion in four important agricultural regions in Santiago Island; São Domingos, Santa Cruz, São Miguel, and Tarrafal. The application of this method in Santiago Island proves it to be a successful tool in imaging the fresh/saltwater interface location. Depths to the saline zones and extensions of saline water are mapped along eight TDEM profiles.
A time-domain method to generate artificial time history from a given reference response spectrum
International Nuclear Information System (INIS)
Shin, Gang Sik; Song, Oh Seop
2016-01-01
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance
Simulation of acoustic streaming by means of the finite-difference time-domain method
DEFF Research Database (Denmark)
Santillan, Arturo Orozco
2012-01-01
Numerical simulations of acoustic streaming generated by a standing wave in a narrow twodimensional cavity are presented. In this case, acoustic streaming arises from the viscous boundary layers set up at the surfaces of the walls. It is known that streaming vortices inside the boundary layer have...... directions of rotation that are opposite to those of the outer streaming vortices (Rayleigh streaming). The general objective of the work described in this paper has been to study the extent to which it is possible to simulate both the outer streaming vortices and the inner boundary layer vortices using...... the finite-difference time-domain method. To simplify the problem, thermal effects are not considered. The motivation of the described investigation has been the possibility of using the numerical method to study acoustic streaming, particularly under non-steady conditions. Results are discussed for channels...
International Nuclear Information System (INIS)
Liu, J.; Lan, T.; Qin, H.
2017-01-01
Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.
An Improved Filtering Method for Quantum Color Image in Frequency Domain
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The underlying principle used for constructing the proposed quantum filters is to use the principle of the quantum Oracle to implement the filter function. Compared with the existing methods, our method is not only suitable for color images, but also can flexibly design the notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantages of the quantum frequency filtering lies in the exploitation of the efficient implementation of the quantum Fourier transform.
A hybrid method of estimating pulsating flow parameters in the space-time domain
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto
2016-05-01
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
International Nuclear Information System (INIS)
Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli
2016-01-01
Highlights: • Variational mode decomposition is adopted to process original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis nowadays, the exploitation and utilization of new clean energy gains more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy in China. However, the intermittency and volatility of wind power has restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on the power systems. In this paper, a novel combined model is proposed to improve the prediction performance for the short-term wind power forecasting. Variational mode decomposition is firstly adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring sample entropy of the decomposed modes. Then the base models can be established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
Pepper, Lauren R; Parthasarathy, Ranganath; Robbins, Gregory P; Dang, Nicholas N; Hammer, Daniel A; Boder, Eric T
2013-08-01
The inserted (I) domain of αLβ2 integrin (LFA-1) contains the entire binding site of the molecule. It mediates both rolling and firm adhesion of leukocytes at sites of inflammation depending on the activation state of the integrin. The affinity change of the entire integrin can be mimicked by the I domain alone through mutations that affect the conformation of the molecule. High-affinity mutants of the I domain have been discovered previously using both rational design and directed evolution. We have found that binding affinity fails to dictate the behavior of I domain adhesion under shear flow. In order to better understand I domain adhesion, we have developed a novel panning method to separate yeast expressing a library of I domain variants on the surface by adhesion under flow. Using conditions analogous to those experienced by cells interacting with the post-capillary vascular endothelium, we have identified mutations supporting firm adhesion that are not found using typical directed evolution techniques that select for tight binding to soluble ligands. Mutants isolated using this method do not cluster with those found by sorting with soluble ligand. Furthermore, these mutants mediate shear-driven cell rolling dynamics decorrelated from binding affinity, as previously observed for I domains bearing engineered disulfide bridges to stabilize activated conformational states. Characterization of these mutants supports a greater understanding of the structure-function relationship of the αL I domain, and of the relationship between applied force and bioadhesion in a broader context.
Holzinger, Andreas; Zupan, Mario
2013-06-13
Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods that assist professional end users in the field of knowledge discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, so many diverse methods and methodologies are available that biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods can find it very difficult to select the most appropriate method for their particular research problem. A web application called KNODWAT (KNOwledge Discovery With Advanced Techniques) has been developed using Java on the Spring Framework 3.1, following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as MySQL Server. Twitter Bootstrap was used for frontend functionality and styling, along with jQuery for interactive user interface operations. The framework presented is user-centric, highly extensible and flexible. Since it enables methods to be tested on existing data to assess their suitability and performance, it is especially suitable for inexperienced biomedical researchers who are new to the field of knowledge discovery and data mining. For testing purposes, two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.
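The CART algorithm that KNODWAT wraps (via WEKA, in Java) grows a tree by repeatedly choosing the split that minimizes weighted Gini impurity. A minimal pure-Python sketch of that splitting rule, with made-up toy data rather than anything from the KNODWAT test sets:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum of squared class frequencies."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(x, y):
    """Best CART split of feature x for labels y: the threshold that
    minimizes the weighted Gini impurity of the two child nodes."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, gini(y))  # (threshold, impurity); start from no split
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue  # no threshold can separate equal feature values
        thr = (x[i] + x[i - 1]) / 2
        left, right = y[:i], y[i:]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (thr, score)
    return best

# A perfectly separable feature yields a zero-impurity split:
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
thr, imp = best_split(x, y)
print(thr, imp)  # → 6.5 0.0
```

CART recurses on the resulting child nodes until a stopping criterion is met; C4.5 differs mainly in using information gain ratio instead of Gini impurity.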
International Nuclear Information System (INIS)
Sánchez, Javier; Cortés-Hernández, Dora Alicia; Escobedo-Bocardo, José Concepción; Almanza-Robles, José Manuel; Reyes-Rodríguez, Pamela Yajaira; Jasso-Terán, Rosario Argentina; Bartolo-Pérez, Pascual; De-León-Prado, Laura Elena
2017-01-01
In this work, the synthesis of MnxGa1−xFe2O4 (x = 0–1) nanosized particles by the thermal decomposition method, using tetraethylene glycol (TEG) as the reaction medium, has been performed. The crystalline structure of the inverse spinel obtained in all cases was identified by X-ray diffraction (XRD). Vibrating sample magnetometry (VSM) was used to evaluate the magnetic properties of the ferrites and to demonstrate their superparamagnetic behavior and the increase in magnetization values due to the incorporation of Mn2+ ions into the FeGa2O4 structure. Transmission electron microscopy with energy dispersive spectroscopy (TEM-EDS) and X-ray photoelectron spectroscopy (XPS) were used to characterize the obtained magnetic nanoparticles (MNPs). These MNPs showed a nearly spherical morphology, an average particle size of 5.6±1.5 nm and a TEG coating layer on their surface. In all cases, the MNPs showed no response when submitted to an alternating magnetic field (AMF, 10.2 kA/m, 354 kHz) in magnetic induction tests. These results suggest that the synthesized nanoparticles are potential candidates for use in biomedical areas. - Highlights: • Superparamagnetic NPs of MnxGa1−xFe2O4 were synthesized by thermal decomposition. • Saturation magnetization of MnxGa1−xFe2O4 increases as the Mn content is increased. • Nanoparticles have a nanometric size of 5.6 nm and show no heating ability.
Directory of Open Access Journals (Sweden)
Shichun Xu
2017-09-01
We decompose the factors affecting China's energy-related air pollutant (NOx, PM2.5, and SO2) emission changes into different effects using structural decomposition analysis (SDA). We find that, from 2005 to 2012, investment increased NOx, PM2.5, and SO2 emissions by 14.04, 7.82 and 15.59 Mt respectively, and consumption increased these emissions by 11.09, 7.98, and 12.09 Mt respectively. Export and import slightly increased the emissions on the whole, but the rate of increase has slowed, possibly reflecting the shift in China's foreign trade structure. Energy intensity largely reduced NOx, PM2.5, and SO2 emissions, by 12.49, 14.33 and 23.06 Mt respectively, followed by emission efficiency, which reduced these emissions by 4.57, 9.08, and 17.25 Mt respectively. Input-output efficiency slightly reduced the emissions. At the sectoral and sub-sectoral levels, consumption is a major driving factor in agriculture and commerce, whereas investment is a major driving factor in transport, construction, and some industrial subsectors such as iron and steel, nonferrous metals, building materials, coking, and power and heating supply. Energy intensity increases emissions in transport, chemical products and manufacturing, but decreases emissions in all other sectors and subsectors. Policy implications arising from these results are discussed.
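The SDA in the study operates on full input-output tables with many factors; a minimal two-factor sketch with hypothetical numbers (not figures from the paper) shows the core idea, namely that the total emission change is split additively and exactly into per-factor effects:

```python
# Two-factor SDA sketch: emissions E = e * q, where e is emission
# intensity (Mt per unit output) and q is output. The change in E
# between years 0 and 1 is split into an intensity effect and an
# activity effect by averaging the two polar decompositions, so the
# two effects sum exactly to the total change.
def sda_two_factor(e0, e1, q0, q1):
    d_e, d_q = e1 - e0, q1 - q0
    intensity_effect = 0.5 * (d_e * q0 + d_e * q1)
    activity_effect = 0.5 * (e0 * d_q + e1 * d_q)
    return intensity_effect, activity_effect

# Hypothetical numbers: intensity falls while activity grows.
i_eff, a_eff = sda_two_factor(e0=2.0, e1=1.5, q0=10.0, q1=14.0)
total = 1.5 * 14.0 - 2.0 * 10.0
print(i_eff, a_eff, total)  # → -6.0 7.0 1.0
```

With more factors (emission efficiency, energy intensity, input-output structure, final demand, as in the study) the same idea applies, with matrix products in place of the scalar product e * q.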
International Nuclear Information System (INIS)
Vega, Jaime; Picasso, Gino; Lopez, Alcides; Aviles Felix, Luis
2013-01-01
In this work, magnetite-based nanoparticles have been synthesized by thermal decomposition via solvent-controlled synthesis in polyols, using triethylene glycol (TREG). The starting precursors were solutions of iron nitrate and iron acetylacetonate. The samples have been characterized by X-ray diffraction (XRD), N2 adsorption-desorption (BET model), scanning electron microscopy (SEM), thermogravimetric analysis (TGA), vibrating sample magnetometry (VSM) and Moessbauer spectroscopy. The XRD diffractograms revealed the predominant presence of spinel-like structural phases of magnetite in all samples. SEM micrographs showed morphological differences: the samples prepared from acetylacetonate presented a good dispersion of particles, whereas those prepared from nitrate showed small agglomerations. The BET isotherms of the samples depicted a mesoporous profile corresponding to type IV. The TGA thermogram showed two defined regions, corresponding to the vaporization of light polyol fractions and of TREG. Zero coercivity in the magnetization curves of the samples from the acetylacetonate precursor was observed by VSM, indicating superparamagnetic behavior. The Moessbauer spectra detected four doublet-like subspectra, due to four sites occupied by Fe in a paramagnetic or superparamagnetic state. (author)