Preface: Introductory Remarks: Linear Scaling Methods
Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.
2008-07-01
It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is O(N), or linear scaling, DFT, in which the computational effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3], but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3 to 6 September 2007. A noteworthy feature of the workshop was that it included a significant number of presentations on real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and onto large-scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is, non-linear-scaling) methods; this highlights the important question of crossover, that is, at what system size it becomes more efficient to use a linear-scaling method. As well as fundamental algorithmic questions, this brings up…
Polarized atomic orbitals for linear scaling methods
Berghold, Gerd; Parrinello, Michele; Hutter, Jürg
2002-02-01
We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent of the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
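As a hedged illustration of one ingredient named in this abstract, the Chebyshev polynomial expansion, the sketch below approximates a density matrix as the Fermi function of a toy tight-binding Hamiltonian. The Hamiltonian, inverse temperature, and all parameter values are invented for the example; a production linear scaling code would use sparse matrix algebra rather than dense NumPy arrays.

```python
import numpy as np

# Toy tight-binding Hamiltonian (invented for the example)
n = 8
H = -(np.eye(n, k=1) + np.eye(n, k=-1))

# Scale the spectrum of H into [-1, 1] so Chebyshev polynomials apply
evals = np.linalg.eigvalsh(H)
a = 0.5 * (evals.max() - evals.min())
b = 0.5 * (evals.max() + evals.min())
Hs = (H - b * np.eye(n)) / a

# Chebyshev coefficients of the Fermi function at chemical potential mu
mu, beta, K = 0.0, 10.0, 200
theta = np.pi * (np.arange(K) + 0.5) / K
x = np.cos(theta)                               # Chebyshev nodes
f = 1.0 / (1.0 + np.exp(beta * (a * x + b - mu)))
c = np.array([(2.0 / K) * np.sum(f * np.cos(j * theta)) for j in range(K)])
c[0] *= 0.5

# Density matrix P ~ f(H) via the matrix recurrence T_{k+1} = 2 Hs T_k - T_{k-1}
T_prev, T_curr = np.eye(n), Hs.copy()
P = c[0] * T_prev + c[1] * T_curr
for j in range(2, K):
    T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
    P = P + c[j] * T_curr

print(round(np.trace(P), 3))   # close to 4.0, the electron count at half filling
```

At half filling the trace of P comes out close to the electron count of 4 and P is nearly idempotent; only matrix-matrix products are needed, which is what makes the approach compatible with sparse linear scaling algebra.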
Linear-scaling quantum mechanical methods for excited states.
Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua
2012-05-21
The poor scaling of many existing quantum mechanical methods with respect to system size hinders their application to large systems. In this tutorial review, we focus on the latest research on linear-scaling, or O(N), quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories: time-domain and frequency-domain methods. The former solve the dynamics of the electronic system in real time, while the latter directly evaluate the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states; it has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of convergence problems. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and…
The linearly scaling 3D fragment method for large scale electronic structure calculations
Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)]
2009-07-01
The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, and the linear complexity is therefore generally given as an estimate. A linearization method, on the other hand, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
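For comparison, the Berlekamp-Massey algorithm referenced above computes the linear complexity of a given output sequence in O(N^2). A minimal GF(2) version (a standard textbook formulation, not code from this paper) might look like:

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1 ints), O(N^2)."""
    c = [1] + [0] * len(s)    # current connection polynomial C(x)
    b = [1] + [0] * len(s)    # copy of C(x) before the last length change
    L, m = 0, -1              # L = linear complexity found so far
    for n in range(len(s)):
        # discrepancy between the LFSR prediction and the actual bit
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d:                 # prediction failed: correct C(x) with B(x)
            t = c[:]
            shift = n - m
            for i in range(len(b) - shift):
                c[i + shift] ^= b[i]
            if 2 * L <= n:    # the complexity has to grow
                L, m, b = n + 1 - L, n, t
    return L

# One period of a degree-3 m-sequence has linear complexity 3
print(berlekamp_massey([1, 0, 0, 1, 0, 1, 1]))   # 3
```

Note that the result depends on the observed output bits, which is exactly the dependence on the PRNG's initial value that the abstract contrasts with the linearization method.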
Elongation cutoff technique armed with quantum fast multipole method for linear scaling.
Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko
2009-11-30
A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within a low-dimensional subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate
Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi
2017-11-05
We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies at significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.
Linear-scaling explicitly correlated treatment of solids: Periodic local MP2-F12 method
Usvyat, Denis, E-mail: denis.usvyat@chemie.uni-regensburg.de [Institute of Physical and Theoretical Chemistry, University of Regensburg, Universitätsstraße 31, D-93040 Regensburg (Germany)
2013-11-21
Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.
Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)
2013-02-07
Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.
Linear scaling of density functional algorithms
Stechel, E.B.; Feibelman, P.J.; Williams, A.R.
1993-01-01
An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. The authors discuss the conceptual issues involved, convergence properties, and scaling of the new algorithm.
Explorative methods in linear models
Høskuldsson, Agnar
2004-01-01
The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from the spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in a linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that depends on only a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in…
Andersen, O. Krogh
1975-01-01
…of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, and the latter are canonical in the sense that they depend neither on the energy nor on the cell volume…
Linear Methods for Image Interpolation
Pascal Getreuer
2011-01-01
We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
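The separability stressed in this abstract means an N-dimensional interpolation reduces to repeated one-dimensional passes. A minimal sketch (the helper names are illustrative, not the article's reference code): bilinear interpolation built from a 1D linear pass along each row, then one along the resulting column.

```python
def lerp1d(samples, x):
    """Linearly interpolate uniformly spaced samples at fractional index x,
    clamping to the valid range at the boundaries."""
    x = min(max(x, 0.0), len(samples) - 1.0)
    i = min(int(x), len(samples) - 2)
    t = x - i
    return (1.0 - t) * samples[i] + t * samples[i + 1]

def bilinear(img, x, y):
    """Separable bilinear interpolation: a 1D pass along each row in x,
    then one 1D pass along the resulting column in y."""
    col = [lerp1d(row, x) for row in img]
    return lerp1d(col, y)

img = [[0.0, 1.0],
       [2.0, 3.0]]
print(bilinear(img, 0.5, 0.5))   # 1.5, the mean of the four corner samples
```

The same two-pass structure extends to bicubic, spline, and sinc interpolation by swapping `lerp1d` for the corresponding 1D kernel, which is the point of focusing on separable methods.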
R. Talebitooti
In this paper the effect of quadratic and cubic non-linearities of the system consisting of the crankshaft and torsional vibration damper (TVD) is taken into account. The TVD consists of a non-linear elastomer material used for controlling the torsional vibration of the crankshaft. The method of multiple scales is used to solve the governing equations of the system. Meanwhile, the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces, including both inertia and gas forces, are simultaneously applied to the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved, considering the state space method. Then, the effects of the torsional damper as well as all corresponding parameters of the system are discussed.
Optimal control linear quadratic methods
Anderson, Brian D O
2007-01-01
This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
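In the discrete-time infinite-horizon case, the regulator theory summarized above reduces to iterating the Riccati recursion to a steady state and forming the optimal feedback gain. A hedged sketch for a toy double-integrator plant (the plant and cost matrices are invented for the example; a library routine for the algebraic Riccati equation would normally be used instead of bare value iteration):

```python
import numpy as np

# Toy double-integrator plant x_{k+1} = A x_k + B u_k (invented for the example)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # control cost

# Value-iterate the discrete Riccati recursion to its steady state P
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
    P = Q + A.T @ P @ (A - B @ K)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)        # the closed loop x_{k+1} = (A - B K) x_k is stable
```

The stopping point of the iteration corresponds to the infinite-time problem the text introduces via the Hamilton-Jacobi equation: P converges to the stationary solution of the Riccati equation, and the resulting constant gain K stabilizes the plant.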
Variational linear algebraic equations method
Moiseiwitsch, B.L.
1982-01-01
A modification of the linear algebraic equations method is described which ensures a variational bound on the phaseshifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)
Bayes linear statistics, theory & methods
Goldstein, Michael
2007-01-01
Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.
Cawkwell, M J; Niklasson, Anders M N
2012-10-07
Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
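The density matrix purification mentioned in this abstract can be illustrated with McWeeny's iteration P ← 3P² − 2P³, which drives a nearly idempotent matrix to a true projector. A toy dense sketch (the tight-binding Hamiltonian and the Palser-Manolopoulos-style initial guess are illustrative assumptions; the paper's implementation uses sparse matrix algebra with thresholded elements):

```python
import numpy as np

# Illustrative tight-binding chain (not the paper's system)
n = 6
H = -(np.eye(n, k=1) + np.eye(n, k=-1))
nocc = n // 2                     # number of occupied states

# Initial guess: map H linearly so that all eigenvalues of P lie in [0, 1],
# with occupied levels above 1/2 (eigvalsh is used here only to place the
# chemical potential mid-gap for the toy example)
evals = np.linalg.eigvalsh(H)
mu = 0.5 * (evals[nocc - 1] + evals[nocc])
lam = 0.5 * min(1.0 / (evals[-1] - mu), 1.0 / (mu - evals[0]))
P = lam * (mu * np.eye(n) - H) + 0.5 * np.eye(n)

# McWeeny purification: P <- 3 P^2 - 2 P^3 pushes eigenvalues to 0 or 1
for _ in range(30):
    P = 3 * P @ P - 2 * P @ P @ P

print(round(np.trace(P)))         # 3, the number of occupied states
```

Because each step is just matrix multiplication, truncating small elements of P keeps every operation O(N) for sufficiently sparse systems, which is the source of the linear scaling claimed above.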
Linear Algebraic Method for Non-Linear Map Analysis
Yu, L.; Nash, B.
2009-01-01
We present a newly developed method to analyze non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
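A hedged sketch of the linear-algebraic viewpoint (a textbook linearization, not the authors' Jordan-decomposition machinery): expand the Henon map about its fixed point and read the local dynamics off the eigenvalues of the Jacobian.

```python
import numpy as np

a, b = 1.4, 0.3   # classic Henon parameters (illustrative choice)

# Fixed point of x' = 1 - a*x^2 + y, y' = b*x:  a*x^2 + (1 - b)*x - 1 = 0
x_star = (-(1 - b) + np.sqrt((1 - b) ** 2 + 4 * a)) / (2 * a)
y_star = b * x_star

# Jacobian of the map, evaluated at the fixed point
J = np.array([[-2 * a * x_star, 1.0],
              [b, 0.0]])
eigs = np.linalg.eigvals(J)

# |det J| = b everywhere: the map contracts phase-space area by b per step
print(round(abs(np.linalg.det(J)), 10))    # 0.3
# One eigenvalue inside, one outside the unit circle: a saddle fixed point
print(sorted(np.abs(eigs)))
```

For a stable elliptic fixed point the eigenvalues would instead lie on the unit circle as exp(±2πi ν), and the phase ν is the tune whose amplitude dependence the abstract analyzes.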
A linear iterative unfolding method
László, András
2012-01-01
A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often the quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and it is a delicate problem in signal processing due to the well-known numerical ill-behavior of the task. Various methods have been invented which, given some assumptions on the initial probability distribution, try to regularize the unfolding problem. Most of these methods inevitably introduce bias into the estimate of the initial probability distribution. We propose a linear iterative method (motivated by the Neumann series / Landweber iteration known in functional analysis) which has the advantage that no assumptions on the initial probability distribution are needed, and the only regularization parameter is the stopping order of the iteration, which can be used to choose the best compromise between the introduced bias and the propagated statistical and systematic errors. The method is consistent: 'binwise' convergence to the initial probability distribution is proved in the absence of measurement errors under a quite general condition on the response function. This condition holds for practical applications such as convolutions, calorimeter response functions, momentum reconstruction response functions based on tracking in a magnetic field, etc. In the presence of measurement errors, explicit formulae for the propagation of the three important error terms are provided: the bias error (distance from the unknown to-be-reconstructed initial distribution at a finite iteration order), the statistical error, and the systematic error. A trade-off between these three error terms can be used to define an optimal iteration stopping criterion, and the errors can be estimated there. We provide a numerical C library for the implementation of the method, which incorporates automatic…
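A minimal sketch of a Landweber-type unfolding iteration of the kind described above (the response matrix, binning, and stopping order are invented for the example; the paper's C library is not reproduced here):

```python
import numpy as np

nbins = 20
u_true = np.exp(-0.5 * ((np.arange(nbins) - 10) / 3.0) ** 2)

# Response matrix: Gaussian bin-to-bin smearing, each column normalized to 1
# so that the total event count is conserved
R = np.zeros((nbins, nbins))
for j in range(nbins):
    w = np.exp(-0.5 * ((np.arange(nbins) - j) / 1.5) ** 2)
    R[:, j] = w / w.sum()

m = R @ u_true                          # measured (smeared) spectrum, noiseless

# Landweber iteration u <- u + tau R^T (m - R u); the stopping order is the
# only regularization parameter, as in the abstract
tau = 1.0 / np.linalg.norm(R, 2) ** 2   # step size guaranteeing convergence
u = np.zeros(nbins)
for _ in range(200):
    u = u + tau * R.T @ (m - R @ u)

rel_err = np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
print(rel_err)                          # small: the smooth peak is recovered
```

With noisy data, the error would first decrease and then grow again as the iteration starts amplifying noise in the poorly conditioned components, which is exactly the bias/statistical-error trade-off the stopping criterion is meant to balance.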
Frequency scaling of linear super-colliders
Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.
1986-06-01
The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefields are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength.
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-01
In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).
Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)
2016-03-07
Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed
Novel algorithm of large-scale simultaneous linear equations
Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L
2010-01-01
We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. These are based on the shifted COCG (conjugate orthogonal conjugate gradient) method built on the Krylov subspace; the most important issues for applications are the shifted equation and the seed-switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
A convex optimization approach for solving large scale linear systems
Debora Cores
2017-01-01
The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving a large-scale linear system of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
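The spectral-step idea behind SPG can be sketched on the unconstrained least-squares surrogate f(x) = 0.5*||Ax - b||^2. This is a hypothetical toy instance: the projection onto constraints and the nonmonotone line search of the full SPG method are omitted, leaving only the Barzilai-Borwein (spectral) step length:

```python
# Barzilai-Borwein ("spectral step") gradient sketch for the convex
# least-squares function f(x) = 0.5*||Ax - b||^2. The rectangular,
# non-symmetric system below is an illustrative toy instance.

A = [[3.0, 1.0],
     [1.0, -2.0],
     [0.0, 1.0]]
b = [5.0, -3.0, 2.0]      # chosen so the exact solution is x = [1, 2]

def grad(x):
    # gradient A^T (A x - b)
    r = [sum(row[j] * x[j] for j in range(len(x))) - bi for row, bi in zip(A, b)]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

x = [0.0, 0.0]
g = grad(x)
alpha = 0.01                                   # conservative first step
for _ in range(100):
    x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
    g_new = grad(x_new)
    s = [a1 - a0 for a1, a0 in zip(x_new, x)]
    y = [c1 - c0 for c1, c0 in zip(g_new, g)]
    sy = sum(si * yi for si, yi in zip(s, y))
    if sy > 1e-12:
        alpha = sum(si * si for si in s) / sy  # spectral (BB1) step length
    x, g = x_new, g_new
print([round(v, 6) for v in x])                # converges to [1, 2]
```

The spectral step adapts to the local curvature without any matrix factorization, which is what makes the approach attractive for large systems with indefinite symmetric part.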
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
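Of the three categories, the affine-scaling family is the simplest to sketch. Below is a minimal primal affine-scaling iteration for min c^T x subject to Ax = b, x > 0, run on an illustrative one-constraint LP; the step factor gamma and iteration count are arbitrary teaching choices, not recommendations from the review:

```python
# Primal affine-scaling sketch for min c.x s.t. Ax = b, x > 0.
# The LP instance, gamma and the iteration count are illustrative.

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            f = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= f * aug[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

def affine_scaling(A, b, c, x, gamma=0.9, iters=200):
    # b is maintained implicitly: x starts feasible and every step keeps Ax = b.
    m, n = len(A), len(c)
    for _ in range(iters):
        d2 = [xi * xi for xi in x]                  # D^2 = diag(x)^2
        # Normal equations (A D^2 A^T) w = A D^2 c for the dual estimate w.
        M = [[sum(A[i][k] * d2[k] * A[j][k] for k in range(n)) for j in range(m)]
             for i in range(m)]
        rhs = [sum(A[i][k] * d2[k] * c[k] for k in range(n)) for i in range(m)]
        w = solve(M, rhs)
        r = [c[k] - sum(A[i][k] * w[i] for i in range(m)) for k in range(n)]
        dx = [-d2[k] * r[k] for k in range(n)]      # affine-scaling direction
        ratios = [x[k] / -dx[k] for k in range(n) if dx[k] < 0]
        if not ratios:
            break                                   # (near-)optimal
        alpha = gamma * min(ratios)                 # stay strictly interior
        x = [x[k] + alpha * dx[k] for k in range(n)]
    return x

# max x1 + 2*x2  s.t.  x1 + x2 <= 1   ->   min -x1 - 2*x2, x1 + x2 + x3 = 1
A, b, c = [[1.0, 1.0, 1.0]], [1.0], [-1.0, -2.0, 0.0]
x_opt = affine_scaling(A, b, c, x=[1/3, 1/3, 1/3])
print([round(v, 4) for v in x_opt])                 # approaches [0, 1, 0]
```

Each iteration rescales the variables so the current iterate sits at the center of the positive orthant, takes a projected steepest-descent step, and backs off by gamma so the iterate stays strictly interior.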
Graph-based linear scaling electronic structure theory
Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Yi-hua Zhong
2013-01-01
Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, named a revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its termination condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. Algorithm analysis and an example study show that proper choices of the safety factor parameter, the accuracy parameter, and the initial interior point may reduce the number of iterations, and that these can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
Two linearization methods for atmospheric remote sensing
Doicu, A.; Trautmann, T.
2009-01-01
We present two linearization methods for a pseudo-spherical atmosphere and general viewing geometries. The first approach is based on an analytical linearization of the discrete ordinate method with matrix exponential and incorporates two models for the matrix exponential calculation: the matrix eigenvalue method and the Padé approximation. The second method, referred to as the forward-adjoint approach, is based on the adjoint radiative transfer for a pseudo-spherical atmosphere. We provide a compact description of the proposed methods as well as a numerical analysis of their accuracy and efficiency.
Supervised scale-regularized linear convolutionary filters
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
J.F. Sturm; J. Zhang (Shuzhong)
1996-01-01
In this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;
Sparsity Prevention Pivoting Method for Linear Programming
Li, Peiqiang; Li, Qiyuan; Li, Canbing
2018-01-01
When the simplex algorithm is used to solve a linear programming problem with a sparse matrix, many zero-length calculation steps can occur, and even iterative cycling may appear. To deal with this problem, a new pivoting method is proposed in this paper. The principle of this method is to avoid choosing a row whose element in the b vector is zero as the pivot row, keeping the linear programming matrix dense and ensuring that most subsequent steps improve the value of the objective function. One step following this principle is inserted to reselect the pivot element in the existing linear programming algorithm. Both the conditions for inserting this step and the maximum number of allowed insertion steps are determined. In the case study, taking several linear programming problems as examples, the results...
The linearization method in hydrodynamical stability theory
Yudovich, V I
1989-01-01
This book presents the theory of the linearization method as applied to the problem of steady-state and periodic motions of continuous media. The author proves infinite-dimensional analogues of Lyapunov's theorems on stability, instability, and conditional stability for a large class of continuous media. In addition, semigroup properties for the linearized Navier-Stokes equations in the case of an incompressible fluid are studied, and coercivity inequalities and completeness of a system of small oscillations are proved.
Efficient decomposition and linearization methods for the stochastic transportation problem
Holmberg, K.
1993-01-01
The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
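The Frank-Wolfe method compared above is easy to sketch on a simpler stand-in: minimizing a convex quadratic over the probability simplex, where the linearized subproblem is solved by picking the best vertex. The transportation network structure of the actual problem is not modeled here; the target point and tolerances are illustrative:

```python
# Frank-Wolfe sketch: minimize 0.5*||x - t||^2 over the probability simplex.
# Each iteration solves the linearized subproblem (pick the vertex with the
# smallest gradient component) and does an exact line search, valid because
# the objective is quadratic with identity Hessian.

def fw(t, iters=100):
    n = len(t)
    x = [1.0] + [0.0] * (n - 1)            # start at a vertex of the simplex
    for _ in range(iters):
        g = [xi - ti for xi, ti in zip(x, t)]          # gradient of the quadratic
        j = min(range(n), key=lambda k: g[k])          # LP subproblem: best vertex
        d = [-xi for xi in x]
        d[j] += 1.0                                    # direction s - x
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break
        gamma = max(0.0, min(1.0, -sum(gi * di for gi, di in zip(g, d)) / dd))
        x = [xi + gamma * di for xi, di in zip(x, d)]
    return x

x_hat = fw([0.8, 0.6, -0.2])
print([round(v, 6) for v in x_hat])        # the Euclidean projection of t onto the simplex
```

Because every iterate is a convex combination of simplex vertices, the method never needs a projection step, which is exactly why Frank-Wolfe-type schemes suit constrained transportation polytopes.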
Non-linear M -sequences Generation Method
Z. R. Garifullina
2011-06-01
The article deals with a new method for modeling a pseudorandom number generator based on R-blocks. The gist of the method is the replacement of a multi-digit XOR element by a stochastic adder in a parallel binary linear feedback shift register scheme.
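For background, a plain (non-stochastic) Fibonacci LFSR with a primitive tap set already generates a maximal-length M-sequence; the stochastic-adder modification described in the article is not reproduced in this sketch:

```python
# Fibonacci LFSR sketch. With taps [4, 3] on a 4-bit register (a primitive
# feedback configuration), the state cycles through all 2^4 - 1 = 15
# nonzero states, i.e. the output is an m-sequence.

def lfsr_states(taps, nbits, seed=1):
    state, seen = seed, []
    while True:
        seen.append(state)
        fb = 0
        for t in taps:                        # XOR of the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        if state == seed:
            return seen

period = len(lfsr_states([4, 3], 4))
print(period)                                 # 15 for a maximal-length sequence
```

Replacing the XOR feedback with a stochastic adder, as the article proposes, perturbs exactly this linear recurrence to obtain non-linear sequences.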
Uzawa method for fuzzy linear system
Ke Wang
2013-01-01
An Uzawa method is presented for solving fuzzy linear systems whose coefficient matrix is crisp and whose right-hand side column is an arbitrary fuzzy number vector. The explicit iterative scheme is given. The convergence is analyzed with convergence theorems and the optimal parameter is obtained. Numerical examples are given to illustrate the procedure and show the effectiveness and efficiency of the method.
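The crisp core of such schemes is the classical Uzawa iteration for saddle-point systems. A minimal sketch with a diagonal A block and illustrative values; the fuzzy right-hand-side machinery of the paper is omitted:

```python
# Classical Uzawa iteration for the saddle-point system
#   [A  B ][u]   [f]
#   [B^T 0][p] = [g]
# with a diagonal A so the inner solve is trivial. The instance and the
# relaxation parameter omega are illustrative choices.

def uzawa(Adiag, B, f, g, omega=0.8, iters=100):
    n, m = len(Adiag), len(g)
    p = [0.0] * m
    u = [0.0] * n
    for _ in range(iters):
        # u-update: solve A u = f - B p (A diagonal here)
        u = [(f[i] - sum(B[i][j] * p[j] for j in range(m))) / Adiag[i]
             for i in range(n)]
        # p-update: ascent on the dual using the residual B^T u - g
        res = [sum(B[i][j] * u[i] for i in range(n)) - g[j] for j in range(m)]
        p = [p[j] + omega * res[j] for j in range(m)]
    return u, p

u, p = uzawa([2.0, 2.0], [[1.0], [1.0]], f=[5.0, 7.0], g=[3.0])
print(u, p)   # converges to u = [1, 2], p = [3]
```

Convergence holds for 0 < omega < 2 / lambda_max(B^T A^{-1} B), which is the role of the "optimal parameter" analyzed in the abstract.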
Troen, Ib; Bechmann, Andreas; Kelly, Mark C.
2014-01-01
Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location, we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP and the specifics of the “classical” model setup and the new setup allowing the use of the CFD computation engine. We discuss some known...
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Decentralised stabilising controllers for a class of large-scale linear ...
subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Recent development of linear scaling quantum theories in GAMESS
Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)
2003-06-01
Linear scaling quantum theories are reviewed, especially focusing on the method adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Rudenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error ε. In addition, a new parallel FMM algorithm requiring virtually no inter-node communication is suggested, which is suitable for the parallel construction of Fock matrices in electronic structure calculations.
The simplex method of linear programming
Ficken, Frederick A
1961-01-01
This concise but detailed and thorough treatment discusses the rudiments of the well-known simplex method for solving optimization problems in linear programming. Geared toward undergraduate students, the approach offers sufficient material for readers without a strong background in linear algebra. Many different kinds of problems further enrich the presentation. The text begins with examinations of the allocation problem, matrix notation for dual problems, feasibility, and theorems on duality and existence. Subsequent chapters address convex sets and boundedness, the prepared problem and boun
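A compact tableau implementation conveys the rudiments: start from the slack basis, repeatedly choose the most negative reduced cost (Dantzig's rule), apply the ratio test, and pivot. The instance is the standard textbook maximization example; this is a teaching sketch without anti-cycling safeguards:

```python
# Tableau simplex for max c.x s.t. Ax <= b, x >= 0 with b >= 0,
# so the slack basis is immediately feasible. Teaching sketch only.

def simplex(c, A, b):
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; objective row z: [-c | 0 | 0]
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    z = [-ci for ci in c] + [0.0] * (m + 1)
    basis = list(range(n, n + m))
    while True:
        col = min(range(n + m), key=lambda j: z[j])    # entering column
        if z[col] >= -1e-9:
            break                                      # optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, row = min(ratios)                           # leaving row (ratio test)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m):
            if i != row and T[i][col] != 0.0:
                f = T[i][col]
                T[i] = [a - f * q for a, q in zip(T[i], T[row])]
        f = z[col]
        z = [a - f * q for a, q in zip(z, T[row])]     # z[-1] holds the objective
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, z[-1]

# max 3*x1 + 5*x2  s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18
x, opt = simplex([3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]],
                 [4.0, 12.0, 18.0])
print(x, opt)   # optimum at x = [2, 6], value 36
```

The geometry matches the book's presentation: each pivot moves along an edge of the feasible polytope to an adjacent vertex with a better objective value.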
Penalized Estimation in Large-Scale Generalized Linear Array Models
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs
Kiwiel, K. C.
1998-01-01
We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).
Sodium flow rate measurement method of annular linear induction pump
Araseki, Hideo
2011-01-01
This report describes a method for measuring sodium flow rate of annular linear induction pumps arranged in parallel and its verification result obtained through an experiment and a numerical analysis. In the method, the leaked magnetic field is measured with measuring coils at the stator end on the outlet side and is correlated with the sodium flow rate. The experimental data and the numerical result indicate that the leaked magnetic field at the stator edge keeps almost constant when the sodium flow rate changes and that the leaked magnetic field change arising from the flow rate change is small compared with the overall leaked magnetic field. It is shown that the correlation between the leaked magnetic field and the sodium flow rate is almost linear due to this feature of the leaked magnetic field, which indicates the applicability of the method to small-scale annular linear induction pumps. (author)
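Since the reported correlation between the leaked magnetic field and the flow rate is almost linear, the calibration reduces to fitting a two-parameter line by least squares. The coil readings below are synthetic stand-ins, not measured data from the report:

```python
# Two-parameter linear calibration sketch: flow rate Q = a * V + c,
# where V is the leaked-field coil reading. Data are synthetic.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx              # slope, intercept

signal = [0.10, 0.20, 0.30, 0.40]      # leaked-field coil reading (a.u.)
flow   = [1.05, 2.00, 2.95, 4.00]      # sodium flow rate (illustrative units)
a, c = fit_line(signal, flow)
q_est = a * 0.25 + c                   # flow estimate for a new coil reading
print(round(a, 3), round(q_est, 3))
```

Once the slope and intercept are fixed during commissioning, each pump's flow rate can be monitored from its own coil reading alone, which is the appeal of the method for parallel pump arrangements.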
Common Nearly Best Linear Estimates of Location and Scale ...
Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...
New nonlinear methods for linear transport calculations
Adams, M.L.
1993-01-01
We present a new family of methods for the numerical solution of the linear transport equation. With these methods an iteration consists of an S_N sweep followed by an S_2-like calculation. We show, by analysis as well as numerical results, that iterative convergence is always rapid. We show that this rapid convergence does not depend on a consistent discretization of the S_2-like equations - they can be discretized independently from the S_N equations. We show further that independent discretizations can offer significant advantages over consistent ones. In particular, we find that in a wide range of problems, an accurate discretization of the S_2-like equation can be combined with a crude discretization of the S_N equations to produce an accurate S_N answer. We demonstrate this by analysis as well as numerical results. (orig.)
Turbulence Spreading into Linearly Stable Zone and Transport Scaling
Hahm, T.S.; Diamond, P.H.; Lin, Z.; Itoh, K.; Itoh, S.-I.
2003-01-01
We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation, and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on rho* = rho_i/a into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region.
Sodium flow rate measurement method of annular linear induction pumps
Araseki, Hideo; Kirillov, Igor R.; Preslitsky, Gennady V.
2012-01-01
Highlights: ► We found a new method of flow rate monitoring of electromagnetic pump. ► The method is very simple and does not require a large space. ► The method was verified with an experiment and a numerical analysis. ► The experimental data and the numerical results are in good agreement. - Abstract: The present paper proposes a method for measuring the sodium flow rate of annular linear induction pumps. The feature of the method lies in measuring the leaked magnetic field with measuring coils near the stator end on the outlet side and in correlating it with the sodium flow rate. This method is verified through an experiment and a numerical analysis. The data obtained in the experiment reveal that the correlation between the leaked magnetic field and the sodium flow rate is almost linear. The result of the numerical analysis agrees with the experimental data. The present method will be particularly effective for sodium flow rate monitoring of each one of plural annular linear induction pumps arranged in parallel in a vessel which forms a large-scale pump unit.
Polarization properties of linearly polarized parabolic scaling Bessel beams
Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com
2016-10-07
The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.
On Numerical Stability in Large Scale Linear Algebraic Computations
Strakoš, Zdeněk; Liesen, J.
2005-01-01
Vol. 85, No. 5 (2005), pp. 307-325. ISSN 0044-2267. R&D Projects: GA AV ČR 1ET400300415. Institutional research plan: CEZ:AV0Z10300504. Keywords: linear algebraic systems; eigenvalue problems; convergence; numerical stability; backward error; accuracy; Lanczos method; conjugate gradient method; GMRES method. Subject RIV: BA - General Mathematics. Impact factor: 0.351, year: 2005
Studying the method of linearization of exponential calibration curves
Bunzh, Z.A.
1989-01-01
The results of a study of the method for linearizing exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.
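The linearization itself is the standard log transform: a calibration curve y = A*exp(b*x) becomes a straight line in (x, ln y), to which an ordinary least-squares fit can be applied. A sketch with noiseless synthetic data (the parameter values are illustrative, not taken from the study):

```python
# Linearizing an exponential calibration curve y = A*exp(b*x):
# fit a straight line to (x, ln y) and exponentiate the intercept.

import math

def linearize_exp(xs, ys):
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ly) / n
    b = (sum((x - mx) * (l - my) for x, l in zip(xs, ly))
         / sum((x - mx) ** 2 for x in xs))
    A = math.exp(my - b * mx)
    return A, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # noiseless synthetic curve
A, b = linearize_exp(xs, ys)
print(round(A, 6), round(b, 6))              # recovers A = 2, b = 0.7
```

With noisy counts the log transform also reweights the errors, which is why the study's comparison against piecewise-linear approximation and power-series expansion is of practical interest.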
A logic circuit for solving linear function by digital method
Ma Yonghe
1986-01-01
A mathematical method for determining the linear relation of a physical quantity with radiation intensity is described. A logic circuit has been designed for solving linear functions by a digital method. Some applications and the circuit function are discussed.
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G. [Stanford Univ., CA (United States), Dept. of Operations Research; Technische Univ. Vienna (Austria), Inst. fuer Energiewirtschaft]
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
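The two-stage recourse structure described above can be illustrated with a toy model. The sketch below (Python with NumPy/SciPy; the costs and demand distribution are invented for illustration) solves the sampled extensive form of a one-variable first-stage problem directly with `scipy.optimize.linprog`, rather than with the decomposition and importance-sampling machinery of the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S = 200                                   # sampled demand scenarios
demand = rng.uniform(50.0, 150.0, size=S)

c_prod, c_recourse = 1.0, 3.0             # recourse supply costs 3x planned supply

# Decision vector z = [x, y_1, ..., y_S]: first-stage supply x,
# plus a per-scenario recourse purchase y_s (expected cost averaged over S).
cost = np.concatenate(([c_prod], np.full(S, c_recourse / S)))

# Feasibility per scenario: x + y_s >= d_s, written as -x - y_s <= -d_s.
A_ub = np.zeros((S, S + 1))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
b_ub = -demand

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x_opt = res.x[0]
print(f"optimal first-stage supply: {x_opt:.1f}")   # near the 2/3 demand quantile
```

The extensive form grows linearly with the number of scenarios, which is exactly why the decomposition and sampling techniques the paper develops matter for realistic problem sizes.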
Linear Polarization Properties of Parsec-Scale AGN Jets
Alexander B. Pushkarev
2017-12-01
We used 15 GHz multi-epoch Very Long Baseline Array (VLBA) polarization-sensitive observations of 484 sources within the time interval 1996–2016 from the MOJAVE program, and also from the NRAO data archive. We have analyzed the linear polarization characteristics of the compact core features and regions downstream, and their changes along and across the parsec-scale active galactic nuclei (AGN) jets. We detected a significant increase of fractional polarization with distance from the radio core along the jet as well as towards the jet edges. Compared to quasars, BL Lacs have a higher degree of polarization and exhibit more stable electric vector position angles (EVPAs) in their core features and a better alignment of the EVPAs with the local jet direction. The latter is accompanied by a higher degree of linear polarization, suggesting that compact bright jet features might be strong transverse shocks, which enhance magnetic field regularity by compression.
Design techniques for large scale linear measurement systems
Candy, J.V.
1979-03-01
Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented
Lovejoy, S.; del Rio Amador, L.; Hébert, R.
2015-03-01
At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high-frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low-frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill, with no adjustable parameters, show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare
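The low-frequency memory that SLIM exploits can be made concrete: the optimal linear predictor of fGn follows directly from its autocovariance. A minimal sketch (Python/NumPy; the lag depth `m` and Hurst values are illustrative, and this is plain Yule-Walker prediction, not the innovations formulation of the paper):

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def one_step_coeffs(H, m):
    """Coefficients of the optimal linear one-step predictor from the last m
    values, obtained by solving the Yule-Walker (normal) equations."""
    lags = np.arange(m)
    Gamma = fgn_autocov(lags[:, None] - lags[None, :], H)   # Toeplitz covariance
    gamma = fgn_autocov(np.arange(1, m + 1), H)
    return np.linalg.solve(Gamma, gamma)

phi = one_step_coeffs(H=0.9, m=20)        # persistent, macroweather-like case
phi_white = one_step_coeffs(H=0.5, m=20)  # H = 0.5 is white noise: no memory
```

For H = 0.5 every predictor coefficient vanishes (no memory), while for H near 1 the weights decay slowly, which is the "huge memory" referred to above.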
Reconnection Scaling Experiment (RSX): Magnetic Reconnection in Linear Geometry
Intrator, T.; Sovinec, C.; Begay, D.; Wurden, G.; Furno, I.; Werley, C.; Fisher, M.; Vermare, L.; Fienup, W.
2001-10-01
The linear Reconnection Scaling Experiment (RSX) at LANL is a new experiment that can create MHD-relevant plasmas to look at the physics of magnetic reconnection. This experiment can scale many relevant parameters because the guns that generate the plasma and current channels do not depend on equilibrium or force balance for startup. We describe the experiment and initial electrostatic and magnetic probe data. Two parallel current channels sweep down a long plasma column, and probe data accumulated over many shots give 3D movies of magnetic reconnection. Our first data aim to define an operating regime free from kink instabilities that might otherwise confuse the data and shot repeatability. We compare this with two-fluid MHD NIMROD simulations of the single-current-channel kink stability boundary for a variety of experimental conditions.
Time Scale in Least Square Method
Özgür Yeniay
2014-01-01
The study of dynamic equations on time scales is a new area in mathematics. Time scales try to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and the nabla derivative. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. When such a situation occurs, the total vertical deviation between the regression equations and the observed values of the forward and backward jump operators is divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory will offer a new vision in least squares, especially when the assumptions of linear regression are violated.
Methods in half-linear asymptotic theory
Řehák, Pavel
2016-01-01
Roč. 2016, Č. 267 (2016), s. 1-27 ISSN 1072-6691 Institutional support: RVO:67985840 Keywords : half-linear differential equation * nonoscillatory solution * regular variation Subject RIV: BA - General Mathematics Impact factor: 0.954, year: 2016 http://ejde.math.txstate.edu/Volumes/2016/267/abstr.html
Methods in half-linear asymptotic theory
Pavel Rehak
2016-10-01
We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation $$ (r(t)|y'|^{\alpha-1}\hbox{sgn}\, y')'=p(t)|y|^{\alpha-1}\hbox{sgn}\, y, $$ where $r(t)$ and $p(t)$ are positive continuous functions on $[a,\infty)$, $\alpha\in(1,\infty)$. The aim of this article is twofold. On the one hand, we show applications of a wide variety of tools, like the Karamata theory of regular variation, the de Haan theory, the Riccati technique, comparison theorems, the reciprocity principle, a certain transformation of the dependent variable, and principal solutions. On the other hand, we solve open problems posed in the literature and generalize existing results. Most of our observations are new also in the linear case.
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
Offset linear scaling for H-mode confinement
Miura, Yukitoshi; Tamai, Hiroshi; Suzuki, Norio; Mori, Masahiro; Matsuda, Toshiaki; Maeda, Hikosuke; Takizuka, Tomonori; Itoh, Sanae; Itoh, Kimitaka.
1992-01-01
An offset linear scaling for the H-mode confinement time is examined based on single-parameter scans on the JFT-2M experiment. A regression study is done for various devices with open divertor configuration such as JET, DIII-D and JFT-2M. The scaling law for the thermal energy is given in MKSA units as W_th = 0.0046 R^1.9 I_P^1.1 B_T^0.91 √A + 2.9×10^-8 I_P^1.0 R^0.87 √A P, where R is the major radius, I_P is the plasma current, B_T is the toroidal magnetic field, A is the average mass number of plasma and neutral beam particles, and P is the heating power. This fitting has a root mean square error (RMSE) similar to that of the power-law scaling. The result is also compared with the H-mode in other configurations. The W_th of closed divertor H-mode on ASDEX shows slightly better values than that of open divertor H-mode. (author)
A simplified density matrix minimization for linear scaling self-consistent field theory
Challacombe, M.
1999-01-01
A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained megaflop rates as high as 50% of theoretical, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
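The McWeeny purification step mentioned above has a compact form: iterating P ← 3P² − 2P³ drives the eigenvalues of a near-idempotent matrix to 0 or 1. A minimal dense-matrix sketch (Python/NumPy; the matrix size and noise level are illustrative, and a real linear-scaling code would use the blocked sparse algebra described in the abstract):

```python
import numpy as np

def mcweeny_purify(P, steps=8):
    """Iterate P <- 3 P^2 - 2 P^3, which drives the eigenvalues of a
    near-idempotent symmetric matrix to 0 or 1 (quadratic convergence)."""
    for _ in range(steps):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P
    return P

# Build a nearly idempotent matrix: a rank-2 orthogonal projector plus
# a small symmetric perturbation (size and noise amplitude are arbitrary).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
noise = rng.standard_normal((6, 6))
P0 = Q[:, :2] @ Q[:, :2].T + 1e-2 * (noise + noise.T) / 2
P = mcweeny_purify(P0)
err = np.linalg.norm(P @ P - P)   # idempotency error after purification
```

Note that the trace is driven to the rank of the underlying projector (the electron count in the SCF setting), which is what makes purification safe to apply after an approximate minimization step.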
Scaling laws for e+/e- linear colliders
Delahaye, J.P.; Guignard, G.; Raubenheimer, T.; Wilson, I.
1999-01-01
Design studies of a future TeV e+e- Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields with frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high-frequency structures becomes very advantageous because it enables high accelerating fields to be obtained, which reduces the overall length and consequently the total cost of the linac. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Parameter Scaling in Non-Linear Microwave Tomography
Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar
2012-01-01
Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions... introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.
Dongxu Ren
2016-04-01
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity from different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, the static positioning error, and the lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeated exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed: it easily realizes a pitch accuracy of 43 nm at any 10 locations over 1 m, and the whole-length accuracy of the linear scale is less than 1 µm/m.
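The averaging effect the method relies on can be demonstrated numerically: superposing exposures stepped by equal fractions of the error period cancels a periodic mask error. A toy sketch (Python/NumPy; the amplitude, period, and exposure count are invented, not the paper's parameters):

```python
import numpy as np

A_err, period = 0.002, 100.0   # sine-wave position error: 2 nm amplitude, 100 um period
x = np.linspace(0.0, 400.0, 2001)          # positions along the scale (um)
single = A_err * np.sin(2 * np.pi * x / period)   # error of a single exposure

n = 8                          # expose n times, stepping the mask by period / n
avg = np.mean(
    [A_err * np.sin(2 * np.pi * (x + k * period / n) / period) for k in range(n)],
    axis=0,
)
print(np.max(np.abs(single)), np.max(np.abs(avg)))  # averaging cancels the error
```

The n equally spaced phases of a sinusoid sum to zero exactly, so the averaged error vanishes to machine precision; real masks have additional non-periodic error terms, which is why the paper also models positioning and alignment errors.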
Adaptive discontinuous Galerkin methods for non-linear reactive flows
Uzunca, Murat
2016-01-01
The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction-dominated non-stationary semi-linear advection-diffusion-reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and the discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by ADR equations, the methods described in this book are of high importance in many areas of Geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.
Convergence of hybrid methods for solving non-linear partial ...
This paper is concerned with the numerical solution and convergence analysis of non-linear partial differential equations using a hybrid method. The solution technique involves discretizing the non-linear system of PDE to obtain a corresponding non-linear system of algebraic difference equations to be solved at each time ...
A Proposed Method for Solving Fuzzy System of Linear Equations
Reza Kargar
2014-01-01
This paper proposes a new method for solving fuzzy systems of linear equations with a crisp coefficient matrix and a fuzzy or interval right-hand side. Some conditions for the existence of a fuzzy or interval solution of an m×n linear system are derived, and a practical algorithm is introduced in detail. The method is based on a linear programming problem. Finally, the applicability of the proposed method is illustrated by some numerical examples.
Mathematical methods linear algebra normed spaces distributions integration
Korevaar, Jacob
1968-01-01
Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector
Error analysis of dimensionless scaling experiments with multiple points using linear regression
Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.
2010-01-01
A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
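The propagation of uncertainty into fitted scaling exponents reduces, in the simplest case, to the standard errors of an ordinary least-squares line fit. A generic sketch (Python/NumPy; the data are synthetic, and this is textbook error propagation rather than the specific multi-machine analysis of the letter):

```python
import numpy as np

def fit_with_errors(x, y):
    """Ordinary least-squares line fit returning (slope, intercept) and
    their standard errors, propagated from the residual variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    A = np.column_stack([x, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s2 = resid @ resid / (n - 2)            # unbiased residual variance
    cov = s2 * np.linalg.inv(A.T @ A)       # parameter covariance matrix
    return coef, np.sqrt(np.diag(cov))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.05, -0.03, 0.02, -0.04, 0.01])
coef, se = fit_with_errors(x, y)            # slope near 2, intercept near 1
```

Because the slope variance scales with 1/Σ(x − x̄)², adding points at the ends of the scanned range shrinks the estimated error faster than adding them in the middle, which is the letter's practical recommendation.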
Small-scale quantum information processing with linear optics
Bergou, J.A.; Steinberg, A.M.; Mohseni, M.
2005-01-01
Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, efficient (scalable) linear-optical quantum computation proposals rely on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states, with a success rate of 55%, to be compared with the 25% maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35% to essentially its pre
A feasible DY conjugate gradient method for linear equality constraints
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method to linear equality constrained optimization problems. It can be applied to solve large linear equality constrained problems due to its lower storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
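The two ingredients, the Dai-Yuan update β = ‖g_{k+1}‖²/(d_kᵀ(g_{k+1} − g_k)) and feasibility via restriction to the null space of the constraints, can be sketched for a quadratic objective. This toy version (Python/NumPy) uses an explicit dense projector and exact line search, which the paper's large-scale, low-storage setting would avoid; it illustrates the idea, not the authors' algorithm:

```python
import numpy as np

def projected_dy_cg(Q, b, A, c, iters=50):
    """Minimize f(x) = 0.5 x^T Q x - b^T x subject to A x = c, using
    Dai-Yuan conjugate gradients restricted to the feasible subspace."""
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)
    x = A.T @ np.linalg.solve(A @ A.T, c)               # a feasible starting point
    g = P @ (Q @ x - b)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        alpha = -(g @ d) / (d @ Q @ d)                  # exact line search
        x = x + alpha * d
        g_new = P @ (Q @ x - b)
        beta = (g_new @ g_new) / (d @ (g_new - g))      # Dai-Yuan beta
        d = -g_new + beta * d
        g = g_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
Q = M @ M.T + 5.0 * np.eye(5)    # symmetric positive definite objective
b = rng.standard_normal(5)
A = rng.standard_normal((2, 5))  # two linear equality constraints
c = rng.standard_normal(2)
x = projected_dy_cg(Q, b, A, c)
```

Every search direction lies in null(A), so each iterate satisfies Ax = c exactly; at convergence the projected gradient vanishes, which is the first-order optimality condition on the constraint manifold.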
Fast linear method of illumination classification
Cooper, Ted J.; Baqai, Farhan A.
2003-01-01
We present a simple method for estimating the scene illuminant for images obtained by a Digital Still Camera (DSC). The proposed method utilizes basis vectors obtained from known memory color reflectance to identify the memory color objects in the image. Once the memory color pixels are identified, we use the ratios of the red/green and blue/green to determine the most likely illuminant in the image. The critical part of the method is to estimate the smallest set of basis vectors that closely represent the memory color reflectances. Basis vectors obtained from both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are used. We will show that only two ICA basis vectors are needed to get an acceptable estimate.
Strong-stability-preserving additive linear multistep methods
Hadjimichael, Yiannis
2018-02-20
The analysis of strong-stability-preserving (SSP) linear multistep methods is extended to semi-discretized problems for which different terms on the right-hand side satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain larger monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding nonadditive SSP linear multistep methods.
Linear-scaling evaluation of the local energy in quantum Monte Carlo
Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester, William A. Jr.
2006-01-01
For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size
Approximate Method for Solving the Linear Fuzzy Delay Differential Equations
S. Narayanamoorthy
2015-01-01
We propose an algorithm of the approximate method to solve linear fuzzy delay differential equations using the Adomian decomposition method. The detailed algorithm of the approach is provided. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the proper features of the proposed method, a numerical example is illustrated.
Non-linear programming method in optimization of fast reactors
Pavelesku, M.; Dumitresku, Kh.; Adam, S.
1975-01-01
The application of non-linear programming methods to the optimization of the distribution of nuclear materials in a fast reactor is discussed. The programming task is composed on the basis of the reactor calculation, which depends on the fuel distribution strategy. As an illustration of the application of this method, the solution of a simple example is given. The solution of the non-linear program is obtained by the numerical method SUMT. (I.T.)
Ravi Kanth, A.S.V.; Aruna, K.
2009-01-01
In this paper, we propose a reliable algorithm to develop exact and approximate solutions for the linear and nonlinear Schroedinger equations. The approach rests mainly on the two-dimensional differential transform method, which is one of the approximate methods. The method can easily be applied to many linear and nonlinear problems and is capable of reducing the size of the computational work. Exact solutions can also be achieved from the known forms of the series solutions. Several illustrative examples are given to demonstrate the effectiveness of the present method.
Application of the simplex method of linear programming model to ...
This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm using Saclux Paint Company as a case study. It equally elucidated the effect variation in the optimal result obtained from linear programming model, will have on any given firm. It was demonstrated ...
Grey scale, the 'crispening effect', and perceptual linearization
Belaïd, N.; Martens, J.B.
1998-01-01
One way of optimizing a display is to maximize the number of distinguishable grey levels, which in turn is equivalent to perceptually linearizing the display. Perceptual linearization implies that equal steps in grey value evoke equal steps in brightness sensation. The key to perceptual
Strong-stability-preserving additive linear multistep methods
Hadjimichael, Yiannis; Ketcheson, David I.
2018-01-01
The analysis of strong-stability-preserving (SSP) linear multistep methods is extended to semi-discretized problems for which different terms on the right-hand side satisfy different forward Euler (or circle) conditions. Optimal perturbed
Direct Linear Transformation Method for Three-Dimensional Cinematography
Shapiro, Robert
1978-01-01
The ability of the Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
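The CLIME column subproblem has a standard LP encoding, which is what makes simplex-type solvers applicable. A small sketch (Python with `scipy.optimize.linprog` rather than the package's parametric simplex implementation in C; the covariance matrix and regularization level are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(Sigma, i, lam):
    """Solve one CLIME column subproblem,
        min ||beta||_1  s.t.  ||Sigma beta - e_i||_inf <= lam,
    as an LP via the split beta = u - v with u, v >= 0."""
    p = Sigma.shape[0]
    e = np.zeros(p)
    e[i] = 1.0
    cost = np.ones(2 * p)                    # sum(u) + sum(v) = ||beta||_1
    S = np.hstack([Sigma, -Sigma])           # S @ [u; v] = Sigma @ beta
    A_ub = np.vstack([S, -S])                # two-sided inf-norm constraint
    b_ub = np.concatenate([e + lam, lam - e])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]

Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 1.0, 0.3],
                  [0.0, 0.3, 1.0]])
beta = clime_column(Sigma, 0, lam=0.01)      # close to the first column of inv(Sigma)
```

Solving the p column problems independently (and symmetrizing) yields the full precision matrix estimate; the parametric simplex approach of the package traces the whole solution path in lam in one pass instead.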
Linear regression methods according to objective functions
Yasemin Sisman; Sebahattin Bektas
2012-01-01
The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods, grouped according to the objective function, are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.
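The difference between the two objective functions shows up immediately in the presence of an outlier. A sketch (Python/NumPy; the data are synthetic, and the Least Absolute Value fit is computed here by iteratively reweighted least squares, one of several ways to minimize the L1 objective):

```python
import numpy as np

def ols(X, y):
    """Least Squares fit: minimize the sum of squared residuals."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lad_irls(X, y, iters=100, eps=1e-8):
    """Least Absolute Value fit via iteratively reweighted least squares:
    weighting each residual by 1/|r_i| turns squared loss into absolute loss."""
    beta = ols(X, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Points on y = 2x + 1, with one gross outlier at the end.
x = np.arange(10, dtype=float)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0
y[9] += 50.0
b_ols, b_lad = ols(X, y), lad_irls(X, y)   # OLS is pulled by the outlier; LAD is not
```

Comparing the two fitted slopes makes the robustness trade-off concrete: the squared-loss fit is dragged well away from 2 by a single bad point, while the absolute-loss fit recovers the underlying line.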
A method for evaluating dynamical friction in linear ball bearings.
Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P; Mitatha, Somsak
2010-01-01
A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer.
Hybrid Method for Solving Inventory Problems with a Linear ...
Osagiede and Omosigho (2004) proposed a direct search method for identifying the number of replenishment when the demand pattern is linearly increasing. The main computational task in this direct search method was associated with finding the optimal number of replenishments. To accelerate the use of this method, the ...
Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young
2017-05-01
This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.
Scaling linear colliders to 5 TeV and above
Wilson, P.B.
1997-04-01
Detailed designs exist at present for linear colliders in the 0.5-1.0 TeV center-of-mass energy range. For linear colliders driven by discrete rf sources (klystrons), the rf operating frequencies range from 1.3 GHz to 14 GHz, and the unloaded accelerating gradients from 21 MV/m to 100 MV/m. Except for the collider design at 1.3 GHz (TESLA), which uses superconducting accelerating structures, the accelerating gradients vary roughly linearly with the rf frequency. This correlation between gradient and frequency follows from the necessity of keeping the ac "wall plug" power within reasonable bounds. For linear colliders at energies of 5 TeV and above, even higher accelerating gradients and rf operating frequencies will be required if both the total machine length and ac power are to be kept within reasonable limits. An rf system for a 5 TeV collider operating at 34 GHz is outlined, and it is shown that there are reasonable candidates for microwave tube sources which, together with rf pulse compression, are capable of supplying the required rf power. Some possibilities for a 15 TeV collider at 91 GHz are briefly discussed.
Runge-Kutta Methods for Linear Ordinary Differential Equations
Zingg, David W.; Chisholm, Todd T.
1997-01-01
Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODEs) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
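The simplification the abstract alludes to can be seen in a small sketch (our own illustration, not the paper's methods): for a linear constant-coefficient ODE y' = λy, one classical RK4 step reduces to multiplying by the degree-4 Taylor polynomial of exp(hλ), so order conditions collapse to matching that polynomial. The test problem λ = -1 is an arbitrary choice.

```python
import math

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for an autonomous ODE y' = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

lam, h = -1.0, 0.1
y1 = rk4_step(lambda y: lam * y, 1.0, h)

# Degree-4 Taylor polynomial of exp(z) evaluated at z = h*lam: for linear
# problems the RK4 update factor is exactly this polynomial.
z = h * lam
taylor4 = sum(z ** k / math.factorial(k) for k in range(5))

print(y1, taylor4, math.exp(z))
```

For systems y' = Ay the same identity holds with the matrix polynomial of hA, which is what the low-storage variants in the paper exploit.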
Generalization of the linear algebraic method to three dimensions
Lynch, D.L.; Schneider, B.I.
1991-01-01
We present a numerical method for the solution of the Lippmann-Schwinger equation for electron-molecule collisions. By performing a three-dimensional numerical quadrature, this approach avoids both a basis-set representation of the wave function and a partial-wave expansion of the scattering potential. The resulting linear equations, analogous in form to the one-dimensional linear algebraic method, are solved with the direct iteration-variation method. Several numerical examples are presented. The prospect for using this numerical quadrature scheme for electron-polyatomic molecules is discussed.
Linear-scaling implementation of the direct random-phase approximation
Kállay, Mihály
2015-01-01
We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor.
Comments on new iterative methods for solving linear systems
Wang Ke
2017-06-01
Some new iterative methods for solving linear systems were presented by Du, Zheng and Wang in [3], where it is shown that the new methods, compared to the classical Jacobi or Gauss-Seidel method, can be applied to more systems and converge faster. Through further analysis and numerical examples, this note shows that their methods are suitable for a wider class of matrices than the positive matrices the authors suggested.
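For context, here is a minimal sketch of the two classical iterations the note compares against, run on a small diagonally dominant system of our own invention; Gauss-Seidel reuses freshly updated components within a sweep and converges faster here.

```python
def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every component is updated from the previous iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: updated components are used immediately."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]]
b = [6.0, 6.0, 6.0]              # exact solution is [1, 1, 1]
xj = xg = [0.0, 0.0, 0.0]
for _ in range(25):
    xj = jacobi_sweep(A, b, xj)
    xg = gauss_seidel_sweep(A, b, xg)

err = lambda x: max(abs(xi - 1.0) for xi in x)
print(err(xj), err(xg))          # Gauss-Seidel error is smaller
```

Both iterations converge here because the matrix is strictly diagonally dominant; the methods discussed in the note aim at matrices outside such classical sufficient conditions.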
New Implicit General Linear Method
A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
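A minimal sketch of the iteration the abstract describes (our construction, not the authors' code): SGD with a fixed stepsize plus a heavy ball momentum term, minimizing the expected quadratic loss E[(x - t)²]/2, where each sample t is a noisy observation of the target 3.0. Stepsize, momentum and noise level are arbitrary illustrative choices.

```python
import random

def stochastic_heavy_ball(steps=2000, alpha=0.1, beta=0.5, seed=0):
    """x_{k+1} = x_k - alpha*g_k + beta*(x_k - x_{k-1}) with stochastic gradients g_k."""
    rng = random.Random(seed)
    x_prev = x = 10.0                  # start far from the minimizer 3.0
    for _ in range(steps):
        t = 3.0 + rng.gauss(0.0, 0.1)  # stochastic sample of the target
        grad = x - t                   # stochastic gradient of the quadratic loss
        x, x_prev = x - alpha * grad + beta * (x - x_prev), x
    return x

x_final = stochastic_heavy_ball()
print(x_final)  # hovers in a small neighbourhood of the minimizer 3.0
```

With a fixed stepsize the iterates do not converge exactly but settle in a noise-dominated neighbourhood of the minimizer, which is consistent with the fixed-stepsize setting analyzed in the paper.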
On the non-linear scale of cosmological perturbation theory
Blas, Diego; Garny, Mathias; Konstandin, Thomas
2013-01-01
We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.
Scaling as an Organizational Method
Papazu, Irina Maria Clara Hansen; Nelund, Mette
2018-01-01
Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....
Scale-dependent three-dimensional charged black holes in linear and non-linear electrodynamics
Rincon, Angel; Koch, Benjamin [Pontificia Universidad Catolica de Chile, Instituto de Fisica, Santiago (Chile); Contreras, Ernesto; Bargueno, Pedro; Hernandez-Arboleda, Alejandro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia); Panotopoulos, Grigorios [Universidade de Lisboa, CENTRA, Instituto Superior Tecnico, Lisboa (Portugal)
2017-07-15
In the present work we study the scale dependence at the level of the effective action of charged black holes in Einstein-Maxwell as well as in Einstein-power-Maxwell theories in (2 + 1)-dimensional spacetimes without a cosmological constant. We allow for scale dependence of the gravitational and electromagnetic couplings, and we solve the corresponding generalized field equations imposing the null energy condition. Certain properties, such as horizon structure and thermodynamics, are discussed in detail. (orig.)
EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.
Lian, Yao; Ge, Meng; Pan, Xian-Ming
2014-12-19
B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2014-01-01
In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...
Non-linear variability in geophysics scaling and fractals
Lovejoy, S
1991-01-01
consequences of broken symmetry (here parity) is studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...
On a linear method in bootstrap confidence intervals
Andrea Pallini
2007-10-01
A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.
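As background, here is a sketch of the plain percentile bootstrap interval that constructions like the paper's linear method approximate analytically; this is the baseline procedure, not the paper's method, and the data and confidence level are arbitrary choices of ours.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval: resample with replacement, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in range(len(data))])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

mean = lambda xs: sum(xs) / len(xs)
data = [float(v) for v in range(1, 21)]   # sample mean 10.5
lo, hi = bootstrap_ci(data, mean)
print(lo, hi)
```

The cost of the Monte Carlo resampling loop is what an analytical approximation of the bootstrap cumulants, as in the abstract, avoids.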
Lattice Boltzmann methods for global linear instability analysis
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement to make the proposed methodology competitive with established approaches for global instability analysis are discussed.
The Embedding Method for Linear Partial Differential Equations
The recently suggested embedding method to solve linear boundary value problems is here extended to cover situations where the domain of interest is unbounded or multiply connected. The extensions involve the use of complete sets of exterior and interior eigenfunctions on canonical domains. Applications to typical ...
Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems
Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav
2014-01-01
Vol. 36, No. 4 (2014), A2002-A2022. ISSN 1064-8275. Institutional support: RVO:67985807. Keywords: preconditioned iterative methods; incomplete decompositions; approximate inverses; linear least squares. Subject RIV: BA - General Mathematics. Impact factor: 1.854, year: 2014.
A General Linear Method for Equating with Small Samples
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Darula, Radoslav; Sorokin, Sergey
2013-01-01
An electro-magneto-mechanical system combines three physical domains - a mechanical structure, a magnetic field and an electric circuit. The interaction between these domains is analysed for a structure with two degrees of freedom (translational and rotational) and two electrical circuits. Each...... electrical circuit is described by a differential equation of the 1st order, which is considered to contribute to the coupled system by 0.5 DOF. The electrical and mechanical systems are coupled via a magnetic circuit, which is inherently non-linear, due to a non-linear nature of the electro-magnetic force...
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
Linear density response function in the projector augmented wave method
Yan, Jun; Mortensen, Jens Jørgen; Jacobsen, Karsten Wedel
2011-01-01
We present an implementation of the linear density response function within the projector-augmented wave method with applications to the linear optical and dielectric properties of both solids, surfaces, and interfaces. The response function is represented in plane waves while the single...... functions of Si, C, SiC, AlP, and GaAs compare well with previous calculations. While optical properties of semiconductors, in particular excitonic effects, are generally not well described by ALDA, we obtain excellent agreement with experiments for the surface loss function of graphene and the Mg(0001...
An extended GS method for dense linear systems
Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi
2009-09-01
Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)]
2015-11-14
Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.
Solution methods for large systems of linear equations in BACCHUS
Homann, C.; Dorr, B.
1993-05-01
The computer programme BACCHUS is used to describe steady state and transient thermal-hydraulic behaviour of a coolant in a fuel element with intact geometry in a fast breeder reactor. In such computer programmes generally large systems of linear equations with sparse matrices of coefficients, resulting from discretization of coolant conservation equations, must be solved thousands of times giving rise to large demands of main storage and CPU time. Direct and iterative solution methods of the systems of linear equations, available in BACCHUS, are described, giving theoretical details and experience with their use in the programme. Besides use of a method of lines, a Runge-Kutta-method, for solution of the partial differential equation is outlined. (orig.)
Linear finite element method for one-dimensional diffusion problems
Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica
2011-07-01
We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation to solve eigenvalue and fixed source problems. First, we discretize the calculation domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero-order and first-order spatial moments inside each element. Then, we introduce linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We offer numerical results for typical fixed source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. Also, we compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)
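A minimal sketch of linear finite elements in slab geometry (our own model problem, not the paper's diffusion scheme): for -u'' = 1 on (0,1) with u(0) = u(1) = 0, linear elements on a uniform mesh assemble into the tridiagonal system (1/h)[-1, 2, -1], and for this problem the nodal FEM values coincide with the exact solution u(x) = x(1-x)/2.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm (no pivoting)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

n_el = 8                       # number of elements
h = 1.0 / n_el
m = n_el - 1                   # interior nodes
diag = [2.0 / h] * m
sub = [-1.0 / h] * m           # sub[0] unused
sup = [-1.0 / h] * m           # sup[-1] unused
load = [h * 1.0] * m           # consistent load vector for f = 1

u = thomas(sub, diag, sup, load)
exact = [(i + 1) * h * (1.0 - (i + 1) * h) / 2.0 for i in range(m)]
print(max(abs(a - b) for a, b in zip(u, exact)))  # near machine precision
```

The nodal exactness is special to this 1D model problem; in the paper's setting the point of LFEM is improved coarse-mesh accuracy relative to finite differences, not exactness.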
Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization
Rahman, M. A.; Basarudin, T.
1997-01-01
This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation of our research is that a QN method may be good at solving certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code is written in the Occam 2 language and runs on a transputer machine.
Galerkin projection methods for solving multiple related linear systems
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A^(i)x^(i) = b^(i) for 1 <= i <= s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i)x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
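The single-seed idea can be sketched as follows (our toy data, not the authors' implementation): run CG on a seed system with a shared SPD matrix A while recording the A-conjugate direction vectors, then obtain approximations for other right-hand sides by Galerkin projection onto the recorded directions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg_record_directions(A, b, iters):
    """CG from x0 = 0; returns the solution and the (direction, p^T A p) pairs."""
    n = len(b)
    x, r = [0.0] * n, b[:]
    p = r[:]
    dirs = []
    for _ in range(iters):
        rr = dot(r, r)
        if rr < 1e-30:
            break
        Ap = matvec(A, p)
        pAp = dot(p, Ap)
        dirs.append((p[:], pAp))
        alpha = rr / pAp
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        beta = dot(r, r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x, dirs

def galerkin_project(b, dirs):
    """Galerkin solution in the span of A-conjugate directions (x0 = 0)."""
    x = [0.0] * len(b)
    for p, pAp in dirs:
        c = dot(p, b) / pAp          # conjugacy decouples the coefficients
        x = [xi + c * pi for xi, pi in zip(x, p)]
    return x

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 0.5],
     [0.0, 0.0, 0.5, 1.0]]
b_seed = [1.0, 2.0, 3.0, 4.0]
b_other = [1.1, 2.0, 2.9, 4.0]       # close to the seed right-hand side

x_seed, dirs = cg_record_directions(A, b_seed, 4)
x_other = galerkin_project(b_other, dirs)

res = lambda x, b: max(abs(ai - bi) for ai, bi in zip(matvec(A, x), b))
print(res(x_seed, b_seed), res(x_other, b_other))
```

In this tiny example the four recorded directions span the whole space, so the projection is essentially exact; in practice the abstract's point is that a few seed runs plus projections suffice when the right-hand sides are close.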
Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.
1989-01-01
Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately for the evaluation of aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques to evaluate non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and economy in computing time than non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities.
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.
Optimal overlapping of waveform relaxation method for linear differential equations
Yamada, Susumu; Ozawa, Kazufumi
2000-01-01
The waveform relaxation (WR) method is extremely suitable for solving large systems of ordinary differential equations (ODEs) on parallel computers, but the convergence of the method is generally slow. In order to accelerate the convergence, methods which decouple the system into many subsystems, overlapping some of the components between adjacent subsystems, have been proposed. These methods, in general, converge much faster than the ones without overlapping, but the computational cost per iteration becomes larger due to the increase of the dimension of each subsystem. In this research, the convergence of the WR method for solving constant-coefficient linear ODEs is investigated, and a strategy is proposed to determine the number of overlapped components which minimizes the cost of the parallel computations. Numerical experiments on an SR2201 parallel computer show that the number of overlapped components estimated by the proposed strategy is reasonable. (author)
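The basic (non-overlapping) WR iteration can be sketched on a toy problem of our own: Jacobi-type waveform relaxation for the weakly coupled linear system y1' = -y1 + 0.1·y2, y2' = 0.1·y1 - y2, y(0) = (1, 1), discretized by explicit Euler. Each sweep integrates every subsystem over the whole time window using the other subsystem's waveform from the previous sweep, and the iteration converges to the fully coupled Euler solution.

```python
h, n_steps, sweeps = 0.01, 100, 30

# Reference: fully coupled explicit Euler over the window [0, 1].
y1 = [1.0] * (n_steps + 1)
y2 = [1.0] * (n_steps + 1)
for n in range(n_steps):
    y1[n + 1] = y1[n] + h * (-y1[n] + 0.1 * y2[n])
    y2[n + 1] = y2[n] + h * (0.1 * y1[n] - y2[n])

# Waveform relaxation: start from constant initial waveforms.
w1 = [1.0] * (n_steps + 1)
w2 = [1.0] * (n_steps + 1)
for _ in range(sweeps):
    v1 = [1.0] * (n_steps + 1)
    v2 = [1.0] * (n_steps + 1)
    for n in range(n_steps):
        v1[n + 1] = v1[n] + h * (-v1[n] + 0.1 * w2[n])  # coupling from old sweep
        v2[n + 1] = v2[n] + h * (0.1 * w1[n] - v2[n])
    w1, w2 = v1, v2

gap = max(max(abs(a - b) for a, b in zip(w1, y1)),
          max(abs(a - b) for a, b in zip(w2, y2)))
print(gap)  # essentially zero after enough sweeps
```

Because the two subsystems can be swept independently, each sweep parallelizes naturally; overlapping components, as in the paper, trades extra per-sweep work for fewer sweeps.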
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Exact solution of some linear matrix equations using algebraic methods
Djaferis, T. E.; Mitter, S. K.
1977-01-01
Solution methods for linear matrix equations, including Lyapunov's equation, are studied using methods of modern algebra. The emphasis is on finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The action f_BA is introduced and a basic lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, with comments on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
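For orientation, the Lyapunov equation A'P + PA = -C admits an explicit dense solution by vectorization; this is a generic approach, not the algebraic procedure of the paper (the function name and example matrices are our own):

```python
import numpy as np

def lyap(A, C):
    """Solve A^T P + P A = -C by vectorization, using
    vec(M X N) = (N^T kron M) vec(X) with column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P);  vec(P A) = (A^T kron I) vec(P)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(K, -C.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stable, so a unique P exists
C = np.eye(2)
P = lyap(A, C)
residual = np.max(np.abs(A.T @ P + P @ A + C))
```

Since C is symmetric and A is stable, the unique solution P is symmetric positive definite.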
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousand reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method of Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice, so it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not just one but all minimum subnetworks satisfying the required properties.
Dual-scale Galerkin methods for Darcy flow
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees of freedom of a less computationally expensive coarse-scale approximation are linked to the degrees of freedom of a base DG approximation. We show that the proposed approach consistently matches or improves on the accuracy of the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study of its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
Eko Rudi Iswanto; Eric Yee
2016-01-01
Within the framework of identifying NPP sites, site surveys are performed in West Bangka (WB), Bangka-Belitung Island Province. Ground response analysis of a potential site has been carried out using peak strain profiles and peak ground acceleration. The objective of this research is to compare the Equivalent Linear (EQL) and Non-Linear (NL) methods of ground response analysis for the selected NPP site (West Bangka) using the Deep Soil software. The equivalent linear method is widely used because it requires only simple soil data and little computation time. The nonlinear method, on the other hand, can represent actual soil behaviour by accounting for nonlinear soil parameters. The results show that the EQL method follows trends similar to the NL method. At the surface layer, the acceleration values for the EQL and NL methods are 0.425 g and 0.375 g, respectively. The NL method is more reliable in capturing higher frequencies of spectral acceleration than the EQL method. (author)
Linear augmented plane wave method for self-consistent calculations
Takeda, T.; Kuebler, J.
1979-01-01
O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)
2013-01-01
This book consists of twenty-seven chapters, which can be divided into three broad categories: articles focused on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.
Multiple time scale methods in tokamak magnetohydrodynamics
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed
M. ZANGIABADI; H. R. MALEKI
2007-01-01
In real-world optimization problems, coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then, by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters into a multiobjective linear programming problem. To this end, w...
An improved partial bundle method for linearly constrained minimax problems
Chunming Tang
2016-02-01
In this paper, we propose an improved partial bundle method for solving linearly constrained minimax problems. In order to reduce the number of component function evaluations, we utilize a partial cutting-plane model in place of the traditional one. At each iteration, only one quadratic programming subproblem needs to be solved to obtain a new trial point. An improved descent test criterion is introduced to simplify the algorithm. The method produces a sequence of feasible trial points and ensures that the objective function is monotonically decreasing on the sequence of stability centers. Global convergence of the algorithm is established. Moreover, we utilize the subgradient aggregation strategy to control the size of the bundle and thereby overcome the computational and storage difficulties. Finally, some preliminary numerical results show that the proposed method is effective.
Electrostatic Discharge Current Linear Approach and Circuit Design Method
Pavlos K. Katsivelis
2010-11-01
The electrostatic discharge (ESD) phenomenon is a great threat to all electronic devices and ICs. An electric charge passing rapidly from a charged body to another can seriously harm the latter. However, there has been no linear mathematical approach that makes it possible to design a circuit capable of producing such a sophisticated current waveform. The commonly accepted electrostatic discharge current waveform is the one set by IEC 61000-4-2; however, the over-simplified circuit included in the same standard is incapable of producing such a waveform. Treating the electrostatic discharge current waveform of IEC 61000-4-2 as a reference, an approximation method based on Prony's method is developed and applied in order to obtain a linear system's response. Considering a known input, a method to design a circuit able to generate this ESD current waveform is presented. The circuit synthesis assumes ideal active elements. A simulation is carried out using the PSpice software.
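Prony's method, on which the approximation is based, can be sketched for a generic sampled waveform (this is the textbook procedure applied to an assumed double-exponential test signal of our own, not the authors' exact fitting code):

```python
import numpy as np

def prony(y, p):
    """Classical Prony fit: y[k] ~ sum_i a_i * z_i**k with p exponentials."""
    N = len(y)
    rows = N - p
    # 1) linear prediction: y[k] = -(c_1 y[k-1] + ... + c_p y[k-p])
    H = np.column_stack([y[p - 1 - j : p - 1 - j + rows] for j in range(p)])
    c, *_ = np.linalg.lstsq(H, -y[p:], rcond=None)
    # 2) poles are the roots of z^p + c_1 z^(p-1) + ... + c_p
    z = np.roots(np.concatenate(([1.0], c)))
    # 3) amplitudes from a Vandermonde least-squares fit
    V = np.vander(z, N, increasing=True).T.astype(complex)  # V[k, i] = z_i**k
    a, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return z, a

# Recover a two-pole waveform (a rough double-exponential, ESD-like shape)
k = np.arange(60)
y = 1.5 * 0.9**k - 1.5 * 0.7**k
z, a = prony(y, 2)
y_hat = (np.vander(z, len(y), increasing=True).T @ a).real
err = np.max(np.abs(y - y_hat))
```

On noise-free data with the correct model order the poles (here 0.9 and 0.7) and amplitudes are recovered essentially exactly.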
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, a computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches for the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have inherent parallel structures which make them suitable for efficient VLSI implementations, and they are robust to the propagation of possible errors in the computation process.
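The LMS update at the heart of the approach can be illustrated on a generic system-identification toy (the filter length, step size, and test signal are our own assumptions; the paper applies the same update inside an adaptive filter that computes the discrete LCT):

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05):
    """Identify an FIR system with the LMS update w <- w + mu * e * u."""
    w = np.zeros(taps)
    for k in range(taps, len(x)):
        u = x[k - taps + 1 : k + 1][::-1]   # most recent sample first
        e = d[k] - w @ u                    # a-priori output error
        w = w + mu * e * u                  # stochastic-gradient step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.6, -0.3, 0.2, 0.1])        # "unknown" system
d = np.convolve(x, h)[: len(x)]            # desired (reference) signal
w = lms_identify(x, d)
err = np.max(np.abs(w - h))
```

With white input and no observation noise, the weights converge to the true impulse response; the step size must satisfy the usual LMS stability bound.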
Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thompson Avenue, Cambridge CB3 0HE (United Kingdom); Molteni, Carla [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)
2013-08-28
We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett.94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.
Alternating direction transport sweeps for linear discontinuous SN method
Yavuz, M.; Aykanat, C.
1993-01-01
The performance of the Alternating Direction Transport Sweep (ADTS) method is investigated for spatially differenced Linear Discontinuous S_N (LD-S_N) problems on a MIMD multicomputer, the Intel iPSC/2. The method consists of dividing a transport problem spatially into sub-problems, assigning each sub-problem to a separate processor. The problem is then solved by performing transport sweeps, iterating on the scattering source and the interface fluxes between the sub-problems. In each processor, the order of transport sweeps is scheduled such that a processor completing its computation in a quadrant of a transport sweep is able to use the most recent information (exiting fluxes of the neighboring processor) as its incoming fluxes to start the next quadrant calculation. Implementation of this method on the Intel iPSC/2 multicomputer displays significant speedups over the one-processor method. The performance of the method is also compared with that reported previously for the Diamond Differenced S_N (DD-S_N) method. Our experimental experience illustrates that the parallel performance of both the ADTS LD-S_N and DD-S_N methods is the same. (orig.)
A national-scale model of linear features improves predictions of farmland biodiversity.
Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H
2017-12-01
Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity, which makes large-scale inference infeasible. This is especially true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
Deep Learning Methods for Improved Decoding of Linear Codes
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low-complexity, close to optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, achieves comparable results; the advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
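The baseline these neural decoders build on, plain min-sum belief propagation on a Tanner graph, can be sketched for a toy (7,4) Hamming code (the code choice, LLR values, and iteration count are our own illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def min_sum(llr, H, iters=5):
    """Flooding min-sum decoding; returns hard-decision bits."""
    m, n = H.shape
    V = H * llr                              # variable-to-check messages
    post = llr.copy()
    for _ in range(iters):
        C = np.zeros((m, n))
        for c in range(m):
            vs = np.flatnonzero(H[c])
            for v in vs:
                others = np.array([V[c, u] for u in vs if u != v])
                C[c, v] = np.prod(np.sign(others)) * np.min(np.abs(others))
        post = llr + C.sum(axis=0)           # a-posteriori LLRs
        for v in range(n):
            cs = np.flatnonzero(H[:, v])
            for c in cs:
                V[c, v] = llr[v] + sum(C[b, v] for b in cs if b != c)
    return (post < 0).astype(int)

# All-zero codeword sent; the first bit is flipped by the channel
# (negative LLR), and the decoder corrects it
llr = np.array([-1.5, 2.0, 1.8, 2.2, 1.9, 2.1, 1.7])
decoded = min_sum(llr, H)
syndrome = H @ decoded % 2
```

The paper's neural decoders replace the uniform message updates above with learned per-edge weights while keeping the same message-passing structure.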
Linear source approximation scheme for method of characteristics
Tang Chuntao
2011-01-01
The method of characteristics (MOC) for solving the neutron transport equation on unstructured meshes has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme, with a corresponding modification for negative source distributions, is proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.
A New Class of Scaling Correction Methods
Mei Li-Jie; Wu Xin; Liu Fu-Yao
2012-01-01
When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, leading to unreliable numerical solutions. Scaling correction methods are a powerful tool for avoiding this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
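The idea of a scaling correction can be illustrated in its simplest form: rescale the integrated variables after every raw step so that an energy first integral is preserved exactly. This is a simplified whole-state variant of our own for a harmonic oscillator; the paper's new methods scale only the related momentum components:

```python
import numpy as np

def euler_step(q, p, dt):
    return q + dt * p, p - dt * q            # raw step: energy grows each step

def scaled_step(q, p, dt, E0):
    q, p = euler_step(q, p, dt)
    s = np.sqrt(2.0 * E0 / (q * q + p * p))  # factor restoring E = E0 exactly
    return s * q, s * p

q, p, dt = 1.0, 0.0, 1e-3
E0 = 0.5 * (q * q + p * p)                   # conserved energy of q'' = -q
q_raw, p_raw = q, p
for _ in range(20000):
    q, p = scaled_step(q, p, dt, E0)
    q_raw, p_raw = euler_step(q_raw, p_raw, dt)

drift_scaled = abs(0.5 * (q * q + p * p) - E0)
drift_raw = abs(0.5 * (q_raw * q_raw + p_raw * p_raw) - E0)
```

The uncorrected Euler trajectory spirals outward (its energy grows by a factor 1 + dt² per step), while the corrected one stays on the energy surface to round-off.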
Non linear permanent magnets modelling with the finite element method
Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.
1989-01-01
In order to calculate permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (ferrites, NdFeB, SmCo5). In linear cases, the permeability of a permanent magnet is a tensor, fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In nonlinear cases, the model uses a texture function representing the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D, where the tensor is used in the Newton-Raphson procedure. The 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The results obtained for an ideally oriented ferrite magnet and for a real one, using a measured texture parameter, are analyzed.
Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M
2016-01-01
Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)
Linear Strength Vortex Panel Method for NACA 4412 Airfoil
Liu, Han
2018-03-01
The objective of this article is to formulate numerical models for two-dimensional potential flow over the NACA 4412 airfoil using linear vortex panel methods. By satisfying the no-penetration boundary condition and the Kutta condition, the circulation density at each boundary point (the end point of every panel) is obtained, from which the surface pressure distribution and lift coefficient of the airfoil are predicted and validated against XFOIL, an interactive program for the design and analysis of airfoils. The sensitivity of the results to the number of panels is also investigated, showing that the results are sensitive to the panel number when it ranges from 10 to 160. With increasing panel number (N > 160), the results become relatively insensitive to it.
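The geometry such a panel method discretizes can be generated from the standard NACA 4-digit formulas (node generation only; the panel influence coefficients, Kutta condition, and pressure recovery are omitted here):

```python
import numpy as np

m_c, p_c, t_c = 0.04, 0.4, 0.12            # max camber, its position, thickness

def naca4412(x):
    # half-thickness distribution (standard open trailing edge)
    yt = 5 * t_c * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)
    # camber line, piecewise fore/aft of the maximum camber position
    yc = np.where(x < p_c,
                  m_c / p_c**2 * (2 * p_c * x - x**2),
                  m_c / (1 - p_c)**2 * ((1 - 2 * p_c) + 2 * p_c * x - x**2))
    return yc + yt, yc - yt                 # upper, lower surface

# cosine spacing clusters panel end points near the leading/trailing edges
x = 0.5 * (1 - np.cos(np.linspace(0.0, np.pi, 81)))
y_up, y_lo = naca4412(x)
thickness = np.max(y_up - y_lo)             # ~0.12 of chord, near x = 0.3
```

(Strictly, the thickness should be applied perpendicular to the camber line; adding it vertically, as above, is a common small-camber simplification.)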
Continuum Level Density in Complex Scaling Method
Suzuki, R.; Myo, T.; Kato, K.
2005-01-01
A new method for calculating the continuum level density (CLD) at unbound energies is studied within the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi:http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N⁴ with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N³. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems of up to 15 atoms per unit cell.
Gene Golub; Kwok Ko
2009-01-01
The solution of sparse eigenvalue problems and linear systems constitutes one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
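The Hermitian/skew-Hermitian splitting (HSS) iteration of Bai, Golub and Ng mentioned above can be sketched as follows (the dense direct solves and the example matrix are illustrative choices of ours; in practice the shifted solves are performed inexactly):

```python
import numpy as np

def hss(A, b, alpha=1.0, iters=60):
    """HSS iteration for A x = b with A (real) non-symmetric positive definite:
       (aI + H) x_{k+1/2} = (aI - S) x_k       + b
       (aI + S) x_{k+1}   = (aI - H) x_{k+1/2} + b
    where H = (A + A^T)/2 and S = (A - A^T)/2."""
    n = A.shape[0]
    I = np.eye(n)
    H, S = (A + A.T) / 2, (A - A.T) / 2
    x = np.zeros(n)
    for _ in range(iters):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x

A = np.array([[3.0, 1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0, -1.0, 3.0]])          # positive definite, non-symmetric
b = np.array([1.0, 2.0, 3.0])
x = hss(A, b)
residual = np.max(np.abs(A @ x - b))
```

For positive definite H the iteration converges for any shift alpha > 0, with the rate governed by max |(alpha - lambda)/(alpha + lambda)| over the eigenvalues lambda of H.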
Hardy inequality on time scales and its application to half-linear dynamic equations
Řehák Pavel
2005-01-01
A time-scale version of the Hardy inequality is presented, which unifies and extends the well-known Hardy inequalities in the continuous and discrete settings. An application in the oscillation theory of half-linear dynamic equations is given.
A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate
Min Sun
2014-01-01
A matrix-free method for constrained equations is proposed, which combines the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method with the hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is included in the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
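The hyperplane projection step combined with a derivative-free direction can be sketched as follows (a simplified variant of ours, not the paper's algorithm: the direction is plain -F(x) rather than the PRP direction, there is no relaxation factor, and the constraint set is taken to be the whole space):

```python
import numpy as np

def hyperplane_projection(F, x, sigma=1e-4, rho=0.5, iters=200):
    """Solodov-Svaiter-style projection method for monotone F(x) = 0."""
    for _ in range(iters):
        Fx = F(x)
        if Fx @ Fx < 1e-24:                  # already solved to round-off
            break
        d = -Fx                              # simplified search direction
        t = 1.0
        # backtrack until  -F(x + t d) . d  >=  sigma * t * ||d||^2
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= rho
        z = x + t * d
        Fz = F(z)
        # project x onto the hyperplane {y : F(z) . (y - z) = 0},
        # which separates x from every solution when F is monotone
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x

A = np.array([[1.0, 0.5], [-0.5, 1.0]])      # monotone: symmetric part is I
b = np.array([1.0, -1.0])
F = lambda v: A @ v + b
x = hyperplane_projection(F, np.zeros(2))
resid = np.linalg.norm(F(x))
```

The separating-hyperplane property is what yields global convergence without differentiability: each projection strictly decreases the distance to the solution set.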
Method for validating radiobiological samples using a linear accelerator
Brengues, Muriel; Liu, David; Korn, Ronald; Zenhausern, Frederic
2014-01-01
There is an immediate need for rapid triage of the population in case of a large scale exposure to ionizing radiation. Knowing the dose absorbed by the body will allow clinicians to administer medical treatment for the best chance of recovery for the victim. In addition, today's radiotherapy treatment could benefit from additional information regarding the patient's sensitivity to radiation before starting the treatment. As of today, there is no system in place to respond to this demand. This paper will describe specific procedures to mimic the effects of human exposure to ionizing radiation creating the tools for optimization of administered radiation dosimetry for radiotherapy and/or to estimate the doses of radiation received accidentally during a radiation event that could pose a danger to the public. In order to obtain irradiated biological samples to study ionizing radiation absorbed by the body, we performed ex-vivo irradiation of human blood samples using the linear accelerator (LINAC). The LINAC was implemented and calibrated for irradiating human whole blood samples. To test the calibration, a 2 Gy test run was successfully performed on a tube filled with water with an accuracy of 3% in dose distribution. To validate our technique the blood samples were ex-vivo irradiated and the results were analyzed using a gene expression assay to follow the effect of the ionizing irradiation by characterizing dose responsive biomarkers from radiobiological assays. The response of 5 genes was monitored resulting in expression increase with the dose of radiation received. The blood samples treated with the LINAC can provide effective irradiated blood samples suitable for molecular profiling to validate radiobiological measurements via the gene-expression based biodosimetry tools. (orig.)
Linear arrangement of nano-scale magnetic particles formed in Cu-Fe-Ni alloys
Kang, Sung, E-mail: k3201s@hotmail.co [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeda, Mahoto [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeguchi, Masaki [Advanced Electron Microscopy Group, National Institute for Materials Science (NIMS), Sakura 3-13, Tsukuba, 305-0047 (Japan); Bae, Dong-Sik [School of Nano and Advanced Materials Engineering, Changwon National University, Gyeongnam, 641-773 (Korea, Republic of)
2010-04-30
The structural evolution of nano-scale magnetic particles formed in Cu-Fe-Ni alloys on isothermal annealing at 878 K has been investigated by means of transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), electron energy-loss spectroscopy (EELS) and field-emission scanning electron microscopy (FE-SEM). Phase decomposition of Cu-Fe-Ni occurred after an as-quenched specimen received a short anneal, and nano-scale magnetic particles formed randomly in the Cu-rich matrix. A striking feature was that two or more nano-scale particles with a cubic shape were aligned linearly along <1,0,0> directions, and this trend became more pronounced at later stages of the precipitation. Large numbers of <1,0,0> linear chains of precipitates extended in three dimensions in the late stages of annealing.
Lithographic linear motor, lithographic apparatus, and device manufacturing method
2006-01-01
A linear motor having a high driving force, high efficiency and low normal force comprises two opposed magnet tracks and an armature comprising three open coil sets. The linear motor may be used to drive a stage, such as, for example, a mask or wafer stage, in a lithographic apparatus.
Mathematical and Numerical Methods for Non-linear Beam Dynamics
Herr, W
2014-01-01
Non-linear effects in accelerator physics are important both for successful operation of accelerators and during the design stage. Since these two aspects are closely related, they are treated together in this overview. Some of the most important aspects are well described by methods established in other areas of physics and mathematics. The treatment is focused on the problems in accelerators used for particle physics experiments. Although the main emphasis is on accelerator physics issues, some aspects of more general interest are discussed. In particular, we demonstrate that in recent years a framework has been built to handle the complex problems in a consistent form, technically superior to and conceptually simpler than the traditional techniques. The need to understand the stability of particle beams has substantially contributed to the development of new techniques and is an important source of examples which can be verified experimentally. Unfortunately, the documentation of these developments is often poor or even unpublished, in many cases only available as lectures or conference proceedings.
Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain
2016-10-01
Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol-plasticized thermoplastic starch (TPS) matrices at varying contents (0-10 wt%). The composites were prepared by a casting/evaporation method. Raw-fiber (RF) reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than their RF-reinforced counterparts. This was attributed to the high aspect ratio and nanoscale dimensions of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, as the NF content increased from 0 to 10 wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites, the largest shift being +18.5 °C for the film reinforced with 8% NF. This finding implies efficient dispersion of the nanofibers in the matrix and their ability to form a network and restrict the mobility of the system. Copyright © 2016 Elsevier B.V. All rights reserved.
Three-point phase correlations: A new measure of non-linear large-scale structure
Wolstenhulme, Richard; Obreschkow, Danail
2015-01-01
We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...
On the interaction of small-scale linear waves with nonlinear solitary waves
Xu, Chengzhu; Stastna, Marek
2017-04-01
In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide some insight into wave-mean flow
Linearized versus non-linear inverse methods for seismic localization of underground sources
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...
Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations
Matt Challacombe
2014-03-01
A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed, based on the double-quotient formulation of Tsiper (2001 J. Phys. B). Dual-channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best-case (single-channel) Tamm-Dancoff approximation. This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in a polyphenylene vinylene oligomer and a (4,3) carbon nanotube segment.
Ho, Yuh-Shan
2006-01-01
A comparison was made of the linear least-squares method and a trial-and-error non-linear method for the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order linear kinetic equations are discussed. Kinetic parameters obtained from the four linear equations using the linear method differed, but they were identical when obtained with the non-linear method. The type 1 pseudo-second-order linear kinetic model had the highest coefficient of determination. The results show that the non-linear method may be a better way to obtain the desired parameters.
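As an editorial illustration of the type-1 linearization the abstract refers to, the sketch below fits t/q = 1/(k·qe²) + t/qe by ordinary least squares; the values of qe and k are illustrative, not data from the paper:

```python
# Sketch: recover pseudo-second-order parameters (qe, k) from the
# type-1 linearization t/q = 1/(k*qe^2) + t/qe, using synthetic,
# noise-free data generated from the model itself.

def q_model(t, qe, k):
    # Integrated pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t)
    return k * qe * qe * t / (1.0 + k * qe * t)

def fit_type1(ts, qs):
    # Ordinary least squares of y = t/q against t:
    # slope = 1/qe, intercept = 1/(k*qe^2)
    ys = [t / q for t, q in zip(ts, qs)]
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) \
            / sum((t - mt) ** 2 for t in ts)
    intercept = my - slope * mt
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe * qe)
    return qe, k

qe_true, k_true = 2.0, 0.5          # illustrative "true" parameters
ts = [float(t) for t in range(1, 21)]
qs = [q_model(t, qe_true, k_true) for t in ts]
qe_fit, k_fit = fit_type1(ts, qs)
print(qe_fit, k_fit)                 # noise-free data: exact recovery
```

With noisy data the four linear forms weight errors differently, which is why (as the abstract notes) they yield different parameters while a direct non-linear fit of q(t) does not.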
Extension of the linear nodal method to large concrete building calculations
Childs, R.L.; Rhoades, W.A.
1985-01-01
The implementation of the linear nodal method in the TORT code is described, and the results of a mesh refinement study to test the effectiveness of the linear nodal and weighted diamond difference methods available in TORT are presented
Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance
Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.
1999-01-01
Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5--6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: Is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ∼15--30 for the largest problems, when comparing the multigrid solvers relative to diagonal scaled conjugate gradient
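For reference, the cheapest-per-iteration solver in the comparison above, diagonally scaled (Jacobi-preconditioned) conjugate gradient, can be sketched as follows; the small SPD system is illustrative, not one of the paper's radiation problems:

```python
# Sketch of diagonally scaled conjugate gradient: CG preconditioned
# with the inverse of the matrix diagonal.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def jacobi_pcg(A, b, tol=1e-10, maxit=100):
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]   # diagonal preconditioner
    x = [0.0] * n
    r = b[:]                                    # r = b - A*x0 with x0 = 0
    z = [mi * ri for mi, ri in zip(minv, r)]
    p = z[:]
    rz = dot(r, z)
    for it in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            return x, it + 1
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, maxit

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = jacobi_pcg(A, b)
print(x, iters)
```

Each iteration costs one matrix-vector product plus a trivial diagonal scaling, which is exactly the "many inexpensive steps" end of the trade-off the paper studies against multigrid.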
Method and apparatus of highly linear optical modulation
DeRose, Christopher; Watts, Michael R.
2016-05-03
In a new optical intensity modulator, a nonlinear change in refractive index is used to balance the nonlinearities in the optical transfer function in a way that leads to highly linear optical intensity modulation.
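As a hedged illustration of why such linearization matters (a generic Mach-Zehnder-type transfer function, not the patented device): biased at quadrature, even harmonics cancel but odd-order distortion remains, and that residual is what a compensating nonlinearity must balance away:

```python
# Sketch: harmonic content of an intensity modulator with transfer
# function T(phi) = (1 - cos(phi))/2, biased at quadrature (phi0 = pi/2)
# and driven sinusoidally with modulation depth m.

import math

def harmonic(signal, n, N):
    # Magnitude of the n-th Fourier harmonic of one sampled period.
    re = sum(s * math.cos(2 * math.pi * n * k / N) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * n * k / N) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / N

N, m = 4096, 0.5                      # samples per period, drive depth
out = [0.5 * (1 - math.cos(math.pi / 2 + m * math.sin(2 * math.pi * k / N)))
       for k in range(N)]
h1, h2, h3 = (harmonic(out, n, N) for n in (1, 2, 3))
print(h1, h2, h3)   # h2 ~ 0 at quadrature; h3 is the residual odd-order distortion
```
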
Jovanović Jelena
2016-02-01
A cost-effective method for resolution increase of a two-stage piecewise linear analog-to-digital converter used for sensor linearization is proposed in this paper. In both conversion stages flash analog-to-digital converters are employed. Resolution increase by one bit per conversion stage is achieved by introducing one additional comparator in front of each of the two flash analog-to-digital converters, while the converters' resolutions remain the same. As a result, the number of employed comparators, as well as the circuit complexity and the power consumption originating from the employed comparators, are almost 50% lower in comparison with a linearization circuit of the conventional design and the same resolution. Since the number of employed comparators is significantly reduced by the proposed method, special modifications of the linearization circuit are needed in order to properly adjust the reference voltages of the employed comparators.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
Kutepov, A. L.
2017-01-01
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on a linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Self-consistent field theory based molecular dynamics with linear system-size scaling
Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)
2014-04-07
We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.
Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian
2015-01-01
The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...
ONETEP: linear-scaling density-functional theory with plane-waves
Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C
2006-01-01
This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size, designed to mimic surfaces, are studied to demonstrate the accuracy and scaling of onetep.
Algebraic coarsening methods for linear and nonlinear PDE and systems
McWilliams, J C
2000-01-01
In [1] Brandt describes a general approach for algebraic coarsening. Given fine-grid equations and a prescribed relaxation method, an approach is presented for defining both the coarse-grid variables and the coarse-grid equations corresponding to these variables. Although these two tasks are not necessarily related (and, indeed, are often performed independently and with distinct techniques), in the approach of [1] both revolve around the same underlying observation. To determine whether a given set of coarse-grid variables is appropriate, it is suggested that one should employ compatible relaxation. This is a generalization of so-called F-relaxation (e.g., [2]). Suppose that the coarse-grid variables are defined as a subset of the fine-grid variables. Then F-relaxation simply means relaxing only the F-variables (i.e., fine-grid variables that do not correspond to coarse-grid variables), while leaving the remaining fine-grid variables (C-variables) unchanged. The generalization in compatible relaxation is in allowing the coarse-grid variables to be defined differently, say as linear combinations of fine-grid variables, or even nondeterministically (see examples in [1]). For the present summary it suffices to consider the simple case. The central observation regarding the set of coarse-grid variables is the following [1]: Observation 1 - A general measure for the quality of the set of coarse-grid variables is the convergence rate of compatible relaxation. The conclusion is that a necessary condition for efficient multigrid solution (e.g., with convergence rates independent of problem size) is that the compatible-relaxation convergence be bounded away from 1, independently of the number of variables. This is often a sufficient condition, provided that the coarse-grid equations are sufficiently accurate. Therefore, it is suggested in [1] that the convergence rate of compatible relaxation should be used as a criterion for choosing and evaluating the set of coarse
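The criterion can be made concrete with a toy 1-D Poisson problem (an illustrative sketch, not code from [1]): with standard coarsening at every second point, F-relaxation converges in one sweep; with more aggressive coarsening, the compatible-relaxation rate degrades:

```python
# Sketch of compatible (F-)relaxation for the 1-D Poisson stencil
# [-1, 2, -1]: hold C-points (multiples of `stride`) at zero, run
# Gauss-Seidel on the F-points only, and estimate the error-contraction
# rate.  A fast rate signals a good set of coarse-grid variables.

import random

def cr_rate(n=65, stride=2, sweeps=30):
    random.seed(1)
    # random initial error; C-points fixed at zero throughout
    e = [0.0 if i % stride == 0 else random.uniform(-1, 1) for i in range(n)]
    prev = max(abs(v) for v in e)
    rate = 0.0
    for _ in range(sweeps):
        for i in range(1, n - 1):
            if i % stride != 0:                  # relax F-points only
                e[i] = (e[i - 1] + e[i + 1]) / 2.0
        cur = max(abs(v) for v in e)
        rate = cur / prev if prev > 0 else 0.0
        prev = cur
        if cur == 0.0:
            break
    return rate

print(cr_rate(stride=2))   # standard coarsening: exact after one sweep
print(cr_rate(stride=4))   # aggressive coarsening: rate ~0.5 per sweep
```

With stride 2, every F-point is flanked by C-points, so one sweep annihilates the error; with stride 4, F-points couple to each other and compatible relaxation contracts only geometrically.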
Level density in the complex scaling method
Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki
2005-01-01
It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green's functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM.
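In standard notation (not quoted verbatim from the paper), the continuum level density and its link to the scattering phase shift that the abstract mentions can be written:

```latex
% Continuum level density as the difference of full and free Green's
% functions, and its relation to the scattering phase shift:
\Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}
  \left[\frac{1}{E - H + i\varepsilon} - \frac{1}{E - H_0 + i\varepsilon}\right],
\qquad
\Delta(E) \;=\; \frac{1}{\pi}\,\frac{d\delta(E)}{dE}.
```

The CSM replaces the continuum trace by a sum over discretized complex-scaled solutions, which is why the phase shift can be recovered from the discretized CLD by integration.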
Methods for Large-Scale Nonlinear Optimization.
1980-05-01
STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...a typical iteration can be partitioned so that ... where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library.
Sorokin, Vladislav; Thomsen, Jon Juel
2015-01-01
Parametrically excited systems appear in many fields of science and technology, intrinsically or imposed purposefully; e.g. spatially periodic structures represent an important class of such systems [4]. When the parametric excitation can be considered weak, classical asymptotic methods like the method of averaging [2] or multiple scales [6] can be applied. However, with many practically important applications this simplification is inadequate, e.g. with spatially periodic structures it restricts the possibility to affect their effective dynamic properties by a structural parameter modulation of considerable magnitude. Approximate methods based on Floquet theory [4] for analyzing problems involving parametric excitation, e.g. the classical Hill's method of infinite determinants [3,4], can be employed also in cases of strong excitation; however, with Floquet theory being applicable only for linear...
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
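A minimal sketch of the reweighting idea behind temperature scaling (an illustrative harmonic toy model; the paper applies the idea to full Markov chains from water-cluster simulations): configurations sampled at T1 are reused to estimate an average at T2 with Boltzmann weights exp(-E·(1/T2 - 1/T1)):

```python
# Sketch: Metropolis sampling at T1, then temperature scaling of the
# chain to T2 by reweighting.  Potential E(x) = x^2/2 with k_B = 1,
# for which <x^2> at temperature T is exactly T.

import math, random

def metropolis(T, nsteps=200000, step=1.0):
    random.seed(42)
    x, samples = 0.0, []
    e = 0.5 * x * x
    for _ in range(nsteps):
        xn = x + random.uniform(-step, step)
        en = 0.5 * xn * xn
        if en <= e or random.random() < math.exp(-(en - e) / T):
            x, e = xn, en
        samples.append(x)
    return samples

T1, T2 = 1.0, 0.8
xs = metropolis(T1)                       # one chain, sampled at T1 only
ws = [math.exp(-0.5 * x * x * (1.0 / T2 - 1.0 / T1)) for x in xs]
x2_T2 = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)
print(x2_T2)                              # should land close to T2 = 0.8
```

The reweighting is cheap but degrades when T2 is far from T1 (poor weight overlap), which is why the paper quantifies the quality of scaled chains against direct simulations.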
An implementation analysis of the linear discontinuous finite element method
Becker, T. L.
2013-01-01
This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low- and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any
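The two direct 4×4 solvers compared above can be sketched as follows (an illustrative, diagonally dominant matrix; LAPACK's dgesv is omitted since it requires an external library):

```python
# Sketch: Cramer's rule via explicit determinants versus Gaussian
# elimination with partial pivoting, for a fixed 4x4 system Ax = b.

def det4(m):
    # Laplace expansion along the first row (acceptable for fixed 4x4 work)
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    total = 0.0
    for j in range(4):
        minor = [[m[i][k] for k in range(4) if k != j] for i in range(1, 4)]
        total += ((-1) ** j) * m[0][j] * det3(minor)
    return total

def cramer4(A, b):
    d = det4(A)
    xs = []
    for j in range(4):
        Aj = [row[:] for row in A]
        for i in range(4):
            Aj[i][j] = b[i]          # replace column j with b
        xs.append(det4(Aj) / d)
    return xs

def gauss4(A, b):
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]            # partial pivoting
        for r in range(col + 1, 4):
            f = M[r][col] / M[col][col]
            for k in range(col, 5):
                M[r][k] -= f * M[col][k]
    x = [0.0] * 4
    for i in range(3, -1, -1):                     # back substitution
        x[i] = (M[i][4] - sum(M[i][j] * x[j] for j in range(i + 1, 4))) / M[i][i]
    return x

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0, 4.0]
x_cramer, x_gauss = cramer4(A, b), gauss4(A, b)
print(x_cramer)
print(x_gauss)
```

For a fixed 4×4 size both cost a few dozen flops with no loop overhead worth amortizing, which is consistent with the paper's finding that the two perform nearly equivalently.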
Proposal of Realization Restricted Quantum Game with Linear Optic Method
Zhao Haijun; Fang Ximing
2006-01-01
We present a quantum game with a restricted strategic space and its realization with a linear optical system, which can be played by two players who are separated remotely. This game can also be realized on any other quantum computer. We find that the constraint brings some interesting properties that are useful for constructing game models.
1977-12-01
The M.C.H./M.E.N.T.3 document is concerned with sub-assemblies intended for measuring, on a linear scale, the neutron fluence rate or radiation dose rate when connected to nuclear detectors operating in current mode. The symbols used are described, and some definitions and a bibliography are given. The main characteristics of direct-current linear measurement sub-assemblies are then described, together with the corresponding test methods. This type of instrument indicates on a linear scale the level of a direct current applied to its input. The document reviews linear sub-assemblies for general-purpose applications, difference amplifiers for monitoring, and averaging amplifiers. It is intended for electronics manufacturers, designers, persons participating in acceptance trials, and plant operators.
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed-phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed-phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech
劉, 麗清
2015-01-01
Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well-suited for speech analysis due to its ability to model the speech production process approximately. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonance frequencies, and the fundamental frequency, called pitch) and so on. However, the performance of the co...
A general method for enclosing solutions of interval linear equations
Rohn, Jiří
2012-01-01
Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...
Chen Qi
2013-07-01
Non-linear chirp scaling (NLCS) is a feasible method to deal with the time-variant frequency modulation (FM) rate problem in synthetic aperture radar (SAR) imaging. However, approximations in the derivation of the NLCS spectrum lead to performance decline in some cases. Presented is the exact spectrum of the NLCS function. A simulation with a geosynchronous synthetic aperture radar (GEO-SAR) configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and that the NLCS algorithm is suitable for GEO-SAR imaging after modification.
Second derivative continuous linear multistep methods for the ...
step methods (LMM), with properties that embed the characteristics of LMM and hybrid methods. This paper gives a continuous reformulation of the Enright [5] second derivative methods. The motivation lies in the fact that the new formulation ...
Rapakoulia, Trisevgeni
2017-08-09
Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single-molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome-wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome-wide way, which may minimize the need for exhaustive combinatorial screens.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-03-27
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed-integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
Leapfrog variants of iterative methods for linear algebra equations
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
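The two-step composition behind the leapfrog idea can be sketched in a few lines. The sketch below is not Saylor's implementation; the matrix, right-hand side and parameter choice are illustrative. It composes two conventional Richardson updates x ← x + ω(b − Ax) into a single update that produces only the even-numbered iterates:

```python
import numpy as np

# Illustrative SPD test system; omega is chosen inside the
# convergence interval (0, 2/lambda_max).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
omega = 1.0 / np.linalg.eigvalsh(A).max()

def richardson(x, n):
    """Conventional Richardson iteration: x <- x + omega*(b - A x)."""
    for _ in range(n):
        x = x + omega * (b - A @ x)
    return x

def leapfrog(x, n_pairs):
    """Leapfrog variant: one update per pair of Richardson steps.
    Composing x_{k+1} = x_k + omega*r_k twice gives
    x_{k+2} = x_k + omega*(2*r_k - omega*A @ r_k), so only even
    iterates are ever formed."""
    for _ in range(n_pairs):
        r = b - A @ x
        x = x + omega * (2 * r - omega * (A @ r))
    return x

x0 = np.zeros(5)
x_even = richardson(x0.copy(), 10)   # conventional iterate number 10
x_leap = leapfrog(x0.copy(), 5)      # 5 leapfrog steps = iterate 10
```

Up to rounding, the five leapfrog steps reproduce the tenth conventional iterate while touching only even-numbered iterates.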
Linear and Nonlinear Optical Properties of Micrometer-Scale Gold Nanoplates
Liu Xiao-Lan; Peng Xiao-Niu; Yang Zhong-Jian; Li Min; Zhou Li
2011-01-01
Micrometer-scale gold nanoplates have been synthesized in high yield through a polyol process. The morphology, crystal structure and linear optical extinction of the gold nanoplates have been characterized. These gold nanoplates are single-crystalline with triangular, truncated triangular and hexagonal shapes, exhibiting strong surface plasmon resonance (SPR) extinction in the visible and near-infrared (NIR) region. The linear optical properties of gold nanoplates are also investigated by theoretical calculations. We further investigate the nonlinear optical properties of the gold nanoplates in solution by the Z-scan technique. The nonlinear absorption (NLA) coefficient and nonlinear refraction (NLR) index are measured to be 1.18×10² cm/GW and −1.04×10⁻³ cm²/GW, respectively. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
BOX-COX REGRESSION METHOD IN TIME SCALING
ATİLLA GÖKTAŞ
2013-06-01
The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and error term of a linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time scale concept.
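As a concrete illustration of the transformation step the abstract builds on (the data below are synthetic, not from the study), SciPy's maximum-likelihood Box-Cox fit picks the power λ that brings a positive, right-skewed response as close to normal as possible:

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed, strictly positive response.
rng = np.random.default_rng(42)
y = rng.lognormal(mean=0.0, sigma=0.8, size=2000)

# boxcox with lmbda=None estimates lambda by maximizing the
# Box-Cox log-likelihood and returns the transformed data.
y_bc, lam = stats.boxcox(y)

skew_before = stats.skew(y)    # strongly positive for lognormal data
skew_after = stats.skew(y_bc)  # near zero after the transformation
```

For lognormal data the fitted λ is close to 0, i.e. the transformation is essentially a log, which is exactly the normalizing power.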
Non-linear optics of nano-scale pentacene thin film
Yahia, I. S.; Alfaify, S.; Jilani, Asim; Abdel-wahab, M. Sh.; Al-Ghamdi, Attieh A.; Abutalib, M. M.; Al-Bassam, A.; El-Naggar, A. M.
2016-07-01
We have found new ways to investigate the linear/non-linear optical properties of a nanostructured pentacene thin film deposited by the thermal evaporation technique. Pentacene is a key material in organic semiconductor technology. The nanostructured nature of the thin film was confirmed by atomic force microscopy and X-ray diffraction. The wavelength-dependent transmittance and reflectance were calculated to observe the optical behavior of the pentacene thin film. Anomalous dispersion was observed at wavelengths around λ = 800 nm. The non-linear refractive index of the deposited films was investigated. The linear optical susceptibility of the pentacene thin film was calculated, and the non-linear optical susceptibility was observed to be about 6 × 10⁻¹³ esu. The advantage of this work is the use of a spectroscopic method to calculate the linear and non-linear optical response of pentacene thin films, rather than the expensive Z-scan technique. The calculated optical behavior of the pentacene thin films could be used in organic thin-film-based advanced optoelectronic devices such as telecommunications devices.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data
Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution
Wang, L.; Cardenas, M. B.
2017-12-01
Fractures change dynamically over geological time scales due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to the expanding fracture, which leads to positively coupled flow and reactive transport processes, i.e., as a fracture expands, so do its permeability (k) and thus the flow and reactive transport processes. To unravel this coupling, we consider a self-enhancing process that leads to fracture expansion caused by acidic fluid, i.e., CO2-saturated brine dissolving a calcite fracture. We rigorously derive a theory, for the first time, showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation that solves the Navier-Stokes and advection-diffusion equations with a moving mesh according to the dynamic dissolution process in two-dimensional (2D) fractures. We find that k increases slowly at first until the dissolution front breaks through the outlet, when we observe a rapid increase in k, i.e., the linear time-dependence of k occurs. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers for homogeneous and heterogeneous 2D fractures. Moreover, the theoretical linear scaling relationship between k and time matches well with experimental observations of dissolution in three-dimensional (3D) fractures. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation lengths of the aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of a wide variety of dissolution patterns such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds. Our theory sheds light on predicting permeability evolution in many geological settings when the self
A critical oscillation constant as a variable of time scales for half-linear dynamic equations
Řehák, Pavel
2010-01-01
Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7
Minimization of Linear Functionals Defined on| Solutions of Large-Scale Discrete Ill-Posed Problems
Elden, Lars; Hansen, Per Christian; Rojas, Marielba
2003-01-01
The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...
One testing method of dynamic linearity of an accelerometer
Lei Jing-Yu
2015-01-01
To effectively test the dynamic linearity of an accelerometer over a wide range of 10⁴ g to about 20 × 10⁴ g, one published patented technology is first experimentally verified and analysed, and its deficiencies are presented; then, based on stress wave propagation theory for a long thin bar, the relation between the strain signal and the corresponding acceleration signal is obtained, and a special link of two coaxial projectiles is developed. Two coaxial metal cylinders (an inner cylinder and a circular tube) are used as projectiles; to prevent their mutual slip inside the gun barrel during movement, one end of the two projectiles is fastened by small screws. A Ti-6Al-4V bar with a diameter of 30 mm is used to propagate the loading stress pulse. The resultant compression wave can be measured by the strain gauges on the bar, and a half-sine strain pulse is obtained. The measuring accelerometer is attached to the other end of the bar by a vacuum clamp. In this clamp, the accelerometer bears only the compression wave; the reflected tension pulse detaches the accelerometer from the bar. Using this system, the dynamic linearity of an accelerometer can easily be tested over a wider range of acceleration values, and actual measurement results are presented.
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
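A minimal sketch of how such a preconditioner is used in practice. Note the assumption: SciPy's incomplete LU stands in here for the paper's factorized sparse approximate inverse, since both are applied to a Krylov solver in the same way, and the unsymmetric tridiagonal test system is purely illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative unsymmetric sparse system Ax = b.
n = 200
main = 4.0 * np.ones(n)
lower = -1.0 * np.ones(n - 1)
upper = -2.0 * np.ones(n - 1)          # unsymmetric off-diagonals
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

# Approximate factorization of A; wrapping its solve as a
# LinearOperator gives GMRES an operator M ~ A^{-1}.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)        # preconditioned Krylov solve
```

With a good approximate inverse the preconditioned operator MA is close to the identity, so GMRES converges in very few iterations.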
A simple finite element method for linear hyperbolic problems
Mu, Lin; Ye, Xiu
2017-01-01
Here, we introduce a simple finite element method for solving first-order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shapes of polygons/polyhedra. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.
Xiaocui Wu
2015-02-01
The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
Machine learning-based methods for prediction of linear B-cell epitopes.
Wang, Hsin-Wei; Pai, Tun-Wen
2014-01-01
B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still not satisfactory. Fortunately, due to increasingly available verified epitope databases, bioinformaticians could adopt machine learning-based algorithms on all curated data to design an improved prediction tool for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noticed that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulated a general way for constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
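The SVM pipeline the abstract describes can be sketched as follows. Everything here is a toy stand-in: the ten "residue features" and the label rule are synthetic, not a real propensity scale or curated epitope data, so only the shape of the workflow (scale features, fit an RBF SVM, score held-out data) carries over:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic data: 10 hypothetical residue-level features and a
# made-up "epitope" label driven by the first two features.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM classifier.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

acc = clf.score(scaler.transform(X_test), y_test)
```

A real system would replace the random features with propensity-scale and amino acid composition statistics computed from curated epitope databases.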
Cosmological large-scale structures beyond linear theory in modified gravity
Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)
2011-06-01
We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.
Scaling versus asymptotic scaling in the non-linear σ-model in 2D. Continuum version
Flyvbjerg, H.
1990-01-01
The two-point function of the O(N)-symmetric non-linear σ-model in two dimensions is large-N expanded and renormalized, neglecting terms of O(1/N²). At finite cut-off, universal, analytical expressions relate the magnetic susceptibility and the dressed mass to the bare coupling. Removing the cut-off, a similar relation gives the renormalized coupling as a function of the mass gap. In the weak-coupling limit these relations reproduce the results of renormalization-group-improved weak-coupling perturbation theory to two-loop order. The constant left unknown when the renormalization group is integrated is determined here. The approach to asymptotic scaling is studied for various values of N.
Linear and kernel methods for multi- and hypervariate change detection
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to the no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into a higher-dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
Linear and kernel methods for multivariate change detection
Canty, Morton J.; Nielsen, Allan Aasbjerg
2012-01-01
Maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to the no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...
Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca
1999-01-01
Full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. Previous comparisons between...
On some properties of the block linear multi-step methods | Chollom ...
The convergence, stability and order of block linear multistep methods have been determined in the past based on individual members of the block. In this paper, methods are proposed to examine the properties of the entire block. Some block linear multistep methods have been considered; their convergence, stability and ...
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
Krylov subspace methods for solving large unsymmetric linear systems
Saad, Y.
1981-01-01
Some algorithms based upon a projection process onto the Krylov subspace K_m = span(r_0, A r_0, ..., A^{m-1} r_0) are developed, generalizing the method of conjugate gradients to unsymmetric systems. These methods are extensions of Arnoldi's algorithm for solving eigenvalue problems. The convergence is analyzed in terms of the distance of the solution to the subspace K_m and some error bounds are established showing, in particular, a similarity with the conjugate gradient method (for symmetric matrices) when the eigenvalues are real. Several numerical experiments are described and discussed.
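The projection process onto K_m is the Arnoldi iteration. A minimal NumPy sketch (the random unsymmetric matrix and starting residual are illustrative) builds the orthonormal Krylov basis V and the Hessenberg matrix H satisfying A V_m = V_{m+1} H:

```python
import numpy as np

def arnoldi(A, r0, m):
    """Build an orthonormal basis V of the Krylov subspace
    K_m = span(r0, A r0, ..., A^{m-1} r0) and the (m+1) x m
    Hessenberg matrix H with A @ V[:, :m] == V @ H."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)      # breakdown if this is 0
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))              # unsymmetric matrix
r0 = rng.standard_normal(8)
V, H = arnoldi(A, r0, 4)
```

Methods such as GMRES then solve a small least-squares problem with H in place of the large system with A.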
Large-scale dynamo action due to α fluctuations in a linear shear flow
Sridhar, S.; Singh, Nishant K.
2014-12-01
We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model, to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τα. The white-noise case, τα = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τα is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τα gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Oluwaseun Egbelowo
2017-05-01
We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
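A minimal sketch of the NSFD idea on the simplest linear PK building block: one-compartment elimination dC/dt = −kC after an I.V. bolus. The rate constant and step size below are illustrative, and this is not the paper's full PK/PD system. Replacing the step size h in the denominator by φ(h) = (1 − e^(−kh))/k makes the scheme exact for any h, while forward Euler with the same large step is not:

```python
import math

k = 0.3     # elimination rate constant (1/h), illustrative value
h = 2.0     # deliberately large step size
C0 = 10.0   # initial concentration after the bolus

# Nonstandard denominator function phi(h).
phi = (1.0 - math.exp(-k * h)) / k

def nsfd_step(c):
    """(C_{n+1} - C_n)/phi = -k*C_n  ->  C_{n+1} = C_n*(1 - k*phi),
    and 1 - k*phi = exp(-k*h), so each step is exact."""
    return c * (1.0 - k * phi)

def euler_step(c):
    """Standard forward Euler with the same large step, for contrast."""
    return c * (1.0 - k * h)

c_nsfd, c_euler = C0, C0
for _ in range(5):
    c_nsfd = nsfd_step(c_nsfd)
    c_euler = euler_step(c_euler)

c_exact = C0 * math.exp(-k * h * 5)   # analytic solution at t = 5h
```

The dynamic consistency claimed in the abstract shows up here as the NSFD iterate tracking the analytic decay exactly regardless of step size.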
The Front-End Readout as an Encoder IC for Magneto-Resistive Linear Scale Sensors
Trong-Hieu Tran
2016-09-01
This study proposes a front-end readout circuit as an encoder chip for magneto-resistance (MR) linear scales. A typical MR sensor consists of two major parts: one is its base structure, also called the magnetic scale, which is embedded with multiple grid MR electrodes, while the other is an "MR reader" stage with magnets inside, moving on the rails of the base. As the stage is in motion, the magnetic interaction between the moving stage and the base causes variation of the magneto-resistances of the grid electrodes. In this study, a front-end readout IC chip is successfully designed and realized to acquire the temporally varying resistances as electrical signals while the stage is in motion. The acquired signals are in fact sinusoids and co-sinusoids, which are further deciphered by the front-end readout circuit via newly designed programmable gain amplifiers (PGAs) and analog-to-digital converters (ADCs). The PGA is particularly designed to amplify the signals up to full dynamic range and up to 1 MHz. A 12-bit successive approximation register (SAR) ADC for analog-to-digital conversion is designed with linearity performance of ±1 least significant bit (LSB) over the input range of 0.5–2.5 V peak to peak. The chip was fabricated by the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35-micron complementary metal oxide semiconductor (CMOS) technology for verification, with a chip size of 6.61 mm², while the power consumption is 56 mW from a 5-V power supply. The measured integral non-linearity (INL) is −0.79–0.95 LSB while the differential non-linearity (DNL) is −0.68–0.72 LSB. The effective number of bits (ENOB) of the designed ADC is validated as 10.86 for converting the input analog signal to its digital counterpart. Experimental validation was conducted. A digital decoder is orchestrated to decipher the harmonic outputs from the ADC via interpolation to the position of the moving stage. It was found that the displacement
Interpolation from Grid Lines: Linear, Transfinite and Weighted Method
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2017-01-01
When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
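One transfinite variant of the interpolation described above can be sketched for a single grid cell as a Coons patch: values known along the four bounding line scans are blended so that every boundary line is reproduced exactly. The test intensity g below is illustrative, and this is a sketch of the general technique rather than the paper's exact formulation:

```python
import numpy as np

def coons(f_bottom, f_top, f_left, f_right, corners, x, y):
    """Transfinite (Coons) interpolation at (x, y) in [0,1]^2 from the
    four boundary functions of a grid cell.
    corners = (f00, f10, f01, f11)."""
    f00, f10, f01, f11 = corners
    ruled_x = (1 - y) * f_bottom(x) + y * f_top(x)
    ruled_y = (1 - x) * f_left(y) + x * f_right(y)
    bilinear = ((1 - x) * (1 - y) * f00 + x * (1 - y) * f10
                + (1 - x) * y * f01 + x * y * f11)
    # Blend the two ruled surfaces, subtracting the doubly counted
    # bilinear corner contribution.
    return ruled_x + ruled_y - bilinear

# Smooth test intensity whose values along the grid lines are "known".
g = lambda x, y: np.sin(x) + np.cos(y)
boundary = (lambda x: g(x, 0.0), lambda x: g(x, 1.0),
            lambda y: g(0.0, y), lambda y: g(1.0, y))
corners = (g(0, 0), g(1, 0), g(0, 1), g(1, 1))

val_boundary = coons(*boundary, corners, 0.3, 0.0)  # on a grid line
val_interior = coons(*boundary, corners, 0.4, 0.7)  # inside the cell
```

Boundary points are reproduced exactly by construction; the interior value also matches g here only because this particular g is additively separable.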
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
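The core idea of Huffman scanning can be sketched without any AAC hardware: given letter probabilities from a language model (the values below are made up), Huffman codes turn each symbol into a short yes/no scan path whose expected length beats a fixed-length, row/column-style assignment:

```python
import heapq
import itertools
import math

# Hypothetical letter probabilities from a language model.
probs = {"e": 0.40, "t": 0.25, "a": 0.15, "o": 0.12, "_": 0.08}

def huffman_codes(p):
    """Return {symbol: bitstring} built from symbol probabilities
    via the standard merge-two-lightest-subtrees construction."""
    counter = itertools.count()            # tie-breaker for the heap
    heap = [(w, next(counter), {s: ""}) for s, w in p.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(counter), merged))
    return heap[0][2]

codes = huffman_codes(probs)

# Expected number of switch activations per selection.
expected_scans = sum(probs[s] * len(codes[s]) for s in probs)
fixed_length = math.ceil(math.log2(len(probs)))   # uniform-code baseline
```

The most probable symbol gets a one-step scan path, and the expected scan count falls below the fixed-length baseline, which is the speedup mechanism the study measures.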
Construction of extended exponential general linear methods 524 ...
This paper introduces a new approach for constructing higher-order extended exponential general linear methods (EEGLM), which have become very popular due to their enviable stability properties. The paper also shows that method 524 is stable, with its characteristic roots lying in the unit circle. Numerical experiments indicate that Extended Exponential ...
A fast method for linear waves based on geometrical optics
Stolk, C.C.
2009-01-01
We develop a fast method for solving the one-dimensional wave equation based on geometrical optics. From geometrical optics (e.g., Fourier integral operator theory or WKB approximation) it is known that high-frequency waves split into forward and backward propagating parts, each propagating with the
Numerical method for solving linear Fredholm fuzzy integral equations of the second kind
Abbasbandy, S. [Department of Mathematics, Imam Khomeini International University, P.O. Box 288, Ghazvin 34194 (Iran, Islamic Republic of)]. E-mail: saeid@abbasbandy.com; Babolian, E. [Faculty of Mathematical Sciences and Computer Engineering, Teacher Training University, Tehran 15618 (Iran, Islamic Republic of); Alavi, M. [Department of Mathematics, Arak Branch, Islamic Azad University, Arak 38135 (Iran, Islamic Republic of)
2007-01-15
In this paper we use the parametric form of a fuzzy number and convert a linear fuzzy Fredholm integral equation into a system of two crisp linear integral equations of the second kind. A numerical method such as the Nyström method can then be used to find an approximate solution of the system, and hence an approximation to the fuzzy solution of the linear fuzzy Fredholm integral equation of the second kind. The proposed method is illustrated by solving some numerical examples.
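The crisp Nyström step can be sketched as follows: the integral is replaced by a quadrature rule, turning the equation u(x) = f(x) + λ∫K(x,t)u(t)dt into the linear system (I − λKW)u = f at the quadrature nodes. The kernel, λ, and grid size below are illustrative choices, not those of the paper.

```python
import numpy as np

def nystrom_fredholm2(kernel, f, lam, a, b, n):
    """Solve u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt by the Nystrom
    method with trapezoidal quadrature on n equispaced nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))       # trapezoidal weights
    w[0] = w[-1] = 0.5 * (b - a) / (n - 1)
    K = kernel(x[:, None], x[None, :])      # K(x_i, t_j)
    A = np.eye(n) - lam * K * w             # (I - lam * K * diag(w)) u = f(x)
    return x, np.linalg.solve(A, f(x))

# Example with known exact solution u(x) = x: for K(x,t) = x*t, lam = 1,
# choosing f(x) = 2x/3 makes u(x) = x the solution on [0, 1].
x, u = nystrom_fredholm2(lambda x, t: x * t, lambda x: 2.0 * x / 3.0,
                         1.0, 0.0, 1.0, 101)
print(float(np.max(np.abs(u - x))))  # small discretization error
```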
The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.
Narayanamoorthy, S; Kalyani, S
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, from which the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
Linear electron accelerator body and method of its manufacture
Landa, V.; Maresova, V.; Lucek, J.; Prusa, F.
1988-01-01
The accelerator body consists of a hollow casing made of a metal of high electric conductivity. The inside is partitioned with a system of resonators. The resonator body is made of one piece of the same metal as the casing or a related one (e.g., copper-copper, silver-copper, copper-copper alloy). The accelerator body is manufactured using a cathodic process on the periphery of a system of metal partitions and negative models of resonator cavities fitted to a metal pin. The pin is then removed from the system and the soluble models of the cavities are dissolved in a solvent. The advantage of the design and the method of manufacture is that the result is a compact, perfectly tight body with a perfectly lustrous surface. The casing wall can be very thin, which improves accelerator performance. The claimed method can also be used in manufacturing miniature accelerators. (E.J.). 1 fig
Non-linear methods for the quantification of cyclic motion
Quintana Duque, Juan Carlos
2016-01-01
Traditional methods of human motion analysis assume that fluctuations in cycles (e.g. gait motion) and repetitions (e.g. tennis shots) arise solely from noise. However, the fluctuations may have enough information to describe the properties of motion. Recently, the fluctuations in motion have been analysed based on the concepts of variability and stability, but they are not used uniformly. On the one hand, these concepts are often mixed in the existing literature, while on the other hand, the...
Linear facility location in three dimensions - Models and solution methods
Brimberg, Jack; Juel, Henrik; Schöbel, Anita
2002-01-01
We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the facility represented by the line (segment) to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given.
Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F
2016-08-01
Changes in left ventricle function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal, and were classified according to the left ventricular ejection fraction (LVEF) into low risk (LR: LVEF>35%, 17 patients) and high risk (HR: LVEF≤35%, 32 patients) groups. We propose to characterize these patients using a linear and a nonlinear method, based on spectral estimation and the recurrence plot (RP), respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Then, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.
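A minimal sketch of the recurrence-plot construction underlying such nonlinear features: two time points recur when their states are within a threshold ε of each other. The threshold and toy series below are invented for illustration; the diagonal-structure statistics used in the paper (recurrence quantification analysis) would be computed from this matrix.

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j are
    within eps of each other (scalar time series, Euclidean distance)."""
    d = np.abs(series[:, None] - series[None, :])  # pairwise distances
    return (d <= eps).astype(int)

# Toy series alternating between two levels, as a stand-in for a
# beat-to-beat parameter such as pulse pressure.
R = recurrence_plot(np.array([0.0, 1.0, 0.1, 1.1]), eps=0.2)
print(R)  # symmetric 0/1 matrix with ones on the diagonal
```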
Fundamental solution of the problem of linear programming and method of its determination
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
Linear feature selection in texture analysis - A PLS based method
Marques, Joselene; Igel, Christian; Lillholm, Martin
2013-01-01
We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature... Considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.
Linear velocity fields in non-Gaussian models for large-scale structure
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Love, J.C.; Demas, J.N.
1983-01-01
The Foerster equation describes excited-state decay curves involving resonance intermolecular energy transfer. A linearized solution based on the phase-plane method has been developed. The new method is quick, insensitive to the fitting region, accurate, and precise
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Né meth, Adriá n
2016-01-01
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order
Mohammad Almousa
2013-01-01
The aim of this study is to present the use of a semi-analytical method called the optimal homotopy asymptotic method (OHAM) for solving linear Fredholm integral equations of the first kind. Three examples are discussed to show the ability of the method to solve linear Fredholm integral equations of the first kind. The results indicate that the method is very effective and simple.
Mejlbro, Leif
1997-01-01
An alternative formula for the solution of linear differential equations of order n is suggested. When applicable, the suggested method requires fewer and simpler computations than the well-known method using Wronskians.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved time reductions in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab, and is provided as supplementary information.
Liu Linqin
1991-01-01
The separation-combination method, a new kind of analysis method for linear structures in remote sensing image interpretation, is introduced, taking northwestern Fujian as an example, and its practical application is examined. The practice shows that the application results not only reflect the intensities of linear structures in all directions at different locations, but also contribute to the zonation of linear structures and display the laws of their spatial distribution. Combined with mineralization, analyses of linear structures can provide more remote sensing information for studies of regional mineralization laws and for guidance in ore-finding.
Kandel, Tanka P; Lærke, Poul Erik; Elsgaard, Lars
2016-01-01
One of the shortcomings of closed chamber methods for soil respiration (SR) measurements is the decreased CO2 diffusion rate from soil to chamber headspace that may occur due to increased chamber CO2 concentrations. This feedback on diffusion rate may lead to underestimation of pre-deployment fluxes. ... The chamber was placed on fixed collars, and the CO2 concentration in the chamber headspace was recorded at 1-s intervals for 45 min. Fluxes were measured in different soil types (sandy, sandy loam and organic soils), and for various manipulations (tillage, rain and drought) and soil conditions (temperature and moisture) to obtain a range of fluxes with different shapes of flux curves. The linear method provided more stable flux results during short enclosure times (a few min) but underestimated initial fluxes by 15–300% after 45 min deployment time. Non-linear models reduced the underestimation...
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear system Ax equals b with complex symmetric coefficient matrices A equals A(T). Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
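The "equivalent real system" approach mentioned at the end of the abstract can be sketched directly: for A = Ar + i·Ai, the complex system Ax = b becomes a real 2n×2n system in (Re x, Im x). The toy matrix below is an invented complex symmetric example; the paper's quasi-minimal residual iteration is not reproduced here.

```python
import numpy as np

def solve_via_real_form(A, b):
    """Solve the complex system A x = b through the equivalent real
    2n x 2n system:
        [ Ar  -Ai ] [Re x]   [Re b]
        [ Ai   Ar ] [Im x] = [Im b]
    """
    Ar, Ai = A.real, A.imag
    M = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    y = np.linalg.solve(M, rhs)
    n = A.shape[0]
    return y[:n] + 1j * y[n:]

A = np.array([[2 + 1j, 1j], [1j, 3 - 2j]])  # complex symmetric: A == A.T
b = np.array([1 + 0j, 2 + 2j])
x = solve_via_real_form(A, b)
print(np.allclose(A @ x, b))  # -> True
```

Note that this doubles the problem size and, as the abstract hints, discards the complex symmetry that the specialized Lanczos-based iterations exploit.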
Solution of systems of linear algebraic equations by the method of summation of divergent series
Kirichenko, G.A.; Korovin, Ya.S.; Khisamutdinov, M.V.; Shmojlov, V.I.
2015-01-01
A method for solving systems of linear algebraic equations is proposed on the basis of the summation of the corresponding continued fractions. The proposed algorithm is classified as a direct algorithm, providing an exact solution in a finite number of operations. Examples of solving systems of linear algebraic equations are presented and the effectiveness of the algorithm is estimated.
Yin haihua; Yao Zhigang
2014-01-01
This article describes methods for assessing the environmental impact of the radiation generated by a running medical linear accelerator. The material and thickness of the shielding walls and protective doors of the linear accelerator were known; therefore, the radiation from the running medical linear accelerator can be evaluated against the normal range of the national standard by calculating the annual effective radiation dose received by the surrounding personnel. (authors)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
Chosen interval methods for solving linear interval systems with special type of matrix
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix (a band matrix with a parameter) is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no method error. All calculations were performed in floating-point interval arithmetic.
A Linear Birefringence Measurement Method for an Optical Fiber Current Sensor.
Xu, Shaoyi; Shao, Haiming; Li, Chuansheng; Xing, Fangfang; Wang, Yuqiao; Li, Wei
2017-07-03
In this work, a linear birefringence measurement method is proposed for an optical fiber current sensor (OFCS). First, the optical configuration of the measurement system is presented. Then, the method for eliminating the effect of the azimuth angles between the sensing fiber and the two polarizers is demonstrated. Moreover, the relationship between the linear birefringence, the Faraday rotation angle and the final output is determined. On these bases, the multi-valued problem in the linear birefringence is simulated and its solution is illustrated for the case where the linear birefringence is unknown. Finally, experiments are conducted to prove the feasibility of the proposed method. When the number of turns of the sensing fiber in the OFCS is about 15, 19, 23, 27, 31, 35, and 39, the measured linear birefringence values obtained by the proposed method are about 1.3577, 1.8425, 2.0983, 2.5914, 2.7891, 3.2003 and 3.5198 rad, respectively. Two typical methods provide references for the proposed method. The proposed method is proven to be suitable for linear birefringence measurement over the full range, without the limitation that the linear birefringence must be smaller than π/2.
Refat Aljumily
2015-09-01
A few literary scholars have long claimed that Shakespeare did not write some of his best plays (history plays and tragedies) and have proposed, at one time or another, various suspect authorship candidates. Most modern-day scholars of Shakespeare have rejected this claim, arguing that strong evidence that Shakespeare wrote the plays and poems is that his name appears on them as the author. This has led to an ongoing scholarly academic debate for quite some time. Stylometry is a fast-growing field often used to attribute authorship to anonymous or disputed texts. Stylometric attempts to resolve this literary puzzle have raised interesting questions over the past few years. The following paper contributes to "the Shakespeare authorship question" by using a mathematically based methodology to examine the hypothesis that Shakespeare wrote all the disputed plays traditionally attributed to him. More specifically, the methodology used here is based on Mean Proximity, as a linear hierarchical clustering method, and on Principal Components Analysis, as a non-hierarchical linear clustering method. It is also based, for the first time in the domain, on Self-Organizing Map U-Matrix and Voronoi Map, as non-linear clustering methods, to cover the possibility that the data contains significant non-linearities. A Vector Space Model (VSM) is used to convert texts into vectors in a high-dimensional space. The aim is to compare the degrees of similarity within and between limited samples of text (the disputed plays), covering the various works and plays assumed to have been written by Shakespeare and by possible alternative authors, notably Sir Francis Bacon, Christopher Marlowe, John Fletcher, and Thomas Kyd, where "similarity" is defined in terms of a correlation/distance coefficient measure based on the frequency-of-usage profiles of function words, word bi-grams, and character triple-grams. The claim that Shakespeare authored all the disputed...
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
Libraries for spectrum identification: Method of normalized coordinates versus linear correlation
Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.
2008-01-01
In this work an easy solution based directly on linear algebra is proposed in order to obtain the relation between a spectrum and a spectrum base. This solution is based on the algebraic determination of an unknown spectrum's coordinates with respect to a spectral library base. A comparison of the identification capacity of this algebraic method and the linear correlation method is shown using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows quantitative detection of a mixture of several substances in a sample and, consequently, can bear impurities in mind to improve identification.
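The algebraic idea (coordinates of an unknown spectrum with respect to a library base) can be sketched with ordinary least squares. The normalization used in the paper is not reproduced, and the toy spectra are invented for illustration; the point is that a mixture shows up as several nonzero coordinates, which a single correlation coefficient cannot resolve.

```python
import numpy as np

def library_coordinates(spectrum, library):
    """Coordinates of an unknown spectrum in the span of a spectral
    library (columns of `library` are reference spectra), via least
    squares: library @ c ~= spectrum."""
    coeffs, *_ = np.linalg.lstsq(library, spectrum, rcond=None)
    return coeffs

# Two toy reference spectra (4 channels each) and a 30/70 mixture.
ref1 = np.array([1.0, 0.8, 0.0, 0.1])
ref2 = np.array([0.0, 0.2, 1.0, 0.9])
library = np.column_stack([ref1, ref2])
mixture = 0.3 * ref1 + 0.7 * ref2
coeffs = library_coordinates(mixture, library)
print(coeffs)  # approximately [0.3, 0.7]: both components detected
```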
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
Camporesi, Roberto
2011-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…
An Evaluation of Five Linear Equating Methods for the NEAT Design
Mroch, Andrew A.; Suh, Youngsuk; Kane, Michael T.; Ripkey, Douglas R.
2009-01-01
This study uses the results of two previous papers (Kane, Mroch, Suh, & Ripkey, this issue; Suh, Mroch, Kane, & Ripkey, this issue) and the literature on linear equating to evaluate five linear equating methods along several dimensions, including the plausibility of their assumptions and their levels of bias and root mean squared difference…
Genomic prediction based on data from three layer lines: a comparison between linear methods
Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.
2014-01-01
Background: The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods: Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we
An introduction to fuzzy linear programming problems theory, methods and applications
Kaur, Jagdeep
2016-01-01
The book presents a snapshot of the state of the art in the field of fully fuzzy linear programming. The main focus is on showing current methods for finding the fuzzy optimal solution of fully fuzzy linear programming problems in which all the parameters and decision variables are represented by non-negative fuzzy numbers. It presents new methods developed by the authors, as well as existing methods developed by others, and their application to real-world problems, including fuzzy transportation problems. Moreover, it compares the outcomes of the different methods and discusses their advantages/disadvantages. As the first work to collect at one place the most important methods for solving fuzzy linear programming problems, the book represents a useful reference guide for students and researchers, providing them with the necessary theoretical and practical knowledge to deal with linear programming problems under uncertainty.
Improvement of linear reactivity methods and application to long range fuel management
Woehlke, R.A.; Quan, B.L.
1982-01-01
The original development of the linear reactivity theory assumes flat burnup, batch by batch. The validity of this assumption is explored using multicycle burnup data generated with a detailed 3-D SIMULATE model. The results show that the linear reactivity method can be improved by correcting for batchwise power sharing. The application of linear reactivity to long range fuel management is demonstrated in several examples. Correcting for batchwise power sharing improves the accuracy of the analysis. However, with regard to the sensitivity of fuel cost to changes in various parameters, the corrected and uncorrected linear reactivity theories give remarkably similar results
On Extended Exponential General Linear Methods PSQ with S>Q ...
This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to other methods in the literature, consider methods with step order greater than the stage order (S>Q). Numerical experiments in this study indicate that Extended Exponential ...
Electric field control methods for foil coils in high-voltage linear actuators
Beek, van T.A.; Jansen, J.W.; Lomonova, E.A.
2015-01-01
This paper describes multiple electric field control methods for foil coils in high-voltage coreless linear actuators. The field control methods are evaluated using 2-D and 3-D boundary element methods. A comparison is presented between the field control methods and their ability to mitigate
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods
Mozartova, A.; Savostianov, I.; Hundsdorfer, W.
2015-01-01
One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.
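The relation between a linear multistep method and its one-leg twin can be illustrated with the explicit two-step Adams-Bashforth method: the multistep form combines two f-evaluations, while the one-leg form moves the same combination inside a single f-evaluation, so only past y-values need to be stored. This example is illustrative only; the paper's monotonicity and boundedness analysis is not reproduced here.

```python
def ab2_linear_multistep(f, y0, y1, t0, h, steps):
    """Two-step Adams-Bashforth (linear multistep):
    y_{n+2} = y_{n+1} + h*(3/2*f(t_{n+1}, y_{n+1}) - 1/2*f(t_n, y_n))."""
    ys = [y0, y1]
    for n in range(steps):
        tn = t0 + n * h
        ys.append(ys[-1] + h * (1.5 * f(tn + h, ys[-1]) - 0.5 * f(tn, ys[-2])))
    return ys

def ab2_one_leg(f, y0, y1, t0, h, steps):
    """One-leg twin of the same method:
    y_{n+2} = y_{n+1} + h*f(3/2*t_{n+1} - 1/2*t_n, 3/2*y_{n+1} - 1/2*y_n);
    a single f-evaluation of a combined argument, no stored f-values."""
    ys = [y0, y1]
    for n in range(steps):
        tn = t0 + n * h
        ys.append(ys[-1] + h * f(tn + 1.5 * h, 1.5 * ys[-1] - 0.5 * ys[-2]))
    return ys

# Nonlinear test problem y' = -y^2, y(0) = 1, exact solution 1/(1 + t);
# the two variants coincide for linear f but differ here.
f = lambda t, y: -y * y
h = 0.01
lm = ab2_linear_multistep(f, 1.0, 1.0 / (1.0 + h), 0.0, h, 99)
ol = ab2_one_leg(f, 1.0, 1.0 / (1.0 + h), 0.0, h, 99)
print(lm[-1], ol[-1])  # both close to the exact value 0.5 at t = 1
```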
Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis
Jeffrey, Alan
1971-01-01
The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)
A method for computing the stationary points of a function subject to linear equality constraints
Uko, U.L.
1989-09-01
We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
Wang Wansheng; Li Shoufu; Wang Wenqiang
2009-01-01
In this paper, we show that under identical conditions which guarantee the contractivity of the theoretical solutions of general nonlinear NDDEs, the numerical solutions obtained by a class of linear multistep methods are also contractive.
Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro
1995-01-01
The authors proposed a new method (Linearized method) to analyze neuroleptic ligand-receptor specific binding in a human brain using positron emission tomography (PET). They derived a linear equation to solve for the four rate constants k3, k4, k5, k6 from PET data. This method does not require the plasma radioactivity curve as an input function to the brain, and allows fast calculation of the rate constants. They also tested the Nonlinearized method, the conventional analysis involving nonlinear equations, which uses plasma radioactivity corrected for ligand metabolites as an input function. The authors applied these methods to evaluate dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, which was consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method.
Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2018-01-01
of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve … and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference … in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT …
Tutcuoglu, A.; Majidi, C.
2014-12-01
Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.
Mezhov, E.A.; Khananashvili, N.L.; Shmidt, V.S.
1988-01-01
A linear correlation has been established between the solubility of water in water-immiscible organic solvents and the interfacial tension at the water-solvent interface on the one hand and the parameters of the SE* and π* scales for these solvents on the other hand. This allows us, using the known tabulated SE* or π* parameters for each solvent, to predict the values of the interfacial tension and the solubility of water for the corresponding systems. We have shown that the SE* scale allows us to predict these values more accurately than other known solvent scales, since in contrast to other scales it characterizes solvents found in equilibrium with water
Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi
2017-10-11
We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
Ai-Min Yang
2014-01-01
The local fractional Laplace variational iteration method was applied to solve the linear local fractional partial differential equations. The local fractional Laplace variational iteration method is coupled by the local fractional variational iteration method and Laplace transform. The nondifferentiable approximate solutions are obtained and their graphs are also shown.
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. Comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than the rate of convergence of the SOR-type iterative method.
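As a minimal sketch of the methods being compared (a generic implementation, assuming nothing about the paper's preconditioners; the matrix and right-hand side are made up): a plain SOR sweep on a small diagonally dominant Z-matrix, where setting ω = 1 recovers Gauss-Seidel.

```python
def sor(A, b, omega, tol=1e-10, maxit=500):
    # plain SOR iteration; omega = 1 reduces to Gauss-Seidel
    n = len(b)
    x = [0.0] * n
    for k in range(1, maxit + 1):
        change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            xi = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            change = max(change, abs(xi - x[i]))
            x[i] = xi
        if change < tol:
            return x, k
    return x, maxit

# a small diagonally dominant Z-matrix (nonpositive off-diagonal entries)
A = [[4.0, -1.0, -1.0],
     [-1.0, 4.0, -1.0],
     [-1.0, -1.0, 4.0]]
b = [2.0, 2.0, 2.0]          # exact solution is (1, 1, 1)

x_gs, it_gs = sor(A, b, omega=1.0)
x_sor, it_sor = sor(A, b, omega=1.1)
print(it_gs, it_sor)
```

Comparing iteration counts for different ω on the same system is the simplest form of the comparison the theorems above formalize.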
A Fifth Order Hybrid Linear Multistep method For the Direct Solution ...
A linear multistep hybrid method (LMHM) with continuous coefficients is considered and directly applied to solve third order initial and boundary value problems (IBVPs). The continuous method is used to obtain Multiple Finite Difference Methods (MFDMs) (each of order 5) which are combined as simultaneous numerical ...
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
Development of pre-critical excore detector linear subchannel calibration method
Choi, Yoo Sun; Goo, Bon Seung; Cha, Kyun Ho; Lee, Chang Seop; Kim, Yong Hee; Ahn, Chul Soo; Kim, Man Soo
2001-01-01
The improved pre-critical excore detector linear subchannel calibration method has been developed to improve the applicability of the pre-critical calibration method. The existing calibration method does not always guarantee the accuracy of pre-critical calibration because the calibration results of the previous cycle are not reflected in the current cycle calibration. The developed method has the desirable feature that calibration error is not propagated into the following cycles, since the calibration data determined in the previous cycle are incorporated in the current cycle calibration. The pre-critical excore detector linear calibration is tested for YGN unit 3 and UCN unit 3 to evaluate its characteristics and accuracy
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment problems (SCUC) within the Lagrangian framework: one is to obtain feasible units' states (UC), the other is power economic dispatch (ED) for each unit. An accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC for fixed feasible units' states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method respectively, based on linear programming and a piecewise linear approximation to the nonlinear convex fuel cost functions, are proposed for solving ED. Numerical testing results show that the methods are effective and efficient.
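The piecewise-linear idea can be illustrated with a generic greedy dispatch (not the paper's two methods; the unit data below are invented): for convex piecewise-linear costs, demand is filled from the cheapest incremental segments first.

```python
def dispatch(units, demand):
    # units: list of (pmin, segments); each segment = (capacity, marginal_cost)
    out = [pmin for pmin, _ in units]      # every unit starts at its minimum
    remaining = demand - sum(out)
    segs = sorted(
        (cost, i, cap) for i, (_, ss) in enumerate(units) for cap, cost in ss
    )
    for cost, i, cap in segs:              # cheapest marginal cost first
        take = min(cap, remaining)
        out[i] += take
        remaining -= take
        if remaining <= 0:
            break
    return out

units = [(10.0, [(20.0, 2.0), (20.0, 5.0)]),   # (capacity, cost/MW) segments
         (5.0, [(15.0, 3.0), (15.0, 6.0)])]
plan = dispatch(units, 60.0)
print(plan)
```

The greedy fill is optimal here only because the segment costs within each unit are nondecreasing, which is exactly what the convexity of the fuel cost guarantees.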
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)]
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter, ν, goes to 1/2. Computational experiments are included.
Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M
2014-11-01
We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.
Camporesi, Roberto
2016-01-01
This book presents a method for solving linear ordinary differential equations based on the factorization of the differential operator. The approach for the case of constant coefficients is elementary, and only requires a basic knowledge of calculus and linear algebra. In particular, the book avoids the use of distribution theory, as well as the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The case of variable coefficients is addressed using Mammana’s result for the factorization of a real linear ordinary differential operator into a product of first-order (complex) factors, as well as a recent generalization of this result to the case of complex-valued coefficients.
An explicit method in non-linear soil-structure interaction
Kunar, R.R.
1981-01-01
The explicit method of analysis in the time domain is ideally suited for the solution of transient dynamic non-linear problems. Though the method is not new, its application to seismic soil-structure interaction is relatively new and deserving of public discussion. This paper describes the principles of the explicit approach in soil-structure interaction and it presents a simple algorithm that can be used in the development of explicit computer codes. The paper also discusses some of the practical considerations like non-reflecting boundaries and time steps. The practicality of the method is demonstrated using a computer code, PRESS, which is used to compare the treatment of strain-dependent properties using average strain levels over the whole time history (the equivalent linear method) and using the actual strain levels at every time step to modify the soil properties (non-linear method). (orig.)
Uniform irradiation using rotational-linear scanning method for narrow synchrotron radiation beam
Nariyama, N.; Ohnishi, S.; Odano, N.
2004-01-01
At SPring-8, photon intensity monitors for synchrotron radiation have been developed. Using these monitors, the responses of radiation detectors and dosimeters to monoenergetic photons can be measured. In most cases, uniform irradiation to the sample is necessary. Here, two scanning methods are proposed. One is an XZ-linear scanning method, which moves the sample simultaneously in both the X and Z direction, that is, in zigzag fashion. The other is a rotational-linear scanning method, which rotates the sample moving in the X direction. To investigate the validity of the two methods, thermoluminescent dosimeters were irradiated with a broad synchrotron-radiation beam, and the readings from the two methods were compared with that of the dosimeters fixed in the beam. The results for both scanning methods virtually agreed with that of the fixed method. The advantages of the rotational-linear scanning method are that low- and medium-dose irradiation is possible, uniformity is excellent and the load to the scanning equipment is light: hence, this method is superior to the XZ-linear scanning method for most applications. (author)
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)
2015-01-21
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Numerical Methods for Solution of the Extended Linear Quadratic Control Problem
Jørgensen, John Bagterp; Frison, Gianluca; Gade-Nielsen, Nicolai Fog
2012-01-01
In this paper we present the extended linear quadratic control problem, its efficient solution, and a discussion of how it arises in the numerical solution of nonlinear model predictive control problems. The extended linear quadratic control problem is the optimal control problem corresponding to the Karush-Kuhn-Tucker system that constitutes the majority of computational work in constrained nonlinear and linear model predictive control problems solved by efficient MPC-tailored interior-point and active-set algorithms. We state various methods of solving the extended linear quadratic control problem and discuss instances in which it arises. The methods discussed in the paper have been implemented in efficient C code for both CPUs and GPUs for a number of test examples.
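The unconstrained core of such problems can be sketched by a backward Riccati recursion followed by a forward simulation (a scalar toy example with made-up dynamics and weights, not the paper's MPC-tailored algorithms):

```python
def lqr_riccati(a, b, q, r, qN, N):
    # backward pass: cost-to-go P_k and feedback gains K_k
    P = [0.0] * (N + 1)
    K = [0.0] * N
    P[N] = qN
    for k in range(N - 1, -1, -1):
        K[k] = (b * P[k + 1] * a) / (r + b * P[k + 1] * b)
        P[k] = q + a * P[k + 1] * (a - b * K[k])
    return P, K

a, b, q, r, qN, N = 1.2, 1.0, 1.0, 0.5, 1.0, 50   # made-up problem data
P, K = lqr_riccati(a, b, q, r, qN, N)

# forward pass: the feedback u_k = -K_k x_k stabilizes the unstable plant
x = 1.0
for k in range(N):
    x = a * x + b * (-K[k] * x)
print(x)
```

This single backward/forward sweep is the structure that the KKT system of the extended problem generalizes to the constrained, matrix-valued case.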
Lu Li; Yang Yiren
2009-01-01
The responses and limit cycle flutter of a plate-type structure with cubic stiffness in viscous flow were studied. The continuous system was discretized using the Galerkin method. The equivalent linearization concept was used to predict the ranges of limit cycle flutter velocities. The coupled map of flutter amplitude-equivalent linear stiffness-critical velocity was used to analyze the stability of limit cycle flutter. The theoretical results agree well with the results of numerical integration, which indicates that the equivalent linearization concept is applicable to the analysis of limit cycle flutter of plate-type structures. (authors)
F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger
2012-01-01
Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...
General methods for determining the linear stability of coronal magnetic fields
Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.
1988-01-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
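The power method mentioned above, stripped of the cyclic-shift acceleration, is just repeated multiplication with normalization; a toy sketch (symmetric 2×2 matrix with made-up entries, not a coronal field operator) recovering the dominant eigenvalue:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def power_method(A, v, iters=200):
    # repeated application and normalization converge to the dominant mode
    for _ in range(iters):
        w = matvec(A, v)
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    w = matvec(A, v)
    # Rayleigh quotient gives the growth rate (dominant eigenvalue)
    return sum(vi * wi for vi, wi in zip(v, w)) / sum(vi * vi for vi in v)

A = [[2.0, 1.0], [1.0, 2.0]]      # eigenvalues 3 and 1
growth = power_method(A, [1.0, 0.3])
print(growth)
```

Convergence is geometric in the ratio of the two largest eigenvalues, which is why shift sequences are used in practice to speed it up.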
Non-linear shape functions over time in the space-time finite element method
Kacprzyk Zbigniew
2017-01-01
This work presents a generalisation of the space-time finite element method proposed by Kączkowski in his seminal works of the 1970s and early 1980s. Kączkowski used linear shape functions in time. The recurrence formula obtained by Kączkowski was conditionally stable. In this paper, non-linear shape functions in time are proposed.
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
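The underlying consensus dynamics, without the intermittency and time-scale machinery of the paper, can be sketched as follows (made-up topology and gain): each agent repeatedly moves toward its neighbors, and all states converge to the average of the initial conditions.

```python
def consensus_step(x, edges, eps):
    # one discrete-time step of x <- (I - eps*L) x for an undirected graph
    dx = [0.0] * len(x)
    for i, j in edges:
        dx[i] += eps * (x[j] - x[i])
        dx[j] += eps * (x[i] - x[j])
    return [xi + di for xi, di in zip(x, dx)]

x = [4.0, 0.0, 2.0, -2.0]          # initial agent states (average is 1.0)
edges = [(0, 1), (1, 2), (2, 3)]   # a fixed path topology
for _ in range(400):
    x = consensus_step(x, edges, 0.2)
print(x)
```

The step size must satisfy eps < 2/λmax(L) for stability; the paper's contribution is analyzing what happens when such updates are only available on a disjoint set of time intervals.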
Treating experimental data of inverse kinetic method by unitary linear regression analysis
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data of the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. A computer code was written based on the inverse kinetic method and unitary linear regression analysis. The data of the zero power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method using unitary linear regression analysis, and the precision of the reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Gyungho Khim
2015-01-01
We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.
On a new iterative method for solving linear systems and comparison results
Jing, Yan-Fei; Huang, Ting-Zhu
2008-10-01
In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is established, which is proven, both theoretically and numerically, to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
Şuayip Yüzbaşı
2017-03-01
In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.
Methods of measurement of integral and differential linearity distortions of spectrometry sets
Fuan, Jacques; Grimont, Bernard; Marin, Roland; Richard, Jean-Pierre
1969-05-01
The objective of this document is to describe different measurement methods, and more particularly to present software for the processing of obtained results in order to avoid interpretation by the investigator. In the first part, the authors define the parameters of integral and differential linearity, outline their importance in measurements performed by spectrometry, and describe the use of these parameters. In the second part, they propose various methods of measurement of these linearity parameters, report experimental applications of these methods and compare the obtained results
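A common way to quantify these parameters (an illustrative definition; the report's exact estimators may differ, and the channel edges below are made up) is to compare each channel width to the mean width (differential linearity) and each channel position to an ideal uniform ramp (integral linearity):

```python
def linearity(edges):
    # differential linearity: relative deviation of each channel width
    widths = [b - a for a, b in zip(edges, edges[1:])]
    mean_w = sum(widths) / len(widths)
    dnl = [w / mean_w - 1.0 for w in widths]
    # integral linearity: deviation of edges from the ideal uniform ramp
    span = edges[-1] - edges[0]
    ideal = [edges[0] + span * i / (len(edges) - 1) for i in range(len(edges))]
    inl = [e - t for e, t in zip(edges, ideal)]
    return dnl, inl

edges = [0.0, 1.0, 2.1, 2.9, 4.0]   # slightly non-uniform channel edges
dnl, inl = linearity(edges)
print(dnl, inl)
```

Automating exactly this kind of reduction is what removes the investigator's interpretation from the measurement.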
Novel method of interpolation and extrapolation of functions by a linear initial value problem
Shatalov, M
2008-09-01
A novel method of function approximation using an initial value, linear, ordinary differential equation (ODE) is presented. The main advantage of this method is to obtain the approximation expressions in a closed form. This technique can be taught...
Jen-Yuan Chen
2014-01-01
Continuing from the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present more generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.
Thompson, Russel L.
Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources of homoscedasticity and types of homoscedasticity are discussed, and methods for correction are…
Experimental validation of calculation methods for structures having shock non-linearity
Brochard, D.; Buland, P.
1987-01-01
For the seismic analysis of non-linear structures, numerical methods have been developed which need to be validated on experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components will illustrate this presentation [fr]
Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method
Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui
1991-01-01
A multiple linear regression method was used to analyze γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients.
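The regression step can be sketched generically (synthetic response matrix and contents, not the paper's calibration data): window count rates are modeled as a linear mix of per-nuclide responses, and the contents are recovered by least squares on the normal equations.

```python
def lstsq(R, y):
    # solve (R^T R) c = R^T y by Gaussian elimination (tiny, dense case)
    m = len(R[0])
    G = [[sum(R[k][i] * R[k][j] for k in range(len(R))) for j in range(m)]
         for i in range(m)]
    rhs = [sum(R[k][i] * y[k] for k in range(len(R))) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            f = G[j][i] / G[i][i]
            for k in range(m):
                G[j][k] -= f * G[i][k]
            rhs[j] -= f * rhs[i]
    c = [0.0] * m
    for i in range(m - 1, -1, -1):
        c[i] = (rhs[i] - sum(G[i][j] * c[j] for j in range(i + 1, m))) / G[i][i]
    return c

# rows: spectral windows; columns: response to U, Th, K (made-up values)
R = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.1, 0.2, 0.7],
     [0.3, 0.3, 0.2]]
true_c = [2.0, 1.0, 3.0]
y = [sum(R[i][j] * true_c[j] for j in range(3)) for i in range(4)]
c = lstsq(R, y)
print(c)
```

With more windows than nuclides the system is overdetermined, which is what makes pure-nuclide standards unnecessary once the response matrix is known.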
Effective linear two-body method for many-body problems in atomic and nuclear physics
Kim, Y.E.; Zubarev, A.L.
2000-01-01
We present an equivalent linear two-body method for the many-body problem, which is based on an approximate reduction of the many-body Schroedinger equation by the use of a variational principle. The method is applied to several problems in atomic and nuclear physics. (author)
Engineered high expansion glass-ceramics having near linear thermal strain and methods thereof
Dai, Steve Xunhu; Rodriguez, Mark A.; Lyon, Nathanael L.
2018-01-30
The present invention relates to glass-ceramic compositions, as well as methods for forming such composition. In particular, the compositions include various polymorphs of silica that provide beneficial thermal expansion characteristics (e.g., a near linear thermal strain). Also described are methods of forming such compositions, as well as connectors including hermetic seals containing such compositions.
A block Krylov subspace time-exact solution method for linear ordinary differential equation systems
Bochev, Mikhail A.
2013-01-01
We propose a time-exact Krylov-subspace-based method for solving linear ordinary differential equation systems of the form $y'=-Ay+g(t)$ and $y''=-Ay+g(t)$, where $y(t)$ is the unknown function. The method consists of two stages. The first stage is an accurate piecewise polynomial approximation of
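For the scalar analogue of $y'=-Ay+g$ with constant $g$, the "time-exact" target such a method aims to reproduce has a closed form; a sketch (not the Krylov algorithm itself; coefficients are made up) checking the formula against a fine-step Euler integration:

```python
import math

def exact(a, g, y0, t):
    # closed-form solution of y' = -a*y + g with constant g
    return math.exp(-a * t) * y0 + (g / a) * (1.0 - math.exp(-a * t))

def euler(a, g, y0, t, n):
    # fine-step forward Euler as an independent check
    h, y = t / n, y0
    for _ in range(n):
        y += h * (-a * y + g)
    return y

a, g, y0, t = 2.0, 1.0, 3.0, 1.5   # made-up coefficients
e = exact(a, g, y0, t)
u = euler(a, g, y0, t, 200000)
print(e, u)
```

In the matrix case the exponential is applied via a Krylov subspace rather than formed explicitly, which is where the two-stage structure of the method comes in.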
A study on linear and nonlinear Schrodinger equations by the variational iteration method
Wazwaz, Abdul-Majid
2008-01-01
In this work, we introduce a framework to obtain exact solutions to linear and nonlinear Schrodinger equations. The He's variational iteration method (VIM) is used for analytic treatment of these equations. Numerical examples are tested to show the pertinent features of this method
Hong, Ser Gi; Kim, Jong Woon; Lee, Young Ouk; Kim, Kyo Youn
2010-01-01
Subcell balance methods have been developed for one- and two-dimensional SN transport calculations. In this paper, a linear discontinuous expansion method using sub-cell balances (LDEM-SCB) is developed for neutral particle SN transport calculations in 3D unstructured geometrical problems. At present, this method is applied to tetrahedral meshes. As the name implies, this method assumes a linear distribution of the particle flux in each tetrahedral mesh and uses the balance equations for the four sub-cells of each tetrahedral mesh to obtain the equations for the four unknown sub-cell average fluxes. This method was implemented in the computer code MUST (Multi-group Unstructured geometry SN Transport). The numerical tests show that this method gives more robust solutions than DFEM (Discontinuous Finite Element Method)
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis
2016-09-08
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
The Tunneling Method for Global Optimization in Multidimensional Scaling.
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration with the same STRESS value is sought.
On Feature Extraction from Large Scale Linear LiDAR Data
Acharjee, Partha Pratim
Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and the usage of these features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, distinguishing characteristics of laser returns from water surfaces and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation and updating, and hydro-flattening of LiDAR data for many other applications, are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human intervention. This reported work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the large elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues and by integrating all results into a single output. The whole algorithm is implemented as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox and results are
A discrete homotopy perturbation method for non-linear Schrodinger equation
H. A. Wahab
2015-12-01
A general analysis is made by the homotopy perturbation method, taking advantage of the initial guess, the appearance of the embedding parameter, and different choices of the linear operator for the approximate solution of the non-linear Schrodinger equation. We do not depend upon the Adomian polynomials and find the linear forms of the components without these calculations. The discretised forms of the nonlinear Schrodinger equation allow us either to apply a numerical technique to the discretised forms or to proceed with a perturbation solution of the problem. The discretised forms obtained by the constructed homotopy provide the linear parts of the components of the solution series, and hence a new discretised form is obtained. The general discretised form of the NLSE allows us to choose any initial guess and obtain the solution in closed form.
Analytical study of dynamic aperture for storage ring by using successive linearization method
Yang Jiancheng; Xia Jiawen; Wu Junxia; Xia Guoxing; Liu Wei; Yin Xuejun
2004-01-01
The determination of the dynamic aperture is a critical issue in circular accelerators. In this paper, the authors solved the equation of motion including non-linear forces by using the successive linearization method and obtained a criterion for determining the dynamic aperture of the machine. Applying this criterion, a storage ring with a FODO lattice has been studied. The results agree well with tracking results over a large range of linear tune (Q). The purpose is to improve our understanding of the mechanisms driving particle motion in the presence of non-linear forces; the study also identified another mechanism driving particle instability in a storage ring: parametric resonance caused by 'fluctuating transfer matrices' at small amplitudes
2007-01-01
Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
A New Spectral Local Linearization Method for Nonlinear Boundary Layer Flow Problems
S. S. Motsa
2013-01-01
We propose a simple and efficient method for solving highly nonlinear systems of boundary layer flow problems with exponentially decaying profiles. The algorithm of the proposed method is based on an innovative idea of linearizing and decoupling the governing systems of equations and reducing them into a sequence of subsystems of differential equations which are solved using spectral collocation methods. The applicability of the proposed method, hereinafter referred to as the spectral local linearization method (SLLM), is tested on some well-known boundary layer flow equations. The numerical results presented in this investigation indicate that the proposed method, despite being easy to develop and numerically implement, is very robust in that it converges rapidly to yield accurate results and is more efficient in solving very large systems of nonlinear boundary value problems of the similarity variable boundary layer type. The accuracy and numerical stability of the SLLM can further be improved by using successive overrelaxation techniques.
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is verified to be efficient for nonlinear optimization problems with large-dimension data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of the two kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to gain a nonnegative constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than the conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
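For reference, the linear conjugate gradient building block that such combined schemes restart around can be sketched as follows (a generic textbook CG for symmetric positive definite systems; the paper's penalized linear/nonlinear combination and FMT specifics are not reproduced):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear CG for a symmetric positive definite system A x = b."""
    x = x0.copy()
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations for an n-dimensional system, which is what makes it attractive for the large linear subproblems mentioned in the abstract.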
Hussein Abdel-jaber
2015-10-01
Congestion control is one of the hot research topics that helps maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and the GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers a more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results in regard to the probabilities of both packet overflowing and packet dropping.
Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.
1995-01-01
The method for linearizing the potentiometric curves of precipitation titration is studied for its application in the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the linearization method permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs
Heinz Toparkus
2014-04-01
In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in itself and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate basic domains, and solutions of these problems can be attempted by means of transform methods.
Ikuno, Soichiro; Chen, Gong; Yamamoto, Susumu; Itoh, Taku; Abe, Kuniyoshi; Nakamura, Hiroaki
2016-01-01
The Krylov subspace method and the variable preconditioned Krylov subspace method with a communication-avoiding technique are numerically investigated for a linear system obtained from electromagnetic analysis. In the k-skip Krylov method, the inner product calculations are expanded in the Krylov basis, and the inner product calculations are transformed into scalar operations. The k-skip CG method is applied as the inner-loop solver of the variable preconditioned Krylov subspace methods, and the converged solution of the electromagnetic problem is obtained using this method.
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless used on such data, it will produce a model that cannot represent most of the data. Therefore, a regression method robust against outliers is needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contains outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
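The effect that motivates robust regression can be demonstrated with a small sketch (our own illustration: Theil-Sen is used here as the robust estimator because it is easy to implement in a few lines; MCD and TELBS, the methods actually compared in the paper, are different and more sophisticated):

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen line fit: median of all pairwise slopes, then median intercept.
    (Illustrative robust estimator only; not the paper's MCD or TELBS.)"""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)]
    m = np.median(slopes)
    return m, np.median(y - m * x)

x = np.arange(10.0)
y = 2 * x + 1
y[-1] = 100.0                      # one gross outlier
m_ls, b_ls = np.polyfit(x, y, 1)   # ordinary least squares: badly skewed
m_ts, b_ts = theil_sen(x, y)       # robust fit: recovers slope 2, intercept 1
```

A single outlier is enough to pull the least-squares slope far from the true value of 2, while the median-based fit is unaffected, which is the phenomenon the MCD/TELBS comparison quantifies.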
Salih Yalcinbas
2016-01-01
In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
Measurements of linear attenuation coefficients of irregular shaped samples by two media method
Singh, Sukhpal; Kumar, Ashok; Thind, Kulwant Singh; Mudahar, Gurmel S.
2008-01-01
The linear attenuation coefficient values of regular and irregular shaped flyash materials have been measured without knowing the thickness of the sample, using a new technique, namely the 'two media method'. These values have also been measured with a standard gamma-ray transmission method and obtained theoretically with the WinXCom computer code. From the comparison it is found that the two media method gives accurate results for the attenuation coefficients of flyash materials
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and achieving unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
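The core linear-algebra step, detecting linear dependencies among columns of the output sensitivity matrix, can be sketched with a toy matrix (the matrix S and parameters p1..p3 are our own hypothetical illustration, not from the paper):

```python
import numpy as np

# Hypothetical output sensitivity matrix S (rows: sampled outputs,
# columns: parameters p1, p2, p3), built so that col3 = 2*col1 + col2,
# i.e. the three parameters are not jointly identifiable.
c1 = np.array([1.0, 0.0, 2.0, 1.0])
c2 = np.array([0.0, 1.0, 3.0, 1.0])
S = np.column_stack([c1, c2, 2 * c1 + c2])

rank = np.linalg.matrix_rank(S)   # 2 < 3 parameters: rank deficiency detected
_, sv, Vt = np.linalg.svd(S)
null_vec = Vt[-1]                 # direction along which the parameters trade off
```

The null-space vector (here proportional to (2, 1, -1)) gives the parameter combination that leaves the output unchanged, which is exactly the identifiable-combination information the method extracts.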
Modelling across bioreactor scales: methods, challenges and limitations
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...... that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what......
Hine, Nicholas D M; Dziedzic, Jacek; Haynes, Peter D; Skylaris, Chris-Kriton
2011-11-28
We present a comparison of methods for treating the electrostatic interactions of finite, isolated systems within periodic boundary conditions (PBCs), within density functional theory (DFT), with particular emphasis on linear-scaling (LS) DFT. Often, PBCs are not physically realistic but are an unavoidable consequence of the choice of basis set and the efficacy of using Fourier transforms to compute the Hartree potential. In such cases the effects of PBCs on the calculations need to be avoided, so that the results obtained represent the open rather than the periodic boundary. The very large systems encountered in LS-DFT make the demands of the supercell approximation for isolated systems more difficult to manage, and we show cases where the open boundary (infinite cell) result cannot be obtained from extrapolation of calculations from periodic cells of increasing size. We discuss, implement, and test three very different approaches for overcoming or circumventing the effects of PBCs: truncation of the Coulomb interaction combined with padding of the simulation cell, approaches based on the minimum image convention, and the explicit use of open boundary conditions (OBCs). We have implemented these approaches in the ONETEP LS-DFT program and applied them to a range of systems, including a polar nanorod and a protein. We compare their accuracy, complexity, and rate of convergence with simulation cell size. We demonstrate that corrective approaches within PBCs can achieve the OBC result more efficiently and accurately than pure OBC approaches.
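Of the three approaches, the minimum image convention is the easiest to illustrate in isolation (a sketch of the MIC wrapping rule for a cubic cell only; ONETEP's actual MIC-based electrostatics is far more involved):

```python
import numpy as np

def minimum_image(d, L):
    """Wrap a displacement vector d into the minimum-image convention
    for a cubic simulation cell of side L: each component is mapped
    into [-L/2, L/2] by subtracting the nearest integer multiple of L."""
    return d - L * np.round(d / L)
```

Under PBCs an interatomic displacement of 0.9L is physically a displacement of -0.1L to the nearest periodic image, which is what this mapping produces.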
Two new modified Gauss-Seidel methods for linear system with M-matrices
Zheng, Bing; Miao, Shu-Xin
2009-12-01
In 2002, H. Kotakemori et al. proposed the modified Gauss-Seidel (MGS) method for solving the linear system with the preconditioner [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner () J. Comput. Appl. Math. 145 (2002) 373-378]. Since this preconditioner is constructed by only the largest element on each row of the upper triangular part of the coefficient matrix, the preconditioning effect is not observed on the nth row. In the present paper, to deal with this drawback, we propose two new preconditioners. The convergence and comparison theorems of the modified Gauss-Seidel methods with these two preconditioners for solving the linear system are established. The convergence rates of the new proposed preconditioned methods are compared. In addition, numerical experiments are used to show the effectiveness of the new MGS methods.
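For context, the unpreconditioned Gauss-Seidel sweep that these preconditioners accelerate looks like this (a plain textbook sketch for an M-matrix system; the proposed preconditioners themselves are not reproduced here):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Plain Gauss-Seidel sweeps for A x = b (no preconditioning):
    each unknown is updated in turn using the latest values."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```

The preconditioned variants in the paper first transform A (e.g. eliminating selected upper-triangular entries) so that these same sweeps converge in fewer iterations.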
Jimenez, J.C.
2009-06-01
Local Linearization (LL) methods constitute a class of one-step explicit integrators for ODEs derived from the following primary and common strategy: the vector field of the differential equation is locally (piecewise) approximated through a first-order Taylor expansion at each time step, thus obtaining successive linear equations that are explicitly integrated. The LL approach may then include additional strategies to improve this basic affine approximation. Theoretical and practical results have shown that LL integrators have a number of convenient properties. These include arbitrary order of convergence, A-stability, linearization preserving, regularity under quite general conditions, preservation of the dynamics of the exact solution around hyperbolic equilibrium points and periodic orbits, integration of stiff and high-dimensional equations, low computational cost, and others. In this paper, a review of the LL methods and their properties is presented.
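The basic LL step for an autonomous system y' = f(y) integrates the affine approximation exactly: y_{k+1} = y_k + h φ1(h J_k) f(y_k), where J_k is the Jacobian at y_k and φ1(z) = (e^z - 1)/z. A minimal sketch (our own illustration for small dense systems; a truncated Taylor series stands in for a robust φ1 evaluation):

```python
import numpy as np

def phi1(Z, terms=25):
    """phi1(Z) = Z^{-1}(e^Z - I) = sum_{k>=0} Z^k/(k+1)!, via truncated Taylor series."""
    P, T = np.eye(len(Z)), np.eye(len(Z))
    for k in range(1, terms):
        T = T @ Z / (k + 1)
        P = P + T
    return P

def ll_step(f, jac, y, h):
    """One Local Linearization step: exact integration of the affine
    approximation f(x) ~ f(y) + J (x - y) over a step of size h."""
    J = jac(y)
    return y + h * (phi1(h * J) @ f(y))
```

Because the step integrates the affine model exactly, it reproduces e^{-h} exactly on the linear test problem y' = -y, which is the "linearization preserving" property mentioned in the abstract.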
Comparison of different methods for the solution of sets of linear equations
Bilfinger, T.; Schmidt, F.
1978-06-01
The application of conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices led to this paper, in which these methods are compared with the conventional, differently accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was also included in the comparison. The studies referred mainly to memory requirements, computing time, speed of convergence, and accuracy under different conditions of the system matrices, from which the sensitivity of the methods to the influence of truncation errors may also be recognized.
Robust fault detection of linear systems using a computationally efficient set-membership method
Tabatabaeipour, Mojtaba; Bak, Thomas
2014-01-01
In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
An overview of solution methods for multi-objective mixed integer linear programming programs
Andersen, Kim Allan; Stidsen, Thomas Riis
Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. finding the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
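The same least-squares fit is easy to reproduce outside Excel, e.g. fitting vibrational term values G(v) = ωe(v+1/2) - ωexe(v+1/2)² by multiple linear regression (synthetic data with made-up constants, purely illustrative of the regression step, not iodine's real molecular constants):

```python
import numpy as np

# Synthetic term values G(v) = we*(v+1/2) - wexe*(v+1/2)^2
# (we_true, wexe_true are invented illustrative constants)
v = np.arange(10)
we_true, wexe_true = 125.0, 0.7
G = we_true * (v + 0.5) - wexe_true * (v + 0.5) ** 2

# Design matrix for the two-term linear model (the analogue of Excel's LINEST)
X = np.column_stack([v + 0.5, -(v + 0.5) ** 2])
(we, wexe), *_ = np.linalg.lstsq(X, G, rcond=None)
```

Because the model is linear in ωe and ωexe, an ordinary least-squares solve recovers both constants simultaneously from the spectrum, which is the pedagogical point of the paper's Excel procedure.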
Linear shrinkage test: justification for its reintroduction as a standard South African test method
Sampson, LR
2009-06-04
Several problems with the linear shrinkage test specified in Method A4 of the THM 1 1979 were addressed as part of this investigation in an effort to improve the alleged poor reproducibility of the test and justify its reintroduction into THM 1. A...
Fusco, D. [Messina Univ. (Italy). Instituto de Matematica]
1979-01-01
The paper is concerned with a three-dimensional theory of non-linear magnetosonic waves in a turbulent plasma. A perturbation method is used that allows a transport equation, similar to Burgers' equation but with a variable coefficient, to be obtained.
В.Т. Чемерис
2006-04-01
This article elaborates a method of simplified calculation and design-parameter choice, with the corresponding justification, for the induction system of an electron-beam sterilizer based on a linear induction accelerator, taking into account the parameters of the magnetic material used to produce the cores and the parameters of the pulsed voltage.
The H-N method for solving linear transport equation: theory and application
Kaskas, A.; Gulecyuz, M.C.; Tezcan, C.
2002-01-01
The system of singular integral equations obtained from the integro-differential form of the linear transport equation as a result of Placzek's lemma is solved. Applications are given using the exit distributions and the infinite-medium Green's function. The same theoretical results are also obtained with the use of the singular eigenfunctions of the method of elementary solutions.
Tuereci, R. Goekhan [Kirikkale Univ. (Turkey). Kirikkale Vocational School]; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School]
2017-11-15
The one-speed, time-independent neutron transport equation in a homogeneous medium is solved with anisotropic scattering which includes both the linearly and the quadratically anisotropic scattering kernel. Having written Case's eigenfunctions and the orthogonality relations among these eigenfunctions, the slab albedo problem is investigated numerically by using the modified F_N method. Selected numerical results are presented in tables.
An Empirical Comparison of Five Linear Equating Methods for the NEAT Design
Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.
2009-01-01
In this study, a data base containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
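As a baseline for what such methods compute, the simplest (single-group) linear equating function matches the means and standard deviations of the two forms; NEAT-design methods such as Tucker or Levine instead adjust these moments using the anchor test. A minimal sketch with our own toy score data:

```python
import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Map score x from form X onto the form-Y scale by matching mean
    and standard deviation (single-group linear equating; NEAT methods
    replace these moments with anchor-adjusted estimates)."""
    mx, sx = np.mean(scores_x), np.std(scores_x)
    my, sy = np.mean(scores_y), np.std(scores_y)
    return my + (sy / sx) * (x - mx)
```

For two forms with equal spread but a 5-point mean difference, a form-X score of 50 maps to 55 on the form-Y scale.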
A Revised Piecewise Linear Recursive Convolution FDTD Method for Magnetized Plasmas
Liu Song; Zhong Shuangying; Liu Shaobin
2005-01-01
The piecewise linear recursive convolution (PLRC) finite-difference time-domain (FDTD) method improves accuracy over the original recursive convolution (RC) FDTD approach and the current density convolution (JEC) method but retains their advantages in speed and efficiency. This paper describes a revised piecewise linear recursive convolution PLRC-FDTD formulation for magnetized plasma which incorporates both anisotropy and frequency dispersion at the same time, enabling the transient analysis of magnetized plasma media. The technique is illustrated by numerical simulations of the reflection and transmission coefficients through a magnetized plasma layer. The results show that the revised PLRC-FDTD method has improved accuracy over the original RC FDTD method and the JEC FDTD method
Testing linear growth rate formulas of non-scale endogenous growth models
Ziesemer, Thomas
2017-01-01
Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of
High-performance small-scale solvers for linear Model Predictive Control
Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd
2014-01-01
, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...
Towards TeV-scale electron-positron collisions: the Compact Linear Collider (CLIC)
Doebert, Steffen; Sicking, Eva
2018-02-01
The Compact Linear Collider (CLIC), a future electron-positron collider at the energy frontier, has the potential to change our understanding of the universe. Proposed to follow the Large Hadron Collider (LHC) programme at CERN, it is conceived for precision measurements as well as for searches for new phenomena.
Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.
2006-01-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...
Interior-Point Method for Non-Linear Non-Convex Optimization
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2004-01-01
Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004
Method for solving fully fuzzy linear programming problems using deviation degree measure
Haifang Cheng; Weilai Huang; Jianhu Cai
2013-01-01
A new fully fuzzy linear programming (FFLP) problem with fuzzy equality constraints is discussed. Using deviation degree measures, the FFLP problem is transformed into a crisp δ-parametric linear programming (LP) problem. Given the value of the deviation degree in each constraint, the δ-fuzzy optimal solution of the FFLP problem can be obtained by solving this LP problem. An algorithm is also proposed to find a balanced fuzzy optimal solution between two goals in conflict: improving the values of the objective function and decreasing the values of the deviation degrees. A numerical example is solved to illustrate the proposed method.
Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.
Ying, Xiaoguo; Lin, Han; Hui, Guohua
2015-01-01
An electroencephalogram (EEG) is the recording of electrical activity along the scalp. EEG measures voltage fluctuations generated by ionic current flows within the neurons of the brain. The EEG signal is regarded as one of the most important subjects to be focused on in the next 20 years. In this paper, EEG signal discrimination based on a non-linear bistable dynamical model was proposed. EEG signals were processed by the non-linear bistable dynamical model, and features of the EEG signals were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.
Large-scale synthesis of YSZ nanopowder by Pechini method
Administrator
structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.
Restoring the missing features of the corrupted speech using linear interpolation methods
Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.
2017-10-01
One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system reduces significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, after deleting low Signal-to-Noise Ratio (SNR) elements, an incomplete spectrogram is obtained. In this case, one direction is for the speech recognizer to make modifications to the spectrogram in order to restore the missing elements. In another direction, the speech recognizer should be able to restore the missing elements due to deleting low-SNR elements before performing the recognition. This can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by some researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction methods. The experiments are done under different conditions, such as different lengths of the window and different lengths of utterances. A speech corpus consisting of 20 males and 20 females, each with two different utterances, is used in the experiments. As a result, 80% recognition accuracy is achieved with a 25% SNR ratio.
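The row-wise linear interpolation along the time axis can be sketched as follows (our own minimal illustration; the function name and array layout are ours, and the toolbox's actual geometrical methods are richer):

```python
import numpy as np

def restore_along_time(spec, mask):
    """Fill spectrogram elements where mask is False by linear interpolation
    along the time axis (axis 1), one frequency row at a time."""
    out = spec.astype(float).copy()
    t = np.arange(spec.shape[1])
    for f in range(spec.shape[0]):
        known = mask[f]
        if known.any() and not known.all():
            out[f, ~known] = np.interp(t[~known], t[known], spec[f, known])
    return out
```

A deleted low-SNR element is thus replaced by the straight-line estimate between its nearest observed neighbours in time; interpolation along frequency works the same way on the transposed array.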
Sanchez, Richard.
1975-11-01
The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. Here the method is extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions are also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial spherical-harmonics method is generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it is demonstrated that the anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries and with the N+1 isotropic matrix for cylindrical geometry. A method for solving linearly anisotropic one-dimensional transport problems in homogeneous media is defined by applying the observations of Mika and Stankiewicz: the isotropic matrix elements are computed by Hembd series, and the anisotropic matrix elements are then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometry. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks. [fr
Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method
Seçil YALAZ
2016-10-01
Our work on regression and classification provides a new contribution to the analysis of time series, which are used in many areas. When convergence cannot be obtained with the usual autocorrelation-correction methods in time series regression, the analyst either fails to obtain a satisfactory model or is forced to change the model's degree, which may not be desirable in every situation. For these situations, in our study the time series data were fuzzified using the simple membership function and fuzzy rule generation technique (SMRGT), and a forecasting equation was created by applying the fuzzy least squares regression (FLSR) method, a simple linear regression method, to these data. Although SMRGT has been successful in determining flow discharge in open channels, and can be used confidently for flow discharge modeling in open canals as well as in pipe flow with some modifications, there has been no evidence that the technique is successful in fuzzy linear regression modeling. Therefore, to address the lack of such a model, a new hybrid model is described in this study. Finally, to demonstrate the method's efficiency, classical linear regression for crisp time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performances of the two approaches were compared using different measures.
A linear multiple balance method for discrete ordinates neutron transport equations
Park, Chang Je; Cho, Nam Zin
2000-01-01
A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux to the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme, which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results obtained so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.
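The Simpson's-rule relation used in the abstract above to tie the cell-average flux to the edge and center values can be illustrated in one line; the relation is exact for the quadratic flux profile the method assumes within a cell. The profile below is a made-up example.

```python
def simpson_cell_average(psi_left, psi_center, psi_right):
    # Simpson's rule over one mesh cell: the cell-average flux expressed
    # through the two edge values and the center value (exact for
    # quadratic profiles, which is the LMB subcell approximation).
    return (psi_left + 4.0 * psi_center + psi_right) / 6.0

# Quadratic profile psi(x) = 1 + 2x + 3x^2 on the unit cell [0, 1]:
# the exact average is 1 + 1 + 1 = 3, and Simpson's rule reproduces it.
psi = lambda x: 1.0 + 2.0 * x + 3.0 * x ** 2
avg = simpson_cell_average(psi(0.0), psi(0.5), psi(1.0))
```

This is why the scheme's truncation error drops to fourth order: the quadratic subcell representation is integrated exactly.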
Solutions of First-Order Volterra Type Linear Integrodifferential Equations by Collocation Method
Olumuyiwa A. Agbolade
2017-01-01
Full Text Available The numerical solutions of linear integrodifferential equations of Volterra type have been considered. Power series is used as the basis polynomial to approximate the solution of the problem. Furthermore, standard and Chebyshev-Gauss-Lobatto collocation points were, respectively, chosen to collocate the approximate solution. Numerical experiments are performed on some sample problems already solved by homotopy analysis method and finite difference methods. Comparison of the absolute error is obtained from the present method and those from aforementioned methods. It is also observed that the absolute errors obtained are very low establishing convergence and computational efficiency.
Exact solution to the Coulomb wave using the linearized phase-amplitude method
Shuji Kiyokawa
2015-08-01
Full Text Available The author shows that the amplitude equation from the phase-amplitude method of calculating continuum wave functions can be linearized into a 3rd-order differential equation. Using this linearized equation, in the case of the Coulomb potential, the author also shows that the amplitude function has an analytically exact solution represented by means of an irregular confluent hypergeometric function. Furthermore, it is shown that the exact solution for the Coulomb potential reproduces the wave function for free space expressed by the spherical Bessel function. The amplitude equation for the large component of the Dirac spinor is also shown to be the linearized 3rd-order differential equation.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, S. R.; Wilson, P. P. H. [Engineering Physics Department, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Evans, T. M. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830 (United States)
2013-07-01
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
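The Neumann-Ulam idea analyzed above can be sketched compactly: a component of x = (I - H)⁻¹b is estimated by random walks whose collision estimator sums importance-weighted source terms along each path. This is a minimal forward-walk sketch with uniform transition probabilities and a fixed kill probability, not the adjoint, domain-decomposed solver of the paper; the matrix and parameters are invented for the example.

```python
import numpy as np

def neumann_ulam_component(H, b, i, n_walks=20000, p_kill=0.3, rng=None):
    """Estimate component x_i of x = (I - H)^{-1} b by Neumann-Ulam random
    walks (forward collision estimator). Each step survives with
    probability 1 - p_kill, jumps uniformly, and corrects the weight by
    H[k, j] divided by the sampling probability, so the estimator is
    unbiased for the Neumann series sum_m (H^m b)_i."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        k, w, est = i, 1.0, b[i]
        while rng.random() > p_kill:             # survive this step
            j = rng.integers(n)                  # uniform next state
            w *= H[k, j] / ((1.0 - p_kill) / n)  # importance-weight correction
            k = j
            est += w * b[k]
        total += est
    return total / n_walks

# Small test system with spectral radius well below 1
H = np.array([[0.1, 0.2], [0.3, 0.1]])
b = np.array([1.0, 2.0])
x0 = neumann_ulam_component(H, b, 0)   # exact value is 26/15
```

Short average walk lengths correspond to the fast-converging (optically thin) regime discussed in the abstract; a spectral radius near one lengthens the walks and slows convergence.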
Linear least-squares method for global luminescent oil film skin friction field analysis
Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu
2018-06-01
A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
Imprint of non-linear effects on HI intensity mapping on large scales
Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing the large-scale structure of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortion terms, modulates the power spectrum on large scales. This large-scale modulation may be understood as arising from the effective bias parameter and the effective shot noise.
Methods of scaling threshold color difference using printed samples
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples, on a semi-gloss paper substrate and with color differences of threshold magnitude, was prepared for scaling the visual color difference and for evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The visual color differences thus obtained were checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data are preserved.
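The probability-to-Z-score step described above is the classical inverse-normal transform from psychophysics: the proportion of observers who judge a pair "perceptibly different" is mapped through the inverse standard-normal CDF onto a common interval scale. A minimal sketch, with made-up proportions, using only the Python standard library:

```python
from statistics import NormalDist

def z_scores(perception_probs):
    """Map proportions of 'perceptibly different' judgments to Z-scores
    via the inverse standard-normal CDF (probit transform)."""
    nd = NormalDist()
    return [nd.inv_cdf(p) for p in perception_probs]

# Pairs judged different by 16%, 50% and 84% of observers land at
# roughly -1, 0 and +1 on the common Z-score scale.
zs = z_scores([0.16, 0.50, 0.84])
```

Because the transform is monotone, it changes the scale but preserves the ordering and relative spacing of the pairs, which is the point made at the end of the abstract.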
Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow
Kou, Jisheng
2017-12-06
In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
An Online Method for Interpolating Linear Parametric Reduced-Order Models
Amsallem, David; Farhat, Charbel
2011-01-01
A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.
Two media method for linear attenuation coefficient determination of irregular soil samples
Vici, Carlos Henrique Georges
2004-01-01
In several nuclear applications, such as soil physics and geology, the gamma-ray linear attenuation coefficient of irregular samples must be known. This work presents the validation of a methodology for determining the linear attenuation coefficient (μ) of irregularly shaped samples in such a way that the thickness of the sample need not be known. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, in the north of Paraná, were studied. The two-media method was employed for the determination of μ. It consists of determining μ by measuring the attenuation of a gamma-ray beam by the sample immersed sequentially in two different media with known, appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured density of the sample. This software uses the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two-media method against the simple gamma-ray transmission method, regular pumice stone samples were used. With the results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two-media method is a good tool for determining the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
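The two-media idea above can be written out in a few lines under a simplified model: the beam crosses a container of total path length L holding the sample (unknown thickness x) with the surrounding medium filling the rest, so I = I₀·exp(-μx - μᵢ(L - x)). Two immersions give two equations, eliminating x. This is a sketch of the algebra only; geometry, build-up and collimation effects are ignored, and all numbers are synthetic.

```python
import numpy as np

def mu_two_media(I0, I1, I2, mu1, mu2, L):
    """Linear attenuation coefficient of a sample of unknown thickness x
    from transmitted intensities I1, I2 measured with the sample immersed
    in media of known coefficients mu1, mu2 (narrow-beam model)."""
    d = np.log(I1 / I2) / (mu2 - mu1)     # d = L - x, path through the medium
    x = L - d                             # sample thickness, recovered
    return (np.log(I0 / I1) - mu1 * d) / x

# Synthetic check: mu = 0.5 /cm, x = 2 cm, L = 5 cm, media 0.1 and 0.3 /cm
I0, mu, x, L, mu1, mu2 = 1000.0, 0.5, 2.0, 5.0, 0.1, 0.3
I1 = I0 * np.exp(-mu * x - mu1 * (L - x))
I2 = I0 * np.exp(-mu * x - mu2 * (L - x))
mu_est = mu_two_media(I0, I1, I2, mu1, mu2, L)
```

Note that the sensitivity of the method improves as the two media's coefficients are chosen farther apart, since d is obtained by dividing by (μ₂ - μ₁).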
Larsen, E.W.; Alcouffe, R.E.
1981-01-01
In this article a new linear characteristic (LC) spatial differencing scheme for the discrete ordinates equations in (x,y)-geometry is described and numerical comparisons are given with the diamond difference (DD) method. The LC method is more stable with mesh size and is generally much more accurate than the DD method on both fine and coarse meshes, for eigenvalue and deep penetration problems. The LC method is based on computations involving the exact solution of a cell problem which has spatially linear boundary conditions and interior source. The LC method is coupled to the diffusion synthetic acceleration (DSA) algorithm in that the linear variations of the source are determined in part by the results of the DSA calculation from the previous inner iteration. An inexpensive negative-flux fixup is used which has very little effect on the accuracy of the solution. The storage requirements for LC are essentially the same as that for DD, while the computational times for LC are generally less than twice the DD computational times for the same mesh. This increase in computational cost is offset if one computes LC solutions on somewhat coarser meshes than DD; the resulting LC solutions are still generally much more accurate than the DD solutions. (orig.) [de
A NDVI assisted remote sensing image adaptive scale segmentation method
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images, which cover wide areas containing complicated ground objects, the appropriate number of segmentation scales, and the size of each scale, are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects in remote sensing images.
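The NDVI that drives the similarity threshold above is the standard per-pixel band ratio NDVI = (NIR - Red)/(NIR + Red), which lies in [-1, 1]; dense vegetation pushes it toward +1, bare soil and water toward 0 or below. A minimal sketch (reflectance values are made up):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index per pixel:
    NDVI = (NIR - Red) / (NIR + Red); eps guards against division by
    zero on dark pixels. Works on scalars or whole band arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# A vegetated pixel (high NIR, low Red) versus a bare pixel
veg = ndvi(0.5, 0.1)    # ≈ 0.667
bare = ndvi(0.1, 0.1)   # ≈ 0.0
```

Thresholding on the similarity of such values within a region is what lets the method grow segments adaptively instead of fixing one global scale.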
Deposit and scale prevention methods in thermal sea water desalination
Froehner, K.R.
1977-01-01
Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water, together with its thermal and chemical properties, leads to the formation of alkaline scale, or even hard sulphate scale, on the heat exchanger tube walls, and can seriously hamper plant operation and economics. The scale prevention methods include (1) pH control by acid dosing (decarbonation), (2) 'threshold treatment' by dosing inhibitors of different kinds, (3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, generally combined with methods 1 or 2, and (4) application of a slurry of scale crystal germs (seeding). Several other scale prevention proposals are mentioned. The problems encountered with marine life (suspension, deposits, growth) in desalination plants are touched upon. (orig.) [de
Elements of a method to scale ignition reactor Tokamak
Cotsaftis, M.
1984-08-01
Because of the unavoidable uncertainties in present scaling laws when they are projected to the thermonuclear regime, a method is proposed to minimize these uncertainties in order to determine the main parameters of an ignited tokamak. The method consists in searching for a domain, if one exists, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, one searches for the ignition domain that is the intersection of all possible ignition domains corresponding to all the scaling laws produced by all possible transports.
Method of producing nano-scaled inorganic platelets
Zhamu, Aruna; Jang, Bor Z.
2012-11-13
The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour; Chacón-Rebollo, Tomás
2015-01-01
A variational multi-scale method in which the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to elliptic operators that are not necessarily self-adjoint but have an associated base
Liebrecht, M.
2014-01-01
The importance of van der Waals interactions in many diverse research fields, such as polymer science, nano-materials, structural biology, surface science and condensed matter physics, has created a high demand for efficient and accurate methods that can describe van der Waals interactions from first principles. These methods should be able to deal with large and complex systems to predict functions and properties of materials that are technologically and biologically relevant. Van der Waals interactions arise from quantum mechanical correlation effects, and finding appropriate models and numerical techniques to describe this type of interaction is still an ongoing challenge in electronic structure and condensed matter theory. This thesis introduces a new variational approach for obtaining intermolecular interaction potentials between clusters and helium atoms by means of density functional theory and linear response methods. It scales almost linearly with the number of electrons and can therefore be applied to much larger systems than standard quantum chemistry techniques. The main focus of this work is the development of an ab initio method to account for London dispersion forces, which are purely attractive and dominate the interaction of non-polar atoms and molecules at large distances. (author) [de
Merton, S. R.; Smedley-Stevenson, R. P.; Pain, C. C.; Buchan, A. G.; Eaton, M. D.
2009-01-01
This paper describes a new Non-Linear Discontinuous Petrov-Galerkin (NDPG) method and its application to the one-speed Boltzmann Transport Equation (BTE) for space-time problems. The purpose of the method is to remove unwanted oscillations in the transport solution, which occur in the vicinity of sharp flux gradients, while improving computational efficiency and numerical accuracy. This is achieved by applying artificial dissipation in the solution gradient direction, internal to an element, using a novel finite element (FE) Riemann approach. The dissipation is added internal to each element, using a gradient-informed scaling of the advection velocities in the stabilisation term; this makes the method in its most general form non-linear. The method is designed to be independent of the angular expansion framework, which is demonstrated for both the discrete ordinates (S_N) and spherical harmonics (P_N) descriptions of the angular variable. Results show the scheme performs consistently well in demanding time-dependent and multi-dimensional radiation transport problems. (authors)
A New Method for Non-linear and Non-stationary Time Series Analysis:
The Hilbert Spectral Analysis
CERN. Geneva
2000-01-01
A new method for analysing non-linear and non-stationary data has been developed. The key part of the method is the Empirical Mode Decomposition, with which any complicated data set can be decomposed into a finite, and often small, number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same number of zero crossings and extrema, with symmetric envelopes defined by the local maxima and minima respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and therefore highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to non-linear and non-stationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Classical non-l...
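The Hilbert step of the method above, extracting an instantaneous frequency from a (mono-component) signal via the analytic signal, can be sketched with an FFT-based Hilbert transform; the EMD sifting itself is omitted here, and the test tone is invented for the example.

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency of a mono-component signal: build the
    analytic signal x + i*H[x] by zeroing negative FFT frequencies,
    then differentiate the unwrapped phase."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin for even n
    z = np.fft.ifft(X * h)           # analytic signal
    phase = np.unwrap(np.angle(z))
    return np.diff(phase) * fs / (2.0 * np.pi)

# A pure 5 Hz tone sampled for 1 s should read ~5 Hz throughout
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f_inst = instantaneous_frequency(np.sin(2 * np.pi * 5 * t), fs)
```

For a single IMF this phase-derivative frequency is well defined at every sample, which is what gives the Hilbert Spectrum its sharp time-frequency localization.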
Non-linear analysis of skew thin plate by finite difference method
Kim, Chi Kyung; Hwang, Myung Hwan
2012-01-01
This paper deals with a discrete analysis capability for predicting the geometrically nonlinear behavior of a skew thin plate subjected to uniform pressure. The differential equations are discretized by means of the finite difference method and reduced to several sets of linear algebraic simultaneous equations that determine the deflections and in-plane stress functions of the plate. For the geometrically non-linear, large-deflection behavior of the plate, non-linear plate theory is used. An iterative scheme is employed to solve these quasi-linear algebraic equations. Several problems are solved which illustrate the potential of the method for predicting finite deflections and stresses. For increasing lateral pressure, the maximum principal tensile stress occurs at the center of the plate and migrates toward the corners as the load increases; it was deemed important to describe the locations at which this maximum occurs. The load-deflection relations and the maximum bending and membrane stresses for each case are presented and discussed.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
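The fixed-knot regression-spline idea above can be illustrated with the truncated power basis, in which continuity and smoothness at the knots are built into the basis itself rather than imposed as side conditions, which is the spirit of the implicit reparameterization the authors describe. This is a plain least-squares sketch, not the mixed-model machinery of the paper; the knot, data and noise level are invented.

```python
import numpy as np

def spline_design(t, knots, order=2):
    """Design matrix for a regression spline of the given polynomial order
    with C^(order-1) continuity at each knot, via the truncated power
    basis: [1, t, ..., t^order, (t - k1)_+^order, ...]."""
    cols = [t ** p for p in range(order + 1)]
    cols += [np.clip(t - k, 0.0, None) ** order for k in knots]
    return np.column_stack(cols)

# Fit a C^1 piecewise-quadratic curve with one knot at t = 0.5
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
truth = t ** 2 - 2.0 * np.clip(t - 0.5, 0.0, None) ** 2   # true coefficients: [0, 0, 1, -2]
y = truth + 0.01 * rng.standard_normal(t.size)
X = spline_design(t, knots=[0.5], order=2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
```

Varying the polynomial order per segment, as the paper proposes, amounts to mixing different powers in the local basis while keeping the same join conditions.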
Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.
Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W
2005-01-01
Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package, in combination with E-state or ISIS keys, have been used. The best model was obtained using linear PLS for a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model, validated on a test set of 177 compounds not included in the training set, has r2 = 0.911 and RMSE = 0.475 log S(w). The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was performed it is expected to be a valuable prediction tool alongside the PLS model. The statistics obtained using the nonlinear methods did not surpass those obtained with the linear ones. The good statistics obtained for linear PLS and CR recommend these models for use in prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.
Giuliano de Oliveira Freitas
2013-10-01
PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to one of two phacoemulsification groups: one received an AcrySof® Toric intraocular lens (IOL) in both eyes, and the other received an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the postoperative and preoperative Thibos APV (APV ratio) and its linear regression against the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the APV ratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression found between the APV ratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
A Novel Method of Robust Trajectory Linearization Control Based on Disturbance Rejection
Xingling Shao
2014-01-01
A novel method of robust trajectory linearization control for a class of nonlinear systems with uncertainties, based on disturbance rejection, is proposed. Firstly, on the basis of the trajectory linearization control (TLC) method, a feedback-linearization-based control law is designed to transform the original tracking error dynamics into the canonical integral-chain form. To reduce the influence of uncertainties, a linear extended state observer (LESO), with the tracking error as input, is constructed to estimate the tracking error vector as well as the uncertainties in an integrated manner. Meanwhile, the boundedness of the estimation error is established by theoretical analysis. In addition, a decoupled controller based on the LESO, which is easy to tune and has a simple form, is synthesized to realize output tracking for the closed-loop system. The closed-loop stability of the system under the proposed LESO-based control structure is established. Simulation results are presented to illustrate the effectiveness of the control strategy.
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and in the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between the measured retention time (t_R) and the predicted t_R in some cases. It is therefore useful to develop an alternative, simple method for predicting t_R accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) is proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The t_R of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost in reference substances.
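The two-point prediction step of LCTRS reduces to fitting the line through the retention times of two reference substances measured on the standard column and on the column at hand, then mapping any standard t_R through that line. A minimal sketch; the retention times below are invented for illustration.

```python
def lctrs_predict(tr_std, ref_std, ref_meas):
    """Two-point LCTRS prediction: map a retention time from the standard
    column onto the current column via the line through two reference
    substances. ref_std, ref_meas: (t1, t2) on the standard column and on
    the current column, respectively (minutes)."""
    (s1, s2), (m1, m2) = ref_std, ref_meas
    slope = (m2 - m1) / (s2 - s1)
    return m1 + slope * (tr_std - s1)

# References eluting at (5, 10) min on the standard column elute at
# (5.5, 11.5) min on this column; a peak at 8 min maps to 9.1 min.
pred = lctrs_predict(8.0, (5.0, 10.0), (5.5, 11.5))
```

The multiple-point regression step of the method then checks that additional known peaks fall on (or near) the same line before the calibration is trusted.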
Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian
2017-04-11
A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular-orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular-orbital formulation are demonstrated. Our reformulation is based on two ideas, each independently useful: first, a Cholesky decomposition of the density matrices that reduces the scaling with basis-set size for a fixed-size molecule by one order, leading to massive performance improvements; second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear-scaling behavior.
Shuke, Noriyuki
1991-01-01
In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as the hepatic extraction or excretion rate, has been performed for quantitative evaluation of liver function. In this analysis, the unknown model parameters are usually determined by the nonlinear least-squares regression method (NLS method), which requires iterative calculation and initial estimates for the unknown parameters. As a simple alternative to the NLS method, the direct integral linear least-squares regression method (DILS method), which can determine the model parameters by a simple calculation without initial estimates, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. In order to see whether the DILS method could determine the model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a one-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the parameter-estimation error was examined, and the inverse of time was found to be the weight that minimizes the error. With this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of values were very close to the prefixed values. With appropriate weighting, the DILS method provides reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
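The direct-integral idea can be illustrated on the simplest one-compartment model: integrating dC/dt = -k C once gives C(t) = C0 - k ∫₀ᵗ C(s) ds, which is linear in (C0, k), so ordinary linear least squares suffices with no initial estimate or iteration. The sketch below is unweighted for simplicity (the abstract's preferred 1/t weight is undefined at t = 0); all parameter values are illustrative.

```python
import numpy as np

# Direct-integral linear least-squares (DILS-style) sketch for the
# one-compartment model dC/dt = -k*C.  Integrating once gives
#   C(t) = C0 - k * integral_0^t C(s) ds,
# which is linear in the unknowns (C0, k).
k_true, c0_true = 0.3, 10.0
t = np.linspace(0.0, 10.0, 50)
c = c0_true * np.exp(-k_true * t)          # noise-free simulated curve

# Cumulative integral of C by the trapezoidal rule.
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * np.diff(t) * (c[1:] + c[:-1]))))

# Design matrix for the linear model C(t) = C0 - k * cum(t).
A = np.column_stack([np.ones_like(t), -cum])
c0_est, k_est = np.linalg.lstsq(A, c, rcond=None)[0]
print(round(c0_est, 2), round(k_est, 2))   # recovers ~10.0 and ~0.3
```

With noisy data, a weight matrix (e.g. 1/t for t > 0, as the study found optimal) would simply scale the rows of `A` and `c` before the solve.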
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla; Bagci, Hakan
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
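The thresholded shrinkage iterations mentioned above take their simplest form in a linear setting: iterative shrinkage-thresholding (ISTA) for a sparse linear inverse problem. The sketch below is that simplified linear analogue, not the paper's electromagnetic inexact Newton scheme, and all problem sizes are illustrative.

```python
import numpy as np

# Soft-thresholded shrinkage iterations (ISTA) for the sparse recovery
# problem  min_x 0.5*||A x - y||^2 + lam*||x||_1  -- the linear analogue
# of the shrinkage step used inside an inexact Newton inversion loop.
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]     # sparse ground truth
y = A @ x_true                             # noise-free measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - y)                  # gradient of the smooth term
    z = x - g / L                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage

print(np.linalg.norm(x - x_true))          # small: support recovered
```

The preconditioning mentioned in the abstract would accelerate exactly this inner loop; reducing `lam` across outer Newton iterations mirrors the penalty-weight reduction described there.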
A Projected Non-linear Conjugate Gradient Method for Interactive Inverse Kinematics
Engell-Nørregård, Morten; Erleben, Kenny
2009-01-01
Inverse kinematics is the problem of posing an articulated figure to reach a desired goal, without regard to inertia and forces. Joint limits are modeled as bounds on individual degrees of freedom, leading to a box-constrained optimization problem. We present a projected non-linear conjugate gradient optimization method suitable for box-constrained optimization problems in inverse kinematics. We show an application to inverse kinematics positioning of a human figure. Performance is measured and compared to a traditional Jacobian transpose method. Visual quality of the developed method...
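The box projection at the heart of such methods is just a clamp of each joint angle to its limits after every step. The sketch below uses plain projected gradient descent (a simpler relative of the projected non-linear conjugate gradient method) on a planar 2-link arm; link lengths, limits, and the goal are illustrative assumptions.

```python
import math

# Box-constrained inverse kinematics for a planar 2-link arm via
# projected gradient descent: step along the negative gradient of the
# squared end-effector error, then clamp each joint into its limits.
LINK1, LINK2 = 1.0, 1.0
LO, HI = [0.0, 0.0], [math.pi, math.pi]     # joint-limit box
goal = (0.5, 1.2)                           # reachable within the limits

def end_effector(q):
    x = LINK1 * math.cos(q[0]) + LINK2 * math.cos(q[0] + q[1])
    y = LINK1 * math.sin(q[0]) + LINK2 * math.sin(q[0] + q[1])
    return x, y

def objective(q):
    x, y = end_effector(q)
    return 0.5 * ((x - goal[0]) ** 2 + (y - goal[1]) ** 2)

def grad(q, h=1e-6):                        # central finite differences
    g = []
    for i in range(2):
        qp = list(q); qp[i] += h
        qm = list(q); qm[i] -= h
        g.append((objective(qp) - objective(qm)) / (2 * h))
    return g

q = [0.5, 0.5]
for _ in range(5000):
    g = grad(q)
    # Gradient step followed by projection onto the box of joint limits.
    q = [min(max(q[i] - 0.2 * g[i], LO[i]), HI[i]) for i in range(2)]

x, y = end_effector(q)
print(round(x, 3), round(y, 3))             # close to the goal (0.5, 1.2)
```

A conjugate gradient variant would replace the steepest-descent direction with a conjugate direction and re-project after each line search, improving convergence on ill-conditioned poses.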
Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems
Oskar Cahueñas
2013-05-01
We address the problem of approximating the pseudoinverse of the coefficient matrix in order to dynamically build preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the usually ill-conditioned normal equations. We analyze a scheme for approximating the pseudoinverse, based on the Schulz iterative method, as well as different iterative schemes, based on extensions of Richardson's method and on the conjugate gradient method, that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
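The Schulz iteration referred to above is short enough to show in full: X_{k+1} = X_k (2I - A X_k), started from X_0 = Aᵀ / (‖A‖₁ ‖A‖_∞), converges quadratically to the Moore-Penrose pseudoinverse. The matrix below is a random toy example, not one of the paper's test problems.

```python
import numpy as np

# Schulz iteration for the pseudoinverse:  X_{k+1} = X_k (2I - A X_k).
# The scaled starting guess X_0 = A^T / (||A||_1 * ||A||_inf) guarantees
# convergence; the result can serve as a least-squares preconditioner.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))            # dense, full column rank

X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(8)
for _ in range(50):                        # quadratic convergence: 50 is ample
    X = X @ (2 * I - A @ X)

# Moore-Penrose check: A X A should reproduce A.
print(np.linalg.norm(A @ X @ A - A) < 1e-8)   # True
```

In a preconditioning context one would truncate this iteration early: even a crude approximate pseudoinverse can cluster the spectrum seen by the outer Richardson or conjugate gradient scheme.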
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-01-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to make the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour
2015-01-07
A variational multi-scale method in which the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes.
A large-scale linear complementarity model of the North American natural gas market
Gabriel, Steven A.; Jifang Zhuang; Kiet, Supat
2005-01-01
The North American natural gas market has seen significant changes recently due to deregulation and restructuring. For example, third-party marketers can contract for transportation and purchase of gas to sell to end-users. While the intent was a more competitive market, the potential for market power exists. We analyze this market using a linear complementarity equilibrium model including producers, storage and peak-gas operators, third-party marketers, and four end-use sectors. The marketers are depicted as Nash-Cournot players determining supply to meet end-use consumption; all other players are in perfect competition. Results based on National Petroleum Council scenarios are presented. (Author)
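Equilibrium models of this kind reduce to a linear complementarity problem (LCP): find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. A minimal projected Gauss-Seidel solver is sketched below on a toy 2×2 problem; M and q are illustrative, not market data, and the method is one standard LCP solver rather than the one used in the paper.

```python
import numpy as np

# Projected Gauss-Seidel for the linear complementarity problem
#   0 <= z  perp  M z + q >= 0,
# the form to which Nash-Cournot equilibrium models reduce.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # positive definite => unique solution
q = np.array([-1.0, 1.0])

z = np.zeros(2)
for _ in range(100):
    for i in range(2):
        # Enforce the i-th complementarity condition with the others fixed:
        # either z_i = 0, or the i-th row of M z + q is driven to zero.
        z[i] = max(0.0, z[i] - (q[i] + M[i] @ z) / M[i, i])

w = M @ z + q
print(z, w)    # z = [0.5, 0], w = [0, 1.5]: complementary and nonnegative
```

Large market models use specialized pivoting (e.g. Lemke-type) or interior-point LCP codes, but the complementarity structure is exactly this.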
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Orbital-free density functional theory (OFDFT) is a first-principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization. New version program summary: Program title: PROFESS. Catalogue identifier: AEBN_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 68 721. No. of bytes in distributed program, including test data, etc.: 1 708 547. Distribution format: tar.gz. Programming language: Fortran 90. Computer...
Iqbal, Javed; Yahia, I. S.; Zahran, H. Y.; AlFaify, S.; AlBassam, A. M.; El-Naggar, A. M.
2016-12-01
2′,7′-Dichlorofluorescein (DCF) is a promising organic semiconductor material for various technological applications such as solar cells, photodiodes, and Schottky diodes. A DCF thin film on conductive (FTO) glass was prepared by a low-cost spin-coating technique. Spectrophotometric data (absorbance, reflectance, and transmittance) were collected in the 350-2500 nm wavelength range at normal incidence. The linear refractive index (n) and absorption index (k) were computed using the Fresnel equations. The optical band gap was evaluated, and two band gaps were found: (1) a gap of 3.4 eV related to the FTO/glass substrate and (2) a gap of 2.25 eV related to the absorption edge of DCF. Non-linear parameters such as the non-linear refractive index (n2) and the optical susceptibility χ(3) were evaluated by the spectroscopic method based on the refractive index. Both n2 and χ(3) increased rapidly with increasing wavelength, with redshifted absorption. Our work suggests a new use of FTO glass for a new generation of optical devices and technology.
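The normal-incidence Fresnel relation used in such analyses is R = ((n-1)² + k²)/((n+1)² + k²) with k = αλ/4π, which can be inverted for n in closed form. The sketch below uses illustrative values of R and the absorption coefficient α, not the DCF film data.

```python
import math

# Recovering the optical constants n and k from normal-incidence
# reflectance R and the absorption coefficient alpha, via
#   R = ((n-1)^2 + k^2) / ((n+1)^2 + k^2),   k = alpha * lambda / (4*pi).
def nk_from_reflectance(R, alpha, wavelength_m):
    k = alpha * wavelength_m / (4.0 * math.pi)            # extinction index
    # Closed-form inversion of the Fresnel relation for n (outer root).
    n = (1 + R) / (1 - R) + math.sqrt(4 * R / (1 - R) ** 2 - k ** 2)
    return n, k

# Illustrative values: R = 0.20, alpha = 1e5 cm^-1-scale, 500 nm light.
n, k = nk_from_reflectance(R=0.20, alpha=1.0e5, wavelength_m=500e-9)
print(round(n, 3), round(k, 5))   # 2.618 0.00398
```

Substituting the recovered (n, k) back into the Fresnel relation reproduces R, which is a useful self-check when processing a full spectrum.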
Chein-Shan Liu
2012-04-01
It is well known that the numerical algorithms of the steepest descent method (SDM) and the conjugate gradient method (CGM) are effective for solving well-posed linear systems. However, they are vulnerable to noisy disturbances when solving ill-posed linear systems. We propose modifications of SDM and CGM, namely the modified steepest descent method (MSDM) and the modified conjugate gradient method (MCGM). The starting point is an invariant manifold defined in terms of a minimum functional and a fictitious time-like variable; in the final stage, however, we derive a purely iterative algorithm including an acceleration parameter. Through a Hopf bifurcation, this parameter plays a major role in switching from slow convergence to a new situation in which the functional decreases stepwise very rapidly. Several numerical examples are examined and compared with exact solutions, revealing that the new MSDM and MCGM algorithms have good computational efficiency and accuracy, even for highly ill-conditioned linear systems with large noise imposed on the given data.
VLSI scaling methods and low power CMOS buffer circuit
Sharma Vijay Kumar; Pattanaik Manisha
2013-01-01
Device scaling is an important part of very-large-scale integration (VLSI) design, sustaining the success of the VLSI industry through denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both worsen with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance when scaling devices. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of the CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results were obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage, and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage-current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)
Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.
2008-01-01
We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency
Linear programming models and methods of matrix games with payoffs of triangular fuzzy numbers
Li, Deng-Feng
2016-01-01
This book addresses two-person zero-sum finite games in which the payoffs in any situation are expressed as fuzzy numbers. The purpose of this book is to develop a suite of effective and efficient linear programming models and methods for solving matrix games with payoffs in fuzzy numbers. Divided into six chapters, it discusses the concepts of solutions of matrix games with interval payoffs, along with their linear programming models and methods. Furthermore, it is directly relevant to the research field of matrix games under uncertain economic management. The book offers a valuable resource for readers involved in theoretical research and practical applications from a range of different fields including game theory, operational research, management science, fuzzy mathematical programming, fuzzy mathematics, industrial engineering, business, and social economics.
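As background to the LP formulations the book develops, the value of a crisp zero-sum matrix game can also be approximated iteratively by fictitious play, where each player repeatedly best-responds to the opponent's empirical mixture; the empirical frequencies converge to optimal mixed strategies. The payoff matrix below (rock-paper-scissors) is illustrative; the book's fuzzy-payoff games reduce to families of crisp problems of this kind.

```python
# Fictitious play for a zero-sum matrix game (payoffs for the row
# player).  Each player best-responds to the opponent's empirical
# mixture of past plays; frequencies converge to optimal strategies.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]          # rock-paper-scissors; value 0, optimum uniform
m, n = len(A), len(A[0])
row_counts = [0] * m
col_counts = [0] * n
row_play, col_play = 0, 0          # arbitrary initial pure strategies

for _ in range(20000):
    row_counts[row_play] += 1
    col_counts[col_play] += 1
    # Row player maximizes, column player minimizes, against the
    # opponent's empirical frequencies so far.
    row_play = max(range(m),
                   key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n)))
    col_play = min(range(n),
                   key=lambda j: sum(A[i][j] * row_counts[i] for i in range(m)))

total = sum(row_counts)
mix = [c / total for c in row_counts]
print([round(p, 2) for p in mix])   # near the optimal uniform mixture
```

An exact solution would instead solve the standard primal/dual LP pair for the two players, which is the machinery the book extends to fuzzy payoffs.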
Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis
Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group
2004-07-01
Following large earthquakes such as the 1995 Kobe Earthquake, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems in using nonlinear analysis to evaluate the safety of dams include the large influence of the assumed material properties on the results, and the fact that the results differ greatly according to the damage-estimation models or analysis programs used. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams of different shapes using a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential crack-initiation locations, the damage due to cracks can be predicted approximately. 4 refs., 1 tab., 13 figs.
A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.
Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng
2016-08-25
The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array, which consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, the true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and candidate DOAs are recovered from the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff than other existing methods.
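The building block applied to each subarray is standard ESPRIT, shown below for a single half-wavelength uniform linear array: the rotational invariance between the two shifted subarrays yields the DOAs from the eigenvalues of a small matrix. Array size, snapshot count, and angles are illustrative assumptions, not the paper's co-prime configuration.

```python
import numpy as np

# Standard ESPRIT on a uniform linear array with half-wavelength spacing.
rng = np.random.default_rng(0)
M, N = 8, 400                                  # sensors, snapshots
true_doas = np.deg2rad([-20.0, 35.0])

m = np.arange(M)[:, None]
steering = np.exp(1j * np.pi * m * np.sin(true_doas)[None, :])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = steering @ S + noise

R = X @ X.conj().T / N                         # sample covariance
w, V = np.linalg.eigh(R)
Es = V[:, -2:]                                 # signal subspace (2 sources)

# Rotational invariance between the two shifted subarrays of the ULA.
Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]
phases = np.angle(np.linalg.eigvals(Psi))      # = pi * sin(theta)
doas = np.sort(np.rad2deg(np.arcsin(phases / np.pi)))
print(np.round(doas, 1))                       # close to [-20, 35] degrees
```

In the co-prime setting, each sparse subarray produces several such "equivalent" angle estimates (the extended spacing aliases the phase), and the true DOAs are those consistent across both subarrays.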
Acceleration of step and linear discontinuous schemes for the method of characteristics in DRAGON5
Alain Hébert
2017-09-01
The applicability of the algebraic collapsing acceleration (ACA) technique to the method of characteristics (MOC) in cases with scattering anisotropy and/or linear sources was investigated. Previously, the ACA had proven successful in cases with isotropic scattering and uniform (step) sources. A presentation is first made of the MOC implementation available in the DRAGON5 code. Two categories of schemes are available for integrating the propagation equations: (1) the first is based on exact integration and leads to the classical step characteristics (SC) and linear discontinuous characteristics (LDC) schemes, and (2) the second leads to diamond differencing schemes of various orders in space. The focus is the acceleration of these MOC schemes using a combination of the generalized minimal residual method [GMRES(m)] preconditioned with the ACA technique. Numerical results are provided for a two-dimensional (2D) eighth-symmetry pressurized water reactor (PWR) assembly mockup in the context of the DRAGON5 code.
Stability of numerical method for semi-linear stochastic pantograph differential equations
Yu Zhang
2016-01-01
As a particular class of stochastic delay differential equations, stochastic pantograph differential equations have been widely used in nonlinear dynamics, quantum mechanics, and electrodynamics. In this paper, we mainly study the stability of analytical and numerical solutions of semi-linear stochastic pantograph differential equations. Suitable conditions for the mean-square stability of the analytical solution are obtained. We then prove the general mean-square stability of the exponential Euler method for the numerical solution of semi-linear stochastic pantograph differential equations; that is, if the analytical solution is stable, then the exponential Euler method applied to the system is mean-square stable for arbitrary step size h > 0. Numerical examples further illustrate the obtained theoretical results.
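A scalar sketch makes the scheme concrete. For dX(t) = aX(t)dt + bX(qt)dW(t) with 0 < q < 1, one common form of the exponential Euler step is X_{n+1} = exp(ah)(X_n + bX(qt_n)ΔW_n), with the delayed value read from the stored grid. The form of the update, the nearest-node delay lookup, and all parameters below are illustrative assumptions, not necessarily the exact scheme analyzed in the paper.

```python
import math, random

# Exponential Euler sketch for the scalar stochastic pantograph equation
#   dX(t) = a*X(t) dt + b*X(q*t) dW(t),   0 < q < 1,
# using the update X_{n+1} = exp(a*h) * (X_n + b * X(q*t_n) * dW_n).
random.seed(42)
a, b, q = -2.0, 0.2, 0.5      # stable drift, weak delayed diffusion
h, steps = 0.01, 2000
xs = [1.0]                    # X(0); the grid doubles as delay storage

for n in range(steps):
    x_delayed = xs[int(q * n)]            # grid node just below q*t_n
    dW = random.gauss(0.0, math.sqrt(h))  # Brownian increment
    xs.append(math.exp(a * h) * (xs[-1] + b * x_delayed * dW))

print(abs(xs[-1]) < 1e-3)     # True: the trajectory decays (mean-square stable)
```

The pantograph structure is visible in the lookup `xs[int(q * n)]`: unlike a fixed delay, the lag q·t grows with t, so the whole past trajectory must be kept.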
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
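One of the known simplex-projection algorithms the duality/Newton framework connects to is the classical sort-and-threshold routine, shown below for the standard simplex {x : x ≥ 0, Σxᵢ = 1}; the input vector is illustrative.

```python
# Euclidean projection onto the standard simplex via sort-and-threshold:
# find the threshold theta such that x_i = max(y_i - theta, 0) sums to 1.
def project_simplex(y):
    u = sorted(y, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:          # component i is still active at threshold t
            theta = t
    return [max(v - theta, 0.0) for v in y]

x = project_simplex([0.5, 1.2, -0.3])
print([round(v, 3) for v in x])   # [0.15, 0.85, 0.0]
```

The sort makes this O(n log n); the Newton-on-the-dual approach described in the abstract reaches the same threshold θ by maximizing a piecewise-quadratic dual function instead.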
Stochastic Least-Squares Petrov-Galerkin Method for Parameterized Linear Systems
Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies
2018-03-29
Here, we consider the numerical solution of parameterized linear systems in which the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.
Tomas, Jose M.; Oliver, Amparo
1999-01-01
Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)
A multi-scale method of mapping urban influence
Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters
2009-01-01
Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...
2016-11-22
…structure of the graph, we replace the ℓ1-norm by the nonconvex capped-ℓ1 norm, and obtain the generalized capped-ℓ1 regularized logistic regression… capped norms give better approximations of the ℓ0-norm, theoretically and computationally, than the ℓ1-norm, for example in compressive sensing (Xiao et al., 2011).
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
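The replicate-based variant of regression calibration can be sketched numerically: the error variance is estimated from replicate differences, the error-prone mean proxy is shrunk toward its mean by the reliability ratio, and the outcome model is then fit as usual. Simulated data below, with a plain linear outcome model standing in for the general GLM; all parameter values are illustrative.

```python
import numpy as np

# Regression calibration with replicate error-prone proxies W = X + U:
# replace W-bar by the best linear predictor of X given W-bar, then fit.
rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)                   # true covariate (unobserved)
w1 = x + 0.8 * rng.standard_normal(n)        # two error-prone replicates
w2 = x + 0.8 * rng.standard_normal(n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)   # outcome, true slope = 2

wbar = (w1 + w2) / 2
var_u = np.var(w1 - w2) / 2                  # Var(U) from replicate differences
var_x = np.var(wbar) - var_u / 2             # Var(W-bar) = Var(X) + Var(U)/2
lam = var_x / (var_x + var_u / 2)            # reliability ratio of W-bar
x_hat = wbar.mean() + lam * (wbar - wbar.mean())   # calibrated covariate

naive = np.polyfit(wbar, y, 1)[0]            # attenuated slope
calibrated = np.polyfit(x_hat, y, 1)[0]      # corrected slope, near 2
print(round(naive, 2), round(calibrated, 2))
```

The naive fit is biased toward zero by the reliability ratio; calibration undoes exactly that attenuation, which is the point of the technique.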
An enhanced finite volume method to model 2D linear elastic structures
Suliman, Ridhwaan
2014-04-01
Preprint submitted to Applied Mathematical Modelling, July 22, 2013. Keywords: finite volume, finite element, locking, error analysis. Since the 1960s, the finite element method has mainly been used for modelling the mechanics... formulation provides higher accuracy for displacement solutions. It is well known that the linear finite element formulation suffers from sensitivity to element aspect ratio, or shear locking, when subjected to bending [16]. Fallah [8] and Wheel [6] present...
Solution of second order linear fuzzy difference equation by Lagrange's multiplier method
Sankar Prasad Mondal
2016-06-01
In this paper we carry out the solution procedure for a second-order linear fuzzy difference equation by Lagrange's multiplier method. In the crisp sense the difference equation is easy to solve, but taken in the fuzzy sense it forms a system of difference equations which is not so easy to solve. With the help of Lagrange's multiplier it can be solved easily. The results are illustrated by two different numerical examples and followed by two applications.
Linear motion device and method for inserting and withdrawing control rods
Smith, J.E.
Disclosed is a linear motion device, and more specifically a control rod drive mechanism (CRDM), for inserting and withdrawing control rods into a reactor core. The CRDM and method disclosed are capable of independently and sequentially positioning two sets of control rods with a single motor stator and rotor. The CRDM disclosed can control more than one control rod lead screw without incurring a substantial increase in the size of the mechanism.
On the economical solution method for a system of linear algebraic equations
Jan Awrejcewicz
2004-01-01
The present work proposes a novel optimal and exact method for solving large systems of linear algebraic equations. In the approach under consideration, the solution of a system of linear algebraic equations is found as the point of intersection of hyperplanes, which requires a minimal amount of computer storage. Two examples are given. In the first example, the boundary value problem for a three-dimensional stationary heat transfer equation in a parallelepiped in ℝ3 is considered, with boundary conditions of the first, second, or third kind, or their combinations. The governing differential equations are reduced to algebraic ones with the help of the finite element and boundary element methods for different meshes. The obtained results are compared with known analytical solutions. The second example concerns computation of a nonhomogeneous shallow, physically and geometrically nonlinear shell subject to a transversal uniformly distributed load. The partial differential equations are reduced to a system of nonlinear algebraic equations with error O(h_{x1}^2 + h_{x2}^2). The linearization is realized either through Newton's method or through differentiation with respect to a parameter. In consequence, the relations for the boundary-condition variations along the shell side and the conditions for solution matching are reported.
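The abstract's geometric picture, the solution as an intersection point of hyperplanes, is shared by the classical Kaczmarz method, which projects the iterate onto one hyperplane (equation) at a time and also needs minimal storage. The sketch below is Kaczmarz on a tiny system, shown to illustrate the geometry rather than the paper's exact scheme.

```python
import numpy as np

# Kaczmarz iteration: cyclically project the iterate onto the hyperplane
# defined by each equation a_i . x = b_i; the iterates converge to the
# intersection point of the hyperplanes, i.e. the solution.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])            # exact solution: x = (2, 3)

x = np.zeros(2)
for _ in range(200):
    for i in range(2):
        a = A[i]
        # Orthogonal projection of x onto the hyperplane a.x = b_i.
        x = x + (b[i] - a @ x) / (a @ a) * a

print(np.round(x, 6))               # [2. 3.]
```

Only one row of the system is touched per update, which is why hyperplane-projection schemes need so little working memory on large problems.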
Franklin, Timothy C; Granata, Kevin P; Madigan, Michael L; Hendricks, Scott L
2008-08-01
Linear stability methods were applied to a biomechanical model of the human musculoskeletal spine to investigate effects of reflex gain and reflex delay on stability. Equations of motion represented a dynamic 18 degrees-of-freedom rigid-body model with time-delayed reflexes. Optimal muscle activation levels were identified by minimizing metabolic power with the constraints of equilibrium and stability with zero reflex time delay. Muscle activation levels and associated muscle forces were used to find the delay margin, i.e., the maximum reflex delay for which the system was stable. Results demonstrated that stiffness due to antagonistic co-contraction necessary for stability declined with increased proportional reflex gain. Reflex delay limited the maximum acceptable proportional reflex gain, i.e., long reflex delay required smaller maximum reflex gain to avoid instability. As differential reflex gain increased, there was a small increase in acceptable reflex delay. However, differential reflex gain with values near intrinsic damping caused the delay margin to approach zero. Forward-dynamic simulations of the fully nonlinear time-delayed system verified the linear results. The linear methods accurately found the delay margin below which the nonlinear system was asymptotically stable. These methods may aid future investigations in the role of reflexes in musculoskeletal stability.
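A scalar toy problem illustrates the delay-margin idea used above: for x'(t) = -g·x(t - τ), the system is stable iff g·τ < π/2, so the delay margin is τ* = π/(2g). The sketch below computes the margin and verifies it by simulation; the gain value is illustrative, and the musculoskeletal model in the abstract performs the multi-dimensional analogue of this calculation.

```python
import math

# Delay margin for the scalar delayed-feedback system x'(t) = -g*x(t-tau):
# stable iff g*tau < pi/2, so the margin is tau* = pi/(2*g).
g = 2.0
delay_margin = math.pi / (2.0 * g)

def peak_after_transient(tau, T=60.0, h=1e-3):
    """Forward-Euler simulation; peak |x| over the final 10 s."""
    n_delay = int(tau / h)
    xs = [1.0] * (n_delay + 1)          # constant history x = 1
    for _ in range(int(T / h)):
        xs.append(xs[-1] - h * g * xs[-1 - n_delay])
    return max(abs(v) for v in xs[-int(10.0 / h):])

print(round(delay_margin, 3))                            # 0.785
print(peak_after_transient(0.5 * delay_margin) < 1e-3)   # below margin: decays
print(peak_after_transient(1.2 * delay_margin) > 1.0)    # above margin: grows
```

In the reflex model, g plays the role of the proportional reflex gain, which is why larger gains shrink the acceptable delay, just as τ* = π/(2g) shrinks here.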
An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem
Meriem Ait Mehdi
2014-01-01
We describe an improvement of Chergui and Moulaï's method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch-and-cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria's growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns, firstly, the optimization process, where a linear program that we define later is solved at each step rather than a fractional linear program. Secondly, local ideal and nadir points are used as bounds to prune some branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one on all the treated instances.
American Society for Testing and Materials. Philadelphia
1995-01-01
1.1 This test method covers the interferometric determination of linear thermal expansion of premelted glaze frits and fired ceramic whiteware materials at temperatures lower than 1000°C (1830°F). 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Massimiliano Ferraioli
2016-01-01
Although the most commonly used isolation systems exhibit nonlinear inelastic behaviour, equivalent linear elastic analysis is commonly used in the design and assessment of seismically isolated structures. The paper investigates whether the linear elastic model is suitable for the analysis of a seismically isolated multiple building structure. To this aim, its computed responses were compared with those calculated by nonlinear dynamic analysis. A common base isolation plane connects the isolation bearings supporting the adjacent structures. In this situation, the conventional equivalent linear elastic analysis may have accuracy problems because this method is calibrated on single base-isolated structures. Moreover, the torsional characteristics of the combined system are significantly different from those of the separate isolated buildings. A number of numerical simulations and parametric studies under earthquake excitations were performed. The accuracy of the dynamic response obtained by the equivalent linear elastic model was quantified by the magnitude of the error with respect to the corresponding response computed with the nonlinear behaviour of the isolation system. The maximum displacements at the isolation level, the maximum interstorey drifts, and the peak absolute accelerations were selected as the most important response measures. The influence of mass eccentricity, torsion, and higher-mode effects was finally investigated.
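As background to the equivalent linear analysis discussed above, the standard equivalent-linearization of a bilinear hysteretic isolator replaces it with a secant stiffness and an equivalent viscous damping ratio derived from the hysteresis loop area. A sketch of these textbook relations (parameter names are illustrative; this is not the paper's specific model):

```python
import math

def equivalent_linear(Q, k2, D_y, D):
    """Equivalent-linear properties of a bilinear isolator at design
    displacement D: Q is the characteristic strength, k2 the post-yield
    stiffness, D_y the yield displacement.  k_eff is the secant stiffness
    and zeta_eq the equivalent viscous damping ratio
    (zeta = E_loop / (4 * pi * E_strain), E_strain = k_eff * D**2 / 2)."""
    k_eff = (Q + k2 * D) / D            # secant stiffness at D
    E_loop = 4.0 * Q * (D - D_y)        # energy dissipated per cycle
    zeta_eq = E_loop / (2.0 * math.pi * k_eff * D ** 2)
    return k_eff, zeta_eq
```

These two scalars are what the equivalent linear elastic model feeds into a conventional modal or response-spectrum analysis in place of the nonlinear isolator.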
An Improved Isotropic Periodic Sum Method That Uses Linear Combinations of Basis Potentials
Takahashi, Kazuaki Z.; Narumi, Tetsu; Suh, Donguk; Yasuoka, Kenji
2012-11-13
Isotropic periodic sum (IPS) is a technique that calculates long-range interactions differently from conventional lattice sum methods. The difference between IPS and lattice sum methods lies in the shape and distribution of the remote images used for long-range interaction calculations. The images used in lattice sum calculations are identical to those generated from periodic boundary conditions and are discretely positioned at lattice points in space. The images for IPS calculations are "imaginary", which means they do not explicitly exist in the simulation system and are distributed isotropically and periodically around each particle. Two different versions of the original IPS method exist: the IPSn method is applied to calculations for point charges, whereas the IPSp method is applied to polar molecules. However, both IPSn and IPSp have their advantages and disadvantages in simulating bulk water or water-vapor interfacial systems. In bulk water systems, the cutoff radius effect of IPSn strongly affects the configuration, whereas IPSp does not provide adequate estimations of water-vapor interfacial systems unless very long cutoff radii are used. To extend the applicability of the IPS technique, an improved IPS method, which has better accuracy in both homogeneous and heterogeneous systems, has been developed and named the linear-combination-based isotropic periodic sum (LIPS) method. This improved IPS method uses linear combinations of basis potentials. We performed molecular dynamics (MD) simulations of bulk water and water-vapor interfacial systems to evaluate the accuracy of the LIPS method. For bulk water systems, the LIPS method has better accuracy than IPSn in estimating thermodynamic and configurational properties without the countercharge assumption, which is used for IPSp. For water-vapor interfacial systems, LIPS has better accuracy than IPSp and properly estimates thermodynamic and configurational properties. In conclusion, the LIPS method can successfully estimate
One step linear reconstruction method for continuous wave diffuse optical tomography
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on polyvinyl-chloride-based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to the data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulation and experimental data. A numerical object is used to produce the simulation data; polyvinyl-chloride-based material and a breast phantom sample are used to produce the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method are almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
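The one-step recovery described above amounts to a single regularized linear inversion of the difference data. A minimal sketch, assuming a known sensitivity (Jacobian) matrix J and a Tikhonov-type regularization coefficient lam (both names are illustrative; the abstract does not specify the exact regularization scheme):

```python
import numpy as np

def one_step_reconstruction(J, delta_y, lam):
    """Recover the change in optical parameters from the difference
    delta_y between boundary measurements taken with and without a
    perturbation, in one linear step:
        dx = (J^T J + lam I)^{-1} J^T delta_y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ delta_y)
```

Working with the measurement *difference* is what makes a single linear step adequate: the unperturbed state cancels, so only the change in optical properties has to be reconstructed.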
Taylor, Ellen Meredith
Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed. The
The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations
Ashour-Abdalla, Maha
2011-01-01
To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008, four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy through non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain 100s of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ≈ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.
Flexible non-linear predictive models for large-scale wind turbine diagnostics
Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole
2017-01-01
We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models is used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance of the models is quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet-wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system.
Study of vibrations and stabilization of linear collider final doublets at the sub-nanometer scale
Bolzon, B.
2007-11-01
CLIC is one of the current projects for high-energy linear colliders. Vertical beam sizes of 0.7 nm at the collision point and fast ground motion of a few nanometers impose active stabilization of the final doublets to a fifth of a nanometer above 4 Hz. The majority of my work concerned vibration and active stabilization studies of cantilevered and slim beams chosen to be representative of the final doublets of CLIC. In the first part, measured performances of different types of vibration sensors with appropriate instrumentation showed that accurate measurements of ground motion are possible from 0.1 Hz up to 2000 Hz on a quiet site. Also, electrochemical sensors that a priori meet the specifications of CLIC can be incorporated in the active stabilization at a fifth of a nanometer. In the second part, an experimental and numerical study of beam vibrations made it possible to validate the efficiency of the numerical prediction, which was then incorporated in the simulation of the active stabilization. Also, a study of the impact of ground motion and of acoustic noise on beam vibrations showed that active stabilization is necessary at least up to 1000 Hz. In the third part, results on the active stabilization of a beam at its first two resonances are shown, down to amplitudes of a tenth of a nanometer above 4 Hz, using in parallel a commercial system performing passive and active stabilization of the clamping. The last part is related to the study of a support for the final doublets of a linear collider prototype in its finalization phase, the ATF2 prototype. This work showed that the relative motion between this support and the ground is below the imposed tolerances (6 nm above 0.1 Hz) with appropriate boundary conditions. (author)
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to a finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
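The point about iterative solvers can be made concrete: stopping on a small residual does not guarantee a small error, because the two are related through A, and the gap grows with the condition number. A small numpy illustration of the standard bound ||x - x_hat|| <= ||A^-1|| ||r||:

```python
import numpy as np

# Ill-conditioned 2x2 system: a tiny residual does not imply a tiny error.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

x_hat = np.array([2.0, 0.0])            # an approximate iterate
r = b - A @ x_hat                        # residual used by stopping criteria
err = np.linalg.norm(x_true - x_hat)     # actual error (here, of order 1)

# Since x - x_hat = A^{-1} r, the error is bounded by ||A^{-1}|| * ||r||.
bound = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(r)
```

Here the residual norm is about 1e-4 while the true error is about 1.4, exactly the situation where residual-based stopping criteria give poor control of the actual error.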
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
Williams, Mark L.; Rearden, Bradley T.
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
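TSUNAMI computes sensitivity coefficients with adjoint-based perturbation theory; purely for illustration, the relative sensitivity coefficient it produces, S = (sigma/R)(dR/dsigma), can be approximated by a central finite difference on any response function (the helper below is a hypothetical sketch, not SCALE code):

```python
def relative_sensitivity(response, sigma, rel_step=1e-6):
    """Central-difference estimate of the relative sensitivity
    coefficient S = (sigma/R) * dR/dsigma of a response R (e.g. k_eff)
    with respect to a parameter sigma (e.g. a cross section)."""
    h = sigma * rel_step
    dR = (response(sigma + h) - response(sigma - h)) / (2.0 * h)
    return sigma * dR / response(sigma)
```

For example, a response that scales as sigma**2 has relative sensitivity exactly 2: a 1% change in the parameter produces a 2% change in the response.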
A Robust Non-Gaussian Data Assimilation Method for Highly Non-Linear Models
Elias D. Nino-Ruiz
2018-03-01
In this paper, we propose an efficient EnKF implementation for non-Gaussian data assimilation based on Gaussian mixture models and Markov chain Monte Carlo (MCMC) methods. The proposed method works as follows: based on an ensemble of model realizations, prior errors are estimated via a Gaussian mixture density whose parameters are approximated by means of an expectation-maximization method. Then, using an iterative method, observation operators are linearized about current solutions and posterior modes are estimated via an MCMC implementation. The acceptance/rejection criterion is similar to that of the Metropolis-Hastings rule. Experimental tests are performed on the Lorenz 96 model. The results show that the proposed method can decrease prior errors by several orders of magnitude in a root-mean-square-error sense for nearly sparse or dense observational networks.
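The accept/reject rule the abstract compares to Metropolis-Hastings can be sketched with a plain random-walk Metropolis sampler on a one-dimensional target (a generic illustration; the paper's sampler targets posterior modes of the linearized assimilation problem):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=20000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density.
    A Gaussian proposal is accepted with probability
    min(1, p(proposal)/p(current)), the Metropolis-Hastings rule."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:   # accept/reject step
            x, lp = y, lq
        samples.append(x)
    return samples
```

Run against a standard normal target, the sample mean and variance settle near 0 and 1, the usual sanity check for a sampler of this type.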
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte Carlo means of calculating small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.
Method for the mechanical axis alignment of the linear induction accelerator
Li Hong; China Academy of Engineering Physics, Mianyang; Yao Jin; Liu Yunlong; Zhang Linwen; Deng Jianjun
2004-01-01
Accurate mechanical axis alignment is a basic requirement for assembling a linear induction accelerator (LIA). The total length of an LIA is usually over thirty or even fifty meters, and it consists of many induction cells. Using a laser tracker, a new method of mechanical axis alignment for an LIA is established to achieve high accuracy. This paper introduces the method and gives the implementation steps and the point-position measurement errors of the mechanical axis alignment. During the alignment process a 55 m long alignment control survey net is built, and the theoretical revision of the coordinates of the control survey net is presented. (authors)
Reproducing kernel method with Taylor expansion for linear Volterra integro-differential equations
Azizallah Alvandi
2017-06-01
This research presents a new, single algorithm for linear integro-differential equations (LIDEs). To apply the reproducing kernel Hilbert space method, an equivalent transformation is made by using Taylor series for solving LIDEs. The analytical solution is represented in series form in the reproducing kernel space, and the approximate solution $u_{N}$ is constructed by truncating the series to $N$ terms. It is easy to prove the convergence of $u_{N}$ to the analytical solution. The numerical solutions obtained with the proposed method indicate that the approach can be implemented easily and shows attractive features.
Nagy, D.L.; Dengler, J.; Ritter, G.
1988-01-01
A model-independent evaluation of the components of poorly resolved Mössbauer spectra based on a linear combination method is possible if there is a parameter as a function of which the shapes of the individual components do not change but their intensities do, and the dependence of the intensities on this parameter is known. The efficiency of the method is demonstrated on the example of low-temperature magnetically split spectra of the high-Tc superconductor YBa2(Cu0.9Fe0.1)3O7-y. (author)
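When the component shapes are fixed and only their intensities vary, the decomposition at each parameter value reduces to a linear least-squares fit. A small numpy sketch with two synthetic component shapes (Gaussians here, purely illustrative; Mössbauer components are typically Lorentzian):

```python
import numpy as np

# Two fixed component shapes; only their intensities vary with the
# external parameter, which is what the linear combination method exploits.
x = np.linspace(-5.0, 5.0, 201)
comp1 = np.exp(-(x - 1.0) ** 2)
comp2 = np.exp(-(x + 1.0) ** 2)

true_intensities = np.array([0.7, 0.3])
spectrum = true_intensities[0] * comp1 + true_intensities[1] * comp2

# Least-squares fit of the measured spectrum as a linear combination
# of the known component shapes recovers the intensities.
B = np.column_stack([comp1, comp2])
fitted, *_ = np.linalg.lstsq(B, spectrum, rcond=None)
```

Repeating this fit across the parameter values (e.g. temperatures) yields the intensity dependence from which the components of a poorly resolved spectrum can be separated.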
Linear dynamic analysis of arbitrary thin shells modal superposition by using finite element method
Goncalves Filho, O.J.A.
1978-11-01
The linear dynamic behaviour of arbitrary thin shells is studied by the finite element method. Plane triangular elements with eighteen degrees of freedom each are used. The general equations of motion are obtained from Hamilton's principle and solved by the modal superposition method. The presence of viscous-type damping can be considered by means of percentages of the critical damping. An automatic computer program was developed to provide the vibratory properties and the dynamic response to several types of deterministic loadings, including temperature effects. The program was written in FORTRAN IV for the Burroughs B-6700 computer. (author)
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
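For a homogeneous Poisson process with a gamma prior on the rate, the conjugate update and the method-of-moments prior construction mentioned above take a particularly simple form (a generic sketch of these standard relations, not the paper's full Bayes linear Bayes machinery):

```python
def moments_to_gamma(mean, var):
    """Method-of-moments fit of a Gamma(alpha, beta) prior to a given
    prior mean and variance of the event rate:
    mean = alpha/beta, var = alpha/beta**2."""
    return mean ** 2 / var, mean / var

def gamma_poisson_update(alpha, beta, events, exposure):
    """Conjugate update for a homogeneous Poisson process rate:
    observing `events` counts over `exposure` time turns
    Gamma(alpha, beta) into Gamma(alpha + events, beta + exposure).
    Returns the posterior parameters and the posterior mean rate."""
    a_post = alpha + events
    b_post = beta + exposure
    return a_post, b_post, a_post / b_post
```

The closed-form update is what makes gamma-Poisson models attractive inside a larger modeling process, which is exactly the efficiency motivation the abstract describes.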
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images as cloud or non-cloud. But when the cloud is thin and small, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, making the cloud detection result more accurate. Firstly, the automatic cloud detection program in this paper uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. The AdaBoost classifier can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
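The feature-combining step can be illustrated with a minimal AdaBoost over one-dimensional threshold stumps (a generic sketch; the paper's classifier combines image features, abstracted here into a feature matrix X with labels y in {-1, +1}):

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost: each round picks the weighted-error-minimizing
    threshold stump over all features, then reweights the samples so
    the next round focuses on what was misclassified."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []                        # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = min(max(err, 1e-12), 1 - 1e-12)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((j, t, pol, alpha))
        w *= np.exp(-alpha * y * pred)             # upweight mistakes
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for j, t, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] >= t, 1, -1)
    return np.where(score >= 0, 1, -1)
```

Because each round keeps only the single most discriminative stump, the ensemble implicitly performs the feature selection the abstract credits with the reduced calculation time.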
A linear complementarity method for the solution of vertical vehicle-track interaction
Zhang, Jian; Gao, Qiang; Wu, Feng; Zhong, Wan-Xie
2018-02-01
A new method is proposed for the solution of the vertical vehicle-track interaction including a separation between wheel and rail. The vehicle is modelled as a multi-body system using rigid bodies, and the track is treated as a three-layer beam model in which the rail is considered as an Euler-Bernoulli beam and both the sleepers and the ballast are represented by lumped masses. A linear complementarity formulation is directly established using a combination of the wheel-rail normal contact condition and the generalised-α method. This linear complementarity problem is solved using the Lemke algorithm, and the wheel-rail contact force can be obtained. Then the dynamic responses of the vehicle and the track are solved without iteration based on the generalised-α method. The same equations of motion for the vehicle and track are adopted in the different wheel-rail contact situations. This method removes several restrictions, namely time-dependent mass, damping and stiffness matrices of the coupled system, multiple equations of motion for the different contact situations, and the effect of the contact stiffness. Numerical results demonstrate that the proposed method is effective for simulating the vehicle-track interaction including a separation between wheel and rail.
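A linear complementarity problem (LCP) asks for z >= 0 with w = Mz + q >= 0 and z.w = 0, which here encodes the either-or contact condition (either the wheel-rail gap is zero and a contact force acts, or the gap is positive and the force is zero). The paper solves it with the Lemke pivoting algorithm; as a simpler illustrative stand-in, projected Gauss-Seidel works for, e.g., symmetric positive definite M:

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0.  Each sweep solves for z_i from
    its own row and clamps it at zero (the complementarity projection)."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # row residual without z_i
            z[i] = max(0.0, -r / M[i, i])
    return z
```

At the solution, components with z_i > 0 satisfy their equation exactly (contact active) while components with z_i = 0 have w_i >= 0 (separation), mirroring the wheel-rail contact/separation switch.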
Linear perturbation theory for tidal streams and the small-scale CDM power spectrum
Bovy, Jo; Erkal, Denis; Sanders, Jason L.
2017-04-01
Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^6.5 and 10^9 M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations or to fsub(r matter is clumpy on the smallest scales relevant for galaxy formation.
Shi Jun
2015-02-01
Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring, and reconnaissance, especially in mountainous areas. However, limited by platform size, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to blurring of the three-dimensional (3D) images in the linear array direction and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on sparse recovery techniques, in which case the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map and to minimize the mean square error (MSE). Simulation experiments show that, compared with the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods, the variational model is more suitable for DEM enhancement over all kinds of terrain.
A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA
W. Liu
2016-06-01
At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually performed using data from a ground calibration field after capturing the images. The entire process is very complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that, owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical autocollimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On the one hand, the variation of the camera's parameters can be tracked accurately and its attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.
Theoretical explanation of present mirror experiments and linear stability of larger scaled machines
Berk, H.L.; Baldwin, D.E.; Cutler, T.A.; Lodestro, L.L.; Maron, N.; Pearlstein, L.D.; Rognlien, T.D.; Stewart, J.J.; Watson, D.C.
1976-01-01
A quasilinear model for the evolution of the 2XIIB mirror experiment is presented and shown to reproduce the time evolution of the experiment. From quasilinear theory it follows that the energy lifetime is the Spitzer electron drag time for T_e ≲ 0.1 T_i. By computing the stability boundary of the DCLC mode, with warm plasma stabilization, the electron temperature is predicted as a function of radial scale length. In addition, the effect of finite-length corrections to the Alfven cyclotron mode is assessed.
Comments on the comparison of global methods for linear two-point boundary value problems
de Boor, C.; Swartz, B.
1977-01-01
A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of ''condensation of parameters'' can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods: namely, collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials.
Pilipchuk, L. A.; Pilipchuk, A. S.
2015-01-01
In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.
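The kind of sparse, network-structured linear system analyzed above can be assembled and solved with standard sparse tools. The sketch below uses Python with SciPy rather than the authors' Wolfram Mathematica implementation, and the small system and right-hand side are invented for illustration, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small sparse system standing in for the network-structured systems
# discussed in the paper: assemble in COO form, check the matrix rank
# (on a dense copy, since the system here is tiny), then solve with a
# sparse direct solver.
rows = [0, 0, 1, 1, 2, 2, 3]
cols = [0, 1, 1, 2, 2, 3, 3]
vals = [2.0, -1.0, 2.0, -1.0, 2.0, -1.0, 2.0]
A = sp.csr_matrix((vals, (rows, cols)), shape=(4, 4))
b = np.array([1.0, 0.0, 0.0, 1.0])

rank = np.linalg.matrix_rank(A.toarray())  # full rank -> unique solution
x = spla.spsolve(A.tocsc(), b)             # sparse direct solve
```

For genuinely large network systems the rank analysis would be done structurally, as in the paper, rather than on a dense copy.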
A Fast Condensing Method for Solution of Linear-Quadratic Control Problems
Frison, Gianluca; Jørgensen, John Bagterp
2013-01-01
In both Active-Set (AS) and Interior-Point (IP) algorithms for Model Predictive Control (MPC), sub-problems in the form of linear-quadratic (LQ) control problems need to be solved at each iteration. The solution of these sub-problems is usually the main computational effort. In this paper we consider a condensing (or state elimination) method to solve an extended version of the LQ control problem, and we show how to exploit the structure of this problem to both factorize the dense Hessian matrix and solve the system. Furthermore, we present two efficient implementations. The first implementation is formally identical to the Riccati recursion based solver and has a computational complexity that is linear in the control horizon length and cubic in the number of states. The second implementation has a computational complexity that is quadratic in the control horizon length as well…
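The first implementation described above is formally identical to a Riccati recursion. A minimal backward Riccati recursion for the plain (unextended) finite-horizon LQ problem can be sketched as follows; the double-integrator system and weights are illustrative assumptions, not the paper's extended formulation.

```python
import numpy as np

def lqr_backward_riccati(A, B, Q, R, QN, N):
    """Backward Riccati recursion for a finite-horizon LQ problem.

    Minimizes sum_k (x'Qx + u'Ru) + x_N' QN x_N subject to
    x_{k+1} = A x_k + B u_k, returning feedback gains K_k with
    u_k = -K_k x_k.  Cost per stage is cubic in the number of states
    and linear in the horizon length N, matching the complexity of
    the first implementation described in the abstract.
    """
    P = QN
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # stage feedback gain
        P = Q + A.T @ P @ (A - B @ K)         # cost-to-go update
        gains.append(K)
    gains.reverse()  # gains[k] applies at stage k
    return gains

# toy double-integrator example
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); QN = np.eye(2)
gains = lqr_backward_riccati(A, B, Q, R, QN, N=20)
```

With a horizon this long the first-stage gain is close to the infinite-horizon LQR gain, so the closed loop A - B K is stable.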
Pilipchuk, L. A., E-mail: pilipchik@bsu.by [Belarussian State University, 220030 Minsk, 4, Nezavisimosti avenue, Republic of Belarus (Belarus); Pilipchuk, A. S., E-mail: an.pilipchuk@gmail.com [The Natural Resources and Environmental Protection Ministry of the Republic of Belarus, 220004 Minsk, 10 Kollektornaya Street, Republic of Belarus (Belarus)
2015-11-30
In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.
KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method
Westley, G.W.
1975-01-01
1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user.
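KEELE itself is a Fortran library, but the same problem class, a nonlinear objective under linear equality and inequality constraints, can be illustrated with SciPy's SLSQP solver. This sketch substitutes SLSQP for KEELE's variable metric procedure, and the objective and constraints are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the Rosenbrock function subject to one linear equality and
# one linear inequality constraint -- the class of problem KEELE
# addresses, here solved with SciPy's SLSQP rather than KEELE.
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},   # x0 + x1 = 2
    {"type": "ineq", "fun": lambda x: 1.0 - (x[0] - x[1])}, # x0 - x1 <= 1
]
res = minimize(rosenbrock, x0=np.array([0.5, 0.5]),
               method="SLSQP", constraints=constraints)
```

On the constraint line x1 = 2 - x0 the objective is minimized at (1, 1), which also satisfies the inequality, so the solver should land there.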
Sensitivity-based virtual fields for the non-linear virtual fields method
Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice
2017-09-01
The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.
Scaling behavior of ground-state energy cluster expansion for linear polyenes
Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.
Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar
2012-01-01
…and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength-scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent… flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from 60s to 70s whereas knee peak torque (KPT) reduced significantly (P… Gender, thigh mass and age best… predicted KPT (R2=0.60). Gender, forearm mass and age best predicted EPT (R2=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength-scaled equations of EPT and KPT using only gender, segment mass…
Sukhpreet Kaur Sidhu
2014-01-01
Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems in which the coefficients of the constraints are represented by real numbers and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers are pointed out, and to resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing methods, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.
Barari, Amin; Ganjavi, B.; Jeloudar, M. Ghanbari
2010-01-01
Purpose – In the last two decades, with the rapid development of nonlinear science, there has appeared ever-increasing interest of scientists and engineers in analytical techniques for nonlinear problems. This paper considers linear and nonlinear systems that are not only regarded as general… and fluid mechanics. Design/methodology/approach – Two new but powerful analytical methods, namely, He's VIM and HPM, are introduced to solve some boundary value problems in structural engineering and fluid mechanics. Findings – Analytical solutions often fit under classical perturbation methods. However, as with other analytical techniques, certain limitations restrict the wide application of perturbation methods, most important of which is the dependence of these methods on the existence of a small parameter in the equation. Disappointingly, the majority of nonlinear problems have no small parameter at all…
Nurbaiti
2017-03-01
Full Text Available Science and technology have rapidly evolved in many fields of knowledge, including mathematics. Such development can contribute to improvements in the learning process that encourage students and teachers to enhance their abilities and performances. In delivering the material on the linear equation system with two variables (SPLDV), the conventional teaching method where teachers become the center of the learning process is still widely practiced. This method can cause students to get bored and have difficulties understanding the concepts they are learning. Therefore, in order to make the learning of SPLDV easy, an interesting, interactive medium that students and teachers can apply is necessary. This medium is designed using GUI MATLAB and named the students' electronic worksheets (e-LKS). This program is intended to help students find and understand the SPLDV concepts more easily. The program is also expected to improve students' motivation and creativity in learning the material. Based on testing using the System Usability Scale (SUS), the design of this interactive mathematics learning medium for the linear equation system with two variables (SPLDV) receives a grade of B (excellent), meaning that the learning medium is suitable for use by Junior High School students of grade VIII.
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.
2016-05-01
Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures drawn from a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the
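The abstract does not spell out the specific solvers compared, but a common non-iterative baseline for single-pixel unmixing is non-negative least squares followed by abundance normalization. The endmember matrix and mixed pixel below are synthetic, not SigDB data.

```python
import numpy as np
from scipy.optimize import nnls

# Single-pixel linear unmixing sketch: solve y = E a for non-negative
# abundances a with NNLS, then normalize so the abundances sum to one.
rng = np.random.default_rng(0)
E = rng.random((50, 4))            # 50 bands, 4 candidate endmembers
true_a = np.array([0.6, 0.3, 0.1, 0.0])
y = E @ true_a                     # noise-free mixed pixel

a, residual = nnls(E, y)           # non-negative abundance estimates
a = a / a.sum()                    # enforce sum-to-one a posteriori
```

On noise-free data the true abundances are recovered exactly; with sensor noise the residual and abundance errors grow, which is what the paper's sweep over mixture fractions probes.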
Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants
Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun
2011-01-01
Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography for laboratory-scale work but not many for industrial-scale work. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis: gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Despite much outdoor non-tomographic gamma-ray equipment, most gamma-ray tomographic systems have remained indoor equipment. But as gamma tomography has developed, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to 4th-generation geometry, but the main effort has been made to enable instant installation of the system at a real-scale industrial plant. This work is a first attempt to apply 4th-generation industrial gamma tomographic scanning by an experimental method. Individual 0.5-inch NaI detectors were used for gamma-ray detection, configured in a circular shape around the industrial plant. This tomographic scan method can reduce mechanical complexity and requires a much smaller space than a conventional CT. Those properties make it easy to obtain measurement data for a real-scale plant.
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil.
APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)
Monalisha Pattnaik
2014-12-01
Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) where some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends a linear programming based problem to a fuzzy environment. With the problem assumptions, the optimal solution can still be theoretically solved using the two-phase simplex based method in a fuzzy environment. The fuzzy decision variables can be initially generated and then solved and improved sequentially using the fuzzy decision approach by introducing a robust ranking technique. Results and conclusions: The model is illustrated with an application and a post-optimal analysis approach is obtained. The proposed procedure was programmed with MATLAB (R2009a version) software for plotting the four-dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results, and to gain additional managerial insights.
Sergienko, I.V.; Golodnikov, A.N.
1984-01-01
This article applies decomposition methods, which are used to solve continuous linear problems, to integer and partially integer problems. The fall-vector method is used to solve the resulting coordinate problems, and an algorithm for the fall-vector method is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of smaller dimension and to a discrete coordinate problem with simple constraints.
Strength and reversibility of stereotypes for a rotary control with linear scales.
Chan, Alan H S; Chan, W H
2008-02-01
Using real mechanical controls, this experiment studied strength and reversibility of direction-of-motion stereotypes and response times for a rotary control with horizontal and vertical scales. Thirty-eight engineering undergraduates (34 men and 4 women) ages 23 to 47 years (M=29.8, SD=7.7) took part in the experiment voluntarily. The effects of instruction of change of pointer position and control plane on movement compatibility were analyzed with precise quantitative measures of strength and a reversibility index of stereotype. Comparisons of the strength and reversibility values of these two configurations with those of rotary control-circular display, rotary control-digital counter, four-way lever-circular display, and four-way lever-digital counter were made. The results of this study provided significant implications for the industrial design of control panels for improved human performance.
Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis
Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong
2000-09-01
This report is for the implementation of the triangle-based polynomial expansion nodal (TPEN) method into the MASTER code in conjunction with the coarse mesh finite difference (CMFD) framework for hexagonal core design and analysis. The TPEN method is a variation of the higher order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in hexagonal-z geometry. In contrast with the HOPEN method, only two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately and is determined by the nodal expansion method (NEM) for a hexagonal node. For consistency with the node geometry of the MASTER code, which is based on hexagons, the TPEN solver is coded to solve one hexagonal node, composed of 6 triangular nodes, directly with a Gauss elimination scheme. To solve the CMFD linear system efficiently, the stabilized bi-conjugate gradient (BiCG) algorithm and the Wielandt eigenvalue shift method are adopted. For the construction of an efficient preconditioner for the BiCG algorithm, the incomplete LU (ILU) factorization scheme, which has been widely used in two-dimensional problems, is used. To apply the ILU factorization scheme to a three-dimensional problem, a symmetric Gauss-Seidel factorization scheme is used. In order to examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., realistic VVER-1000 and VVER-440 rod ejection benchmark problems, were solved and compared with respective references. The results of the eigenvalue benchmark problems indicate that the non-linear TPEN method is very accurate, showing less than 15 pcm of eigenvalue error and 1% of maximum power error, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on a 733 MHz Pentium-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minutes of
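The stabilized-BiCG-plus-ILU combination used for the CMFD linear system is available off the shelf in SciPy. In the sketch below a 2-D Poisson matrix stands in for the actual CMFD operator, which is an assumption for illustration only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

# Stabilized BiCG with an incomplete-LU preconditioner on a 2-D
# Poisson matrix (a stand-in for the CMFD operator in the report).
n = 20
main = 4.0 * np.ones(n * n)
off = -1.0 * np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0.0   # no coupling across grid rows
A = sp.diags([main, off, off, -np.ones(n * n - n), -np.ones(n * n - n)],
             [0, 1, -1, n, -n], format="csc")
b = np.ones(n * n)

ilu = spilu(A)                            # incomplete LU factorization
M = LinearOperator(A.shape, ilu.solve)    # preconditioner as an operator
x, info = bicgstab(A, b, M=M)             # info == 0 means converged
```

The ILU preconditioner typically cuts the BiCGStab iteration count dramatically compared with the unpreconditioned solve, which is the point of the combination adopted in the report.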
Quantifying feedforward control: a linear scaling model for fingertip forces and object weight.
Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath; Raghavan, Preeti
2015-07-01
The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control.
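A log-linear relationship of this kind can be recovered by fitting a straight line to ln(PLFR) against weight. The synthetic data below assume a 5% rise in PLFR per 100 g, an illustrative number rather than the paper's estimate.

```python
import numpy as np

# If PLFR rises by a fixed percentage per weight increment, ln(PLFR)
# is linear in weight; the fitted slope recovers that percentage.
weights = np.arange(100.0, 1100.0, 100.0)     # g, a set of 10 weights
plfr = 12.0 * 1.05 ** (weights / 100.0)       # N/s, synthetic: +5%/100 g
slope, intercept = np.polyfit(weights, np.log(plfr), 1)
pct_per_100g = (np.exp(slope * 100.0) - 1.0) * 100.0  # recovered: ~5%
```

With real (noisy) lift data the same fit yields a per-subject slope, and the paper's finding is that this slope is shared across healthy subjects while the intercept varies with muscular efficiency.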
Test equating, scaling, and linking methods and practices
Kolen, Michael J
2014-01-01
This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals. In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably. Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...
Scale factor measure method without turntable for angular rate gyroscope
Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua
2018-03-01
In this paper, a scale factor test method without a turntable is originally designed for the angular rate gyroscope. A test system which consists of a test device, data acquisition circuit and data processing software based on the Labview platform is designed. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with a measured gyroscope. By shaking the test device around its edge which is parallel to the input axis of the gyroscope, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. When a quartz MEMS gyroscope's scale factor was measured multiple times by this method, the difference was less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
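The core of the method is a ratio estimate: both gyroscopes experience the same shaking motion, so the unknown scale factor follows from the known one times the ratio of their outputs. The sketch below uses a least-squares ratio over a simulated shake; all numbers are invented for illustration.

```python
import numpy as np

# Co-mounted standard and measured gyroscopes see the same angular
# rate, so SF_measured = SF_standard * (ratio of their outputs).
sf_standard = 0.035                  # V/(deg/s), known scale factor
rng = np.random.default_rng(1)
rate = 20.0 * np.sin(np.linspace(0, 4 * np.pi, 200))   # shaking motion
v_standard = sf_standard * rate + rng.normal(0, 1e-4, rate.size)
v_measured = 0.050 * rate + rng.normal(0, 1e-4, rate.size)  # true SF 0.050

# least-squares ratio estimate avoids dividing by near-zero samples
sf_measured = sf_standard * (v_standard @ v_measured) / (v_standard @ v_standard)
```

Because the ratio is formed over the whole record, the estimate is insensitive to the exact shaking profile, which is what lets the method dispense with a calibrated turntable.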
Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin
2015-02-09
The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not obtain all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method is the combination of the consensus strategy and the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The results of the C-SPA-MLR method showed a better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic techniques.
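The consensus step itself is simple: member MLR models built on different variable subsets vote by averaging their predictions. The sketch below selects subsets at random rather than by SPA, so it illustrates only the consensus part of C-SPA-MLR; the data are synthetic.

```python
import numpy as np

# Consensus-MLR sketch: fit member models on different variable
# subsets (random here, SPA-chosen in the paper) and average their
# test-set predictions into a single consensus prediction.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 30))                  # 60 samples, 30 "wavelengths"
beta_true = np.zeros(30); beta_true[[3, 7, 19]] = [1.0, -2.0, 0.5]
y = X @ beta_true + rng.normal(0, 0.05, 60)
X_train, y_train = X[:40], y[:40]
X_test, y_test = X[40:], y[40:]

preds = []
for _ in range(10):                            # 10 member models
    subset = rng.choice(30, size=8, replace=False)
    A = np.column_stack([np.ones(40), X_train[:, subset]])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    preds.append(np.column_stack([np.ones(20), X_test[:, subset]]) @ coef)
consensus = np.mean(preds, axis=0)             # consensus prediction
```

In the real method each member's subset is chosen by SPA from the variables remaining after previous iterations, so the members are complementary rather than random, which is what gives C-SPA-MLR its edge over a single SPA-MLR model.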
Multiple time-scale methods in particle simulations of plasmas
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.
A review of model predictive control: moving from linear to nonlinear design methods
Nandong, J.; Samyudia, Y.; Tade, M.O.
2006-01-01
Linear model predictive control (LMPC) is now considered an industrial control standard in the process industry. Its extension to nonlinear cases, however, has not yet gained wide acceptance for many reasons, e.g. the excessively heavy computational load and effort, which prevent practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review of alternative ways of extending linear MPC to the nonlinear case. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing the potential applications of multi-scale process models within NMPC.
V. S. Zarubin
2015-01-01
Full Text Available The rational use of composites as structural materials that bear thermal and mechanical loads is to a large extent determined by their thermoelastic properties. From the presented review of works devoted to the analysis of the thermoelastic characteristics of composites, it follows that the problem of estimating these characteristics is important. Among the thermoelastic properties of a composite, its temperature coefficient of linear expansion occupies an important place. Along with fiber composites, dispersion-hardened composites are widely used in engineering; in these, the inclusions are particles of high-strength and high-modulus materials, including nanostructured elements. Typically, the dispersed particles have similar dimensions in all directions, which allows the particle shape to be taken, in a first approximation, as spherical. In this article, for a composite with isotropic spherical inclusions of a plurality of different materials, the self-consistent method is used to derive design formulas relating the temperature coefficient of linear expansion to the volume concentration of inclusions and their thermoelastic characteristics, as well as to the thermoelastic properties of the matrix of the composite. A feature of the method is that it self-consistently accounts for the thermomechanical interaction of a single inclusion or matrix particle with a homogeneous isotropic medium having the desired temperature coefficient of linear expansion. Averaging over the volume of the composite the perturbations of strain and stress arising from this interaction in the inclusions and the matrix particles makes it possible to obtain such calculation formulas. For the validation of the calculated temperature coefficient of linear expansion of composites of this type, two-sided estimates were used, based on the dual variational formulation of the linear thermoelasticity problem in an inhomogeneous solid, containing two alternative functionals (such as Lagrange and Castigliano
Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh
2012-03-22
The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performances of SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
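The summary measure approach reduces to two steps that are easy to sketch: compute a least-squares slope per subject, then compare the slope summaries between groups (which tests the group-by-time interaction). The simulated data and effect sizes below are illustrative, not from the study.

```python
import numpy as np
from scipy import stats

# SMA sketch: each subject's repeated measurements are summarized by
# a least-squares slope over time; group differences in trend are then
# tested with a two-sample t-test on the slopes.
rng = np.random.default_rng(3)
t = np.arange(8.0)                              # 8 time points

def subject_slopes(n, trend):
    """Simulate n subjects with a linear trend plus noise; return slopes."""
    data = trend * t + rng.normal(0, 1.0, (n, t.size))
    return np.array([np.polyfit(t, row, 1)[0] for row in data])

slopes_a = subject_slopes(15, trend=0.2)        # group A: shallow trend
slopes_b = subject_slopes(15, trend=1.0)        # group B: steep trend
t_stat, p_value = stats.ttest_ind(slopes_a, slopes_b)
```

Testing the time effect (slopes differ from zero) or the group effect (per-subject means differ) follows the same pattern with a different summary measure, as the guidelines in the paper describe.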
Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay
2014-05-01
In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes, and send the data to a central server either via the GSM network or via satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data are harvested in batches determined by a one-hour time frame. The program uses linear interpolation to visually represent a near real-time rainfall map; the algorithm allows very fast processing, which is essential in near real-time systems, and precision improves as more sensors are installed. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the
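A minimal sketch of gauge-to-grid linear interpolation of the kind described above (station coordinates and readings are invented for illustration; the actual NOAH implementation is not shown here):

```python
import numpy as np
from scipy.interpolate import griddata

# Gauge locations (lon, lat) and their latest hourly rainfall totals (mm).
# Coordinates and values here are made up for illustration.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain_mm = np.array([2.0, 8.0, 4.0, 10.0])

# Regular grid covering the area of interest.
lon, lat = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))

# Piecewise-linear interpolation over the Delaunay triangulation of the
# stations; grid points outside their convex hull are left as NaN.
field = griddata(stations, rain_mm, (lon, lat), method="linear")
```

Because each grid value is a linear combination of the three surrounding stations, the whole field can be recomputed every harvest cycle at negligible cost, which is what makes the scheme suitable for near real-time display.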
Kang, Bongmun; Yoon, Ho-Sung
2015-02-01
Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algae productivity and in compositional analysis, especially of the total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without a pretreatment. However, these methods have difficulty measuring the total lipid content of wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C for Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the HE value and the total lipid content of the tested microorganism showed a linear relationship of 98%. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
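The conversion from heat evolved to lipid content described above is a straight-line calibration; a minimal sketch (all numbers invented for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical calibration pairs: heat evolved in 310-351 °C ('HE', J/g)
# versus total lipid content (% dry weight) from a reference method.
he = np.array([12.1, 15.4, 18.9, 22.3, 25.8, 29.0])
lipid = np.array([8.2, 10.9, 13.8, 16.5, 19.4, 21.9])

# Ordinary least-squares calibration line: lipid = a*HE + b.
a, b = np.polyfit(he, lipid, 1)
r = np.corrcoef(he, lipid)[0, 1]   # strength of the linear relationship

def lipid_from_he(he_value):
    """Convert a measured HE value to an estimated total lipid content."""
    return a * he_value + b
```

Once the line is fitted on reference samples, only the thermal measurement is needed for routine monitoring, which is the attraction of the method for wet biomass from large-scale cultivation.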
The Language Teaching Methods Scale: Reliability and Validity Studies
Okmen, Burcu; Kilic, Abdurrahman
2016-01-01
The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
Correlates of the Rosenberg Self-Esteem Scale Method Effects
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
Naturalness in low-scale SUSY models and "non-linear" MSSM
Antoniadis, I; Ghilencea, D M
2014-01-01
In MSSM models with various boundary conditions for the soft breaking terms (m_soft) and for a Higgs mass of 126 GeV, there is a (minimal) electroweak fine-tuning Δ ≈ 800 to 1000 for the constrained MSSM and Δ ≈ 500 for non-universal gaugino masses. These values, often regarded as unacceptably large, may indicate a problem of supersymmetry (SUSY) breaking, rather than of SUSY itself. A minimal modification of these models is to lower the SUSY breaking scale in the hidden sector (√f) to a few TeV, which we show restores naturalness to more acceptable levels, Δ ≈ 80, for the most conservative case of low tan β and ultraviolet boundary conditions as in the constrained MSSM. This is done without introducing additional fields in the visible sector, unlike other models that attempt to reduce Δ. In the present case Δ is reduced due to additional (effective) quartic Higgs couplings proportional to the ratio m_soft/√f of the visible to the hidden sector SUSY breaking...
The large-scale gravitational bias from the quasi-linear regime.
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_{p,q} defined by <δ(x_1)^p δ(x_2)^q>_c = C_{p,q} <δ^2>^(p+q-2) <δ(x_1) δ(x_2)>, where δ(x) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large separation approximation in an explicit numerical Monte Carlo integration of the C_{2,1} parameter as a function of |x_1 - x_2|. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
A New Class of Non-Linear, Finite-Volume Methods for Vlasov Simulation
Banks, J.W.; Hittinger, J.A.
2010-01-01
Methods for the numerical discretization of the Vlasov equation should efficiently use the phase space discretization and should introduce only enough numerical dissipation to promote stability and control oscillations. A new high-order, non-linear, finite-volume algorithm for the Vlasov equation that discretely conserves particle number and controls oscillations is presented. The method is fourth-order in space and time in well-resolved regions, but smoothly reduces to a third-order upwind scheme as features become poorly resolved. The new scheme is applied to several standard problems for the Vlasov-Poisson system, and the results are compared with those from other finite-volume approaches, including an artificial viscosity scheme and the Piecewise Parabolic Method. It is shown that the new scheme is able to control oscillations while preserving a higher degree of fidelity of the solution than the other approaches.
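A much simpler relative of such limited finite-volume schemes — second-order flux-limited advection that falls back to first-order upwind at poorly resolved features — can be sketched as follows (this is a generic textbook scheme, not the fourth-order method of the paper):

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema (reverting the flux to first-order
    upwind there), otherwise the smaller-magnitude one-sided difference."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_fv(u, c, dx, dt, steps):
    """Flux-limited finite-volume scheme for u_t + c u_x = 0 (c > 0,
    periodic): an upwind flux plus a minmod-limited Lax-Wendroff
    correction, second order where smooth, upwind at sharp fronts.
    Conservation holds by construction (telescoping flux differences)."""
    u = np.asarray(u, dtype=float).copy()
    nu = c * dt / dx                      # CFL number, must satisfy nu <= 1
    for _ in range(steps):
        du = np.roll(u, -1) - u           # forward differences u_{j+1}-u_j
        corr = minmod(du, np.roll(du, 1))     # limited slope in cell j
        flux = c * u + 0.5 * c * (1.0 - nu) * corr   # flux at face j+1/2
        u -= dt / dx * (flux - np.roll(flux, 1))
    return u
```

Advecting a step profile leaves the total mass unchanged to round-off and creates no new extrema, which is the discrete analogue of the conservation and oscillation control discussed in the abstract.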
Chang Liyun; Ho, S.-Y.; Du, Y.-C.; Lin, C.-M.; Chen Tainsong
2007-01-01
The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for the radiotherapy linear accelerator. In this study, we propose a new and practical method which uses only a digital level, V-film, and general solid phantoms. By taking the star shot only, we can accurately calculate the true gantry angle from the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11 deg. compared with the digital indicator, with a standard deviation within 0.05 deg. This method can also be used for the simulator. In conclusion, the proposed method could be adopted as an annual QA item for the mechanical QA of the accelerator.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution with strong order 1/2 for SLSDDEs. On the one hand, the classical stability theorem for SLSDDEs is given via Lyapunov functions; in this paper, however, the exponential stability in mean square of the exact solution to SLSDDEs is studied using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show, again by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.
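One common variant of the exponential Euler scheme for a scalar semi-linear SDDE can be sketched as follows (the precise scheme and notation of the paper may differ):

```python
import numpy as np

def exp_euler_sdde(a, f, g, tau, history, T, dt, rng=None):
    """One common exponential-Euler variant for the scalar semi-linear SDDE
        dX = (a X(t) + f(X(t - tau))) dt + g(X(t - tau)) dW(t),
    with X(t) = history(t) for t <= 0.  The linear drift is integrated
    exactly via exp(a*dt); the delay and noise terms are frozen per step."""
    rng = rng or np.random.default_rng(0)
    m = int(round(tau / dt))                  # delay measured in steps
    n = int(round(T / dt))
    x = np.empty(n + 1)
    pre = [history(-k * dt) for k in range(m, 0, -1)]   # X on [-tau, 0)
    x[0] = history(0.0)
    ea = np.exp(a * dt)
    phi = (ea - 1.0) / a if a != 0 else dt    # integral of e^{a s} over a step
    for k in range(n):
        xd = pre[k] if k < m else x[k - m]    # delayed state X(t_k - tau)
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = ea * x[k] + phi * f(xd) + ea * g(xd) * dW
    return x
```

With f = g = 0 the scheme reproduces e^(a t) x_0 exactly, which is the point of treating the linear part exactly: the stiff linear term no longer restricts the step size the way it does for the ordinary explicit Euler method.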
Adi, Wisnu Ari; Sukirman, Engkir; Winatapura, Didin S.
2000-01-01
A technique for critical current density (Jc) measurement of bulk high-Tc ceramic superconductors has been developed using linear extrapolation with the four-point-probe method. Direct measurement of the critical current density of bulk HTc ceramics usually damages the contact resistance; in order to reduce this damage factor, we introduce an extrapolation method. The extrapolated data show that the critical current densities Jc for YBCO (123) and BSCCO (2212) at 77 K are 10.85(6) A·cm⁻² and 14.46(6) A·cm⁻², respectively. This technique is easier and simpler, and the current used is low, so it does not damage the contact resistance of the sample. We expect that the method can provide a better solution for bulk superconductor applications. Key words: superconductor, critical temperature, critical current density
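The idea of extrapolating a linear V-I branch back to zero voltage to estimate the critical current, rather than driving the full current through the contacts, can be sketched as follows (all readings and the sample cross-section are invented; the paper's exact procedure may differ):

```python
import numpy as np

# Hypothetical four-point-probe V-I readings (current in A, voltage in uV).
# Above the onset the voltage rises roughly linearly with current, so
# extrapolating that branch back to V = 0 estimates Ic without driving
# large (contact-damaging) currents through the sample.
current = np.array([0.8, 0.9, 1.0, 1.1, 1.2])   # A, linear branch only
voltage = np.array([1.0, 3.1, 5.0, 7.0, 9.1])   # uV

slope, intercept = np.polyfit(current, voltage, 1)
ic = -intercept / slope        # current where the fitted line crosses V = 0
area_cm2 = 0.05                # sample cross-section (assumed value)
jc = ic / area_cm2             # critical current density, A/cm^2
```

Only a few low-dissipation points on the linear branch are needed, which matches the abstract's point that the current flow used is low.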
Standard test method for linear-elastic plane-strain fracture toughness KIc of metallic materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers the determination of fracture toughness (KIc) of metallic materials under predominantly linear-elastic, plane-strain conditions using fatigue precracked specimens having a thickness of 1.6 mm (0.063 in.) or greater subjected to slowly, or in special (elective) cases rapidly, increasing crack-displacement force. Details of test apparatus, specimen configuration, and experimental procedure are given in the Annexes. Note 1—Plane-strain fracture toughness tests of thinner materials that are sufficiently brittle (see 7.1) can be made using other types of specimens (1). There is no standard test method for such thin materials. 1.2 This test method is divided into two parts. The first part gives general recommendations and requirements for KIc testing. The second part consists of Annexes that give specific information on displacement gage and loading fixture design, special requirements for individual specimen configurations, and detailed procedures for fatigue precracking. Additional a...
Kim, Jin Kyu; Kim, Dong Keon
2016-01-01
A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.
Integrated structural analysis tool using the linear matching method part 1 – Software development
Ure, James; Chen, Haofeng; Tipping, David
2014-01-01
A number of direct methods based upon the Linear Matching Method (LMM) framework have been developed to address structural integrity issues for components subjected to cyclic thermal and mechanical load conditions. This paper presents a new integrated structural analysis tool using the LMM framework for the assessment of load carrying capacity, shakedown limit, ratchet limit and steady state cyclic response of structures. First, the development of the LMM for the evaluation of design limits in plasticity is introduced. Second, preliminary considerations for the development of the LMM into a tool which can be used on a regular basis by engineers are discussed. After the re-structuring of the LMM subroutines for multiple central processing unit (CPU) solution, the LMM software tool for the assessment of design limits in plasticity is implemented by developing an Abaqus CAE plug-in with graphical user interfaces. Further demonstration of this new LMM analysis tool including practical application and verification is presented in an accompanying paper. - Highlights: • A new structural analysis tool using the Linear Matching Method (LMM) is developed. • The software tool is able to evaluate the design limits in plasticity. • Able to assess limit load, shakedown, ratchet limit and steady state cyclic response. • Re-structuring of the LMM subroutines for multiple CPU solution is conducted. • The software tool is implemented by developing an Abaqus CAE plug-in with GUI
Deriving the probability of a linear opinion pooling method being superior to a set of alternatives
Bolger, Donnacha; Houlding, Brett
2017-01-01
Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
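The linear opinion pool itself is simply a weighted average of the sources' probability distributions; a minimal sketch (the weights are fixed here for illustration, whereas the Plug-in approach described above learns them over time):

```python
import numpy as np

def linear_pool(opinions, weights):
    """Combine probability distributions by a weighted average (linear
    opinion pool).  `opinions` is (n_sources, n_outcomes); each row and
    `weights` must sum to 1, so the pooled row is again a distribution."""
    opinions = np.asarray(opinions, dtype=float)
    w = np.asarray(weights, dtype=float)
    assert np.allclose(opinions.sum(axis=1), 1.0)
    assert np.isclose(w.sum(), 1.0)
    return w @ opinions

# Three sources' beliefs over the same three outcomes, pooled with
# illustrative weights 0.5 / 0.3 / 0.2.
pooled = linear_pool([[0.7, 0.2, 0.1],
                      [0.5, 0.3, 0.2],
                      [0.2, 0.5, 0.3]], [0.5, 0.3, 0.2])
```

Because the pool is linear, assigning a higher weight to a source moves the combined opinion proportionally toward that source, which is what makes weight-learning schemes such as the Plug-in approach natural in this setting.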
Arbitrary Lagrangian-Eulerian method for non-linear problems of geomechanics
Nazem, M; Carter, J P; Airey, D W
2010-01-01
In many geotechnical problems it is vital to consider the geometrical non-linearity caused by large deformation in order to capture a more realistic model of the true behaviour. The solutions so obtained should then be more accurate and reliable, which should ultimately lead to cheaper and safer design. The Arbitrary Lagrangian-Eulerian (ALE) method originated from fluid mechanics, but has now been well established for solving large deformation problems in geomechanics. This paper provides an overview of the ALE method and its challenges in tackling problems involving non-linearities due to material behaviour, large deformation, changing boundary conditions and time-dependency, including material rate effects and inertia effects in dynamic loading applications. Important aspects of ALE implementation into a finite element framework will also be discussed. This method is then employed to solve some interesting and challenging geotechnical problems such as the dynamic bearing capacity of footings on soft soils, consolidation of a soil layer under a footing, and the modelling of dynamic penetration of objects into soil layers.
Reactor Network Synthesis Using Coupled Genetic Algorithm with the Quasi-linear Programming Method
Soltani, H.; Shafiei, S.; Edraki, J.
2016-01-01
This research is an attempt to develop a new procedure for the synthesis of reactor networks (RNs) using a genetic algorithm (GA) coupled with the quasi-linear programming (LP) method. The GA is used to produce structural configuration, whereas continuous variables are handled using a quasi-LP formulation for finding the best objective function. Quasi-LP consists of LP together with a search loop to find the best reactor conversions (xi), as well as split and recycle ratios (yi). Quasi-LP rep...
Detection methods of pulsed X-rays for transmission tomography with a linear accelerator
Glasser, F.
1988-07-01
Appropriate detection methods are studied for the development of a high-energy tomograph using a linear accelerator for nondestructive testing of bulky objects. The aim is the selection of detectors adapted to a pulsed X-ray source and with good behavior under X-ray radiation of several MeV. The performance of semiconductors (HgI2, Cl-doped CdTe, GaAs, Bi12GeO20) and of a scintillator (Bi4Ge3O12) is examined. A prototype tomograph gave images that demonstrate the suitability of these detectors for the analysis of medium-size equipment such as a concrete drum 60 cm in diameter.
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This hallucination technique operates directly on tensors via tensor-to-vector projection, imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test the approach, and extensive experiments yield high-quality hallucinated color faces. The experimental results clearly demonstrate that photorealistic color face images can be generated by using the SO-MPCA subspace with a linear regression model.
The evaluation of multi-element personal dosemeters using the linear programming method
Kragh, P.; Ambrosi, P.; Boehm, J.; Hilgers, G.
1996-01-01
Multi-element dosemeters are frequently used in individual monitoring. Each element can be regarded as an individual dosemeter with its own individual dose measurement value. In general, the individual dose values of one dosemeter vary according to the exposure conditions, i.e. the energy and angle of incidence of the radiation. The (final) dose measurement value of the personal dosemeter is calculated from the individual dose values by means of an evaluation algorithm. The best possible dose value, i.e. the one with the smallest systematic (type B) uncertainty as the exposure conditions change within the dosemeter's rated range of use, is obtained by the method of linear programming. (author)
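Choosing evaluation weights by linear programming can be illustrated with a toy minimax formulation: pick non-negative element weights so that the worst-case deviation of the evaluated dose from the true dose, over a set of exposure conditions, is minimized (the response values are invented and the authors' exact formulation may differ):

```python
import numpy as np
from scipy.optimize import linprog

# Each row: relative response of the dosemeter's elements under one
# exposure condition (energy/angle), normalised so the true dose is 1.
R = np.array([[1.10, 0.80, 1.00],
              [0.95, 1.05, 1.00],
              [0.70, 1.30, 1.05],
              [1.20, 0.90, 0.95]])
n_cond, n_elem = R.shape

# Variables [w_1..w_n, t]: minimise t subject to |R w - 1| <= t, w >= 0,
# written as the two inequality blocks  R w - t <= 1  and  -R w - t <= -1.
c = np.r_[np.zeros(n_elem), 1.0]
A_ub = np.block([[ R, -np.ones((n_cond, 1))],
                 [-R, -np.ones((n_cond, 1))]])
b_ub = np.r_[np.ones(n_cond), -np.ones(n_cond)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n_elem + 1), method="highs")
w, t = res.x[:n_elem], res.x[n_elem]
```

The optimal `t` is exactly the smallest achievable worst-case (type B) deviation over the listed conditions, which is the sense in which the LP weights are "best possible".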
Faster Simulation Methods for the Non-Stationary Random Vibrations of Non-Linear MDOF Systems
Askar, A.; Köylüoglu, H. U.; Nielsen, Søren R. K.
subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step, which is tantamount to various linearizations of the equations of motion. Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth-order Runge-Kutta scheme with the white noise process approximated by a broad-band Ruiz-Penzien broken-line process...
A review of methods for experimentally determining linear optics in storage rings
Safranek, J.
1995-01-01
In order to maximize the brightness and provide sufficient dynamic aperture in synchrotron radiation storage rings, one must understand and control the linear optics. Control of the horizontal beta function and dispersion is important for minimizing the horizontal beam size. Control of the skew gradient distribution is important for minimizing the vertical size. In this paper, various methods for experimentally determining the optics in a storage ring will be reviewed. Recent work at the National Synchrotron Light Source X-Ray Ring will be presented as well as work done at laboratories worldwide
Renormalized trajectory for non-linear sigma model and improved scaling behaviour
Guha, A.; Okawa, M.; Zuber, J.B.
1984-01-01
We apply the block-spin renormalization group method to the O(N) Heisenberg spin model. Extending a previous work of Hirsch and Shenker, we find the renormalized trajectory for O(infinity) in two dimensions. For finite-N models, we choose a four-parameter action near the large-N renormalized trajectory and demonstrate a remarkable improvement in the approach to the continuum limit by performing Monte Carlo simulations of the O(3) and O(4) models. (orig.)
Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.
Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.
1997-08-01
A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few iterations. It combines the multigrid idea (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy a converged solution with the desired true error is automatically guaranteed. Contrary to the current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is, thus, particularly attractive in complicated multilevel transfer problems where small grid-sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
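The multigrid ingredients mentioned above — Gauss-Seidel smoothing plus a coarse-grid correction — can be illustrated on the much simpler 1D Poisson problem (a generic textbook two-grid cycle, not the authors' RT scheme):

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for -u'' = f, u(0)=u(1)=0.
    Updates u in place and returns it."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid(u, f, h, pre=2, post=2):
    """One two-grid cycle for -u'' = f: pre-smooth, transfer the residual
    to a grid with half the points, solve the coarse problem, interpolate
    the correction back, post-smooth.  Recursing on the coarse solve
    instead of solving it directly gives a multigrid V-cycle, whose cost
    is O(N) in the number of unknowns."""
    u = gauss_seidel(u, f, h, pre)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # residual
    rc = r[::2]                                  # restriction by injection
    nc, hc = len(rc), 2.0 * h
    Ac = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1)
          - np.eye(nc - 2, k=-1)) / hc**2        # coarse 1D Laplacian
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])     # coarse-grid solve
    u = u + np.interp(np.arange(len(u)),
                      np.arange(len(u))[::2], ec)  # linear prolongation
    return gauss_seidel(u, f, h, post)
```

Gauss-Seidel alone stalls on smooth error components; the coarse grid removes exactly those components, which is why the combined cycle converges at a rate independent of the grid spacing.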
Clemens, M.; Weiland, T. [Technische Hochschule Darmstadt (Germany)
1996-12-31
In the field of computational electrodynamics the discretization of Maxwell's equations using the Finite Integration Theory (FIT) yields very large, sparse, complex symmetric linear systems of equations. For this class of complex non-Hermitian systems a number of conjugate gradient-type algorithms are considered. The complex version of the biconjugate gradient (BiCG) method by Jacobs can be extended to a whole class of methods for complex-symmetric systems, SCBiCG(T, n), which require only one matrix-vector multiplication per iteration step. In this class the well-known conjugate orthogonal conjugate gradient (COCG) method for complex-symmetric systems corresponds to the case n = 0. The case n = 1 yields the BiCGCR method, which corresponds to the conjugate residual algorithm for the real-valued case. These methods, in combination with a minimal residual smoothing process, are applied separately to practical 3D electro-quasistatic and eddy-current problems in electrodynamics. The practical performance of the SCBiCG methods is compared with that of other methods such as QMR and TFQMR.
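The COCG iteration mentioned above is ordinary CG with the Hermitian inner product replaced by the unconjugated bilinear form, which is what exploits complex symmetry; a minimal sketch:

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxit=1000):
    """Conjugate Orthogonal Conjugate Gradient for complex *symmetric*
    (A == A.T, not Hermitian) systems: the CG recurrences with the
    unconjugated bilinear form x.T y in place of the Hermitian inner
    product, so only one matrix-vector product is needed per iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                      # unconjugated: r.T r, not r.H r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# A small complex-symmetric test system (shifted to be well conditioned).
rng = np.random.default_rng(1)
n = 40
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (C + C.T) / np.sqrt(n) + 5.0 * np.eye(n)   # symmetric, non-Hermitian
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = cocg(A, b)
```

Note that `r @ r` on complex NumPy arrays is the unconjugated dot product; using `np.vdot` instead would turn this back into a method for Hermitian systems. Like BiCG-type methods, COCG can in principle break down when the bilinear form vanishes, which motivates the smoothed and generalized SCBiCG variants discussed in the abstract.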
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of the biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of their complexity they had never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space, and higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but a large problem can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons
Chunyan Han
2015-01-01
Based on the heteroclinic Shil'nikov theorem and switching control, a kind of multipiecewise linear chaotic system is constructed in this paper. First, two fundamental linear systems are constructed by linearizing a chaotic system at its two equilibrium points. Second, a two-piecewise linear chaotic system satisfying the Shil'nikov theorem is generated by constructing a heteroclinic loop between the equilibrium points of the two fundamental systems via switching control. Finally, another multipiecewise linear chaotic system that also satisfies the Shil'nikov theorem is obtained via alternate translation of the two fundamental linear systems and construction of heteroclinic loops between adjacent equilibria of the multipiecewise linear system. Some basic dynamical characteristics of the constructed systems, including divergence, Lyapunov exponents, and bifurcation diagrams, are analyzed. Computer simulation and circuit design for the proposed chaotic systems demonstrate the effectiveness of this method of chaos anticontrol.
Rosenbaum Peter L
2006-10-01
Background: In this paper we compare the results of an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods: The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results: The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion: No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the structural equation modeling (SEM) results by the log-linear modeling (LLM) provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.
Pain point system scale (PPSS: a method for postoperative pain estimation in retrospective studies
Gkotsi A
2012-11-01
Anastasia Gkotsi,1 Dimosthenis Petsas,2 Vasilios Sakalis,3 Asterios Fotas,3 Argyrios Triantafyllidis,3 Ioannis Vouros,3 Evangelos Saridakis,2 Georgios Salpiggidis,3 Athanasios Papathanasiou3. 1Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2Department of Anesthesiology and 3Department of Urology, Hippokration General Hospital, Thessaloniki, Greece. Purpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment in retrospective studies. Methods: The postoperative pain episodes of three patient groups during the first postoperative day were analyzed. Each pain episode was assessed by a visual analog scale (VAS), numerical rating scale, verbal rating scale, and a new tool, the pain point system scale (PPSS), based on the analgesics administered. The type of analgesic was defined by an artificial neural network system based on the authors' clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications. At each pain episode, each patient was asked to fill in the three pain scales. Bartlett's test and the Kaiser-Meyer-Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman's and Pearson's coefficients assessed the correlation of the PPSS with the known pain scales. Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each dose of a nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the VAS and the PPSS was found to be strong and linear (rho: 0.715, P < 0.001; Pearson: 0.631, P < 0.001). Conclusion: The PPSS correlated well with the known pain scales and could be used safely in the evaluation of postoperative pain in retrospective studies. Keywords: pain scale, retrospective studies, pain point system
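The scoring rule reported in the abstract (one point per dose of paracetamol, three per NSAID or codeine dose, seven per opioid dose) can be sketched directly; the function and category names below are illustrative, not from the paper.

```python
# Hypothetical sketch of the PPSS scoring rule described in the abstract.
PPSS_POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

def ppss_score(doses):
    """Sum PPSS points over a list of (drug_category, n_doses) pairs."""
    return sum(PPSS_POINTS[drug] * n for drug, n in doses)

# Example episode: 2 doses of paracetamol, 1 NSAID dose, 1 opioid dose
print(ppss_score([("paracetamol", 2), ("nsaid", 1), ("opioid", 1)]))  # 12
```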
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
Chung, William
2012-01-01
Highlights: The fuzzy linear regression method is used to develop benchmarking systems for the energy efficiency of commercial buildings; the resulting benchmarking model can be used by public users and captures the fuzzy nature of the input-output data. Abstract: Benchmarking systems based on a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods used in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility, so the previous work may not be adequate to handle such fuzzy input-output data, and a number of fuzzy structures cannot be fully captured by statistical regression analysis. This paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
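One common fuzzy linear regression formulation (Tanaka-style possibilistic regression; the paper's exact model may differ) fits symmetric fuzzy coefficients by a linear program that minimizes total spread while requiring every observation to lie inside the predicted interval. A minimal sketch, with assumed illustrative data:

```python
# Sketch of Tanaka-style possibilistic linear regression via an LP.
# Each coefficient is a symmetric fuzzy number (center a_j, spread c_j >= 0);
# minimize total predicted spread subject to interval coverage of each y_i.
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_regression(x, y):
    X = np.column_stack([np.ones(len(x)), x])   # intercept + regressor
    n, p = X.shape
    absX = np.abs(X)
    # decision variables: [a_1..a_p, c_1..c_p]
    obj = np.concatenate([np.zeros(p), absX.sum(axis=0)])
    # coverage: a.x_i - c.|x_i| <= y_i <= a.x_i + c.|x_i|
    A_ub = np.vstack([np.hstack([-X, -absX]),
                      np.hstack([ X, -absX])])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.x[p:]                 # centers, spreads

x = np.array([1.0, 2.0, 3.0, 4.0])              # assumed toy data
y = np.array([2.1, 3.9, 6.2, 7.8])
a, c = fuzzy_linear_regression(x, y)
```

The returned interval model brackets every observation, which is the sense in which the fuzzy fit "captures" possibilistic data that a least-squares fit would average away.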
An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt
Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir
2018-04-01
A credit card is a convenient alternative to cash or cheques and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship and significant variables between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method, and MLR models with the mean absolute deviation method. After comparing the three methods, the MLR model with the LQD method was found to be the best model, with the lowest mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit card debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for banks using robust methods, so that they can better understand their options and choose the approach best aligned with their goals for inference regarding credit card debt.
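The abstract's LQD estimator is not reproduced here, but the general contrast between ordinary least squares and a robust MLR fit can be sketched with least absolute deviations (a different, standard robust method, used purely for illustration) via iteratively reweighted least squares:

```python
# Illustrative sketch: OLS vs a robust fit (least absolute deviations, LAD)
# on toy data with one outlier. LAD stands in for the paper's LQD method.
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lad_irls(X, y, iters=100, eps=1e-8):
    """LAD fit via iteratively reweighted least squares."""
    beta = ols(X, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

x = np.arange(6.0)
y = 1.0 + 2.0 * x                     # true model: intercept 1, slope 2
y[-1] += 30.0                         # one gross outlier
X = np.column_stack([np.ones_like(x), x])
b_ols, b_lad = ols(X, y), lad_irls(X, y)
```

On this data the OLS slope is pulled far from 2 by the outlier, while the robust fit stays close to the majority of the points, which is the qualitative behaviour the study exploits.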
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Donghai Li; Xuezhi Jiang; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be viewed directly and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and application to engineering is very convenient. The authors of this paper take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is the same as that obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may suit similar control problems in other areas.
Yager’s ranking method for solving the trapezoidal fuzzy number linear programming
Karyati; Wutsqa, D. U.; Insani, N.
2018-03-01
In previous research, the authors studied the fuzzy simplex method for trapezoidal fuzzy number linear programming based on Maleki's ranking function, establishing results on the conditions for an optimum solution of the fuzzy simplex method, the fuzzy Big-M method, the fuzzy two-phase method, and sensitivity analysis. In this research, we study the fuzzy simplex method based on another ranking function, Yager's ranking function, and investigate the optimality conditions. We find that Yager's ranking function does not behave like Maleki's: with Yager's function, the simplex method cannot work as well as with Maleki's function, because the subtraction of two equal fuzzy numbers is not equal to zero. This prevents detection of the optimal fuzzy simplex tableau; as a result, the fuzzy simplex iteration stalls and does not reach the optimum solution.
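The key obstacle the abstract describes, that A - A is not crisp zero for fuzzy numbers, can be shown with standard trapezoidal fuzzy arithmetic and a Yager-style ranking index (for a trapezoid (a, b, c, d), a common form of the index reduces to the average of the four defining points; this simplified form is an assumption here, not taken from the paper):

```python
# Illustrative sketch: trapezoidal fuzzy number (a, b, c, d), a Yager-style
# ranking index, and standard fuzzy subtraction showing A - A != crisp 0.
def yager_rank(t):
    a, b, c, d = t
    return (a + b + c + d) / 4.0

def fuzzy_sub(t, s):
    # (a,b,c,d) - (a',b',c',d') = (a-d', b-c', c-b', d-a')
    return (t[0] - s[3], t[1] - s[2], t[2] - s[1], t[3] - s[0])

A = (1.0, 2.0, 3.0, 4.0)
print(fuzzy_sub(A, A))             # (-3.0, -1.0, 1.0, 3.0), not (0, 0, 0, 0)
print(yager_rank(fuzzy_sub(A, A))) # 0.0: rank is zero though the number is not
```

This is exactly the situation the abstract points to: the difference ranks as zero but is not the crisp zero, so an optimality test based on the entries of the tableau can fail to terminate.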
Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method
Younes Elahi
2014-01-01
We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have investigated the optimal MVC portfolio model, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via the LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, the approach is applied to an investment in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using the three objective functions helps investors manage their portfolios better, minimizing risk and maximizing return. The main goal of this study is to modify current models and simplify them by using the LWSM to obtain better results.
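The weighted-sum idea is simply to collapse the three objectives (variance, CVaR, negative mean return) into one scalar with fixed weights and minimize that. A minimal two-asset sketch with simulated scenario returns (all numbers, including the weights, are assumed for illustration):

```python
# Sketch of linear weighted sum scalarization for a mean-variance-CVaR
# two-asset portfolio; data and weights are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(loc=[0.0010, 0.0005], scale=[0.02, 0.01], size=(1000, 2))

def cvar(returns, alpha=0.95):
    """Mean loss in the worst (1 - alpha) tail of the return distribution."""
    losses = np.sort(-returns)
    return losses[int(alpha * len(losses)):].mean()

def scalarized(w, lam=(0.4, 0.3, 0.3)):
    """LWSM objective: lam0*variance + lam1*CVaR - lam2*mean."""
    port = R @ np.array([w, 1.0 - w])
    return lam[0] * port.var() + lam[1] * cvar(port) - lam[2] * port.mean()

grid = np.linspace(0.0, 1.0, 101)     # weight of asset 1 in the portfolio
best_w = min(grid, key=scalarized)
```

Varying the lam weights traces out different compromise portfolios, which is how the LWSM exposes the trade-off surface to the investor.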
Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method
De Waal, Sybrand A.
1996-07-01
A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
A Dynamic Linear Hashing Method for Redundancy Management in Train Ethernet Consist Network
Xiaobo Nie
2016-01-01
Massive transportation systems like trains are considered critical systems because they use the communication network to control essential subsystems on board. A critical system requires zero recovery time when a failure occurs in the communication network. The newly published IEC 62439-3 defines the high-availability seamless redundancy protocol, which fulfills this requirement and ensures no frame loss in the presence of an error. This paper adopts this protocol for the train Ethernet consist network. The challenge is the management of the circulating frames, which must cope with real-time processing requirements, fast switching times, high throughput, and deterministic behavior. The main contribution of this paper is its in-depth analysis of the network parameters imposed by applying the protocol to the train control and monitoring system (TCMS), and a redundant circulating frame discarding method based on dynamic linear hashing, chosen as the fastest approach to resolving these issues.
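The core of any duplicate-discard scheme for seamlessly duplicated frames is a hash lookup keyed on (source, sequence number): the first copy passes, the redundant copy is dropped. A minimal sketch (Python's dict, itself a dynamically resized hash table, stands in for the paper's dynamic linear hashing structure; aging out of old entries is omitted):

```python
# Minimal sketch of redundant-frame discarding for HSR-style duplicated
# frames; a dict stands in for the paper's dynamic linear hashing table.
class DuplicateDiscard:
    def __init__(self):
        self.seen = set()

    def accept(self, src, seq):
        """True if this (src, seq) frame is new; False if a duplicate."""
        key = (src, seq)
        if key in self.seen:
            return False               # redundant copy: discard
        self.seen.add(key)
        return True                    # first copy: forward

dd = DuplicateDiscard()
print(dd.accept("00:aa", 1))  # True  (first copy, forwarded)
print(dd.accept("00:aa", 1))  # False (redundant copy, discarded)
print(dd.accept("00:aa", 2))  # True
```

In a real switch the table must also expire entries as sequence numbers wrap, which is where the constant-time growth behaviour of dynamic linear hashing matters.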
Application of the method of continued fractions for electron scattering by linear molecules
Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF
1995-01-01
The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for an electron scattered by the hydrogen molecule and the hydrogen molecular ion, as well as by the polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly to the correct values, even for a strongly polar molecule such as LiH, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also photoionization of molecules.
A new formulation of the linear sampling method: spatial resolution and post-processing
Piana, M; Aramini, R; Brignone, M; Coyle, J
2008-01-01
A new formulation of the linear sampling method is described, which requires the regularized solution of a single functional equation set in a direct sum of L2 spaces. This new approach presents the following notable advantages: it is computationally more effective than the traditional implementation, since time-consuming samplings of the Tikhonov minimum problem and of the generalized discrepancy equation are avoided; it allows a quantitative estimate of the spatial resolution achievable by the method; and it facilitates a post-processing procedure for the optimal selection of the scatterer profile by means of edge detection techniques. The formulation is described in a two-dimensional framework and in the case of obstacle scattering, although generalizations to three dimensions and to penetrable inhomogeneities are straightforward.
A novel method to design sparse linear arrays for ultrasonic phased array.
Yang, Ping; Chen, Bin; Shi, Ke-Ren
2006-12-22
In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best, or at least a better, configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm, but found that the resulting arrays had poor performance and poor consistency. A method based on the minimum redundancy linear array was therefore adopted: some elements are first fixed by the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm then optimizes the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the approach.
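The minimum-redundancy property the design starts from can be checked directly: a minimum-redundancy linear array realizes every inter-element spacing up to its aperture with as few elements as possible. A sketch using the classic 4-element arrangement {0, 1, 4, 6} (a standard textbook example, not necessarily the configuration used in the paper):

```python
# Sketch of the minimum-redundancy idea behind the sparse-array design:
# positions {0, 1, 4, 6} realize every spacing 1..6 with only 4 elements.
from itertools import combinations

def spacings(positions):
    """Set of all pairwise inter-element spacings."""
    return {abs(a - b) for a, b in combinations(positions, 2)}

mra = [0, 1, 4, 6]
print(sorted(spacings(mra)))  # [1, 2, 3, 4, 5, 6]: full co-array coverage
```

Fixing such a skeleton guarantees the co-array (and hence the spatial resolution) before the genetic algorithm places the remaining elements.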