A new adaptive GMRES algorithm for achieving high accuracy
Energy Technology Data Exchange (ETDEWEB)
Sosonkina, M.; Watson, L.T.; Kapania, R.K. [Virginia Polytechnic Inst., Blacksburg, VA (United States)]; Walker, H.F. [Utah State Univ., Logan, UT (United States)]
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k) is proposed here that tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With Fortran 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
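The increase-only adaptation described in this abstract can be sketched with SciPy's restarted GMRES. The test matrix, the doubling rule, and the tolerance below are illustrative assumptions, not the authors' actual tuning criteria:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

def adaptive_gmres(A, b, k0=5, k_max=80, sweeps_per_k=3, target=1e-4):
    """Increase the restart value k until GMRES(k) meets the residual target."""
    x, k = np.zeros_like(b), k0
    while k <= k_max:
        # run a few restart cycles at the current k, continuing from x
        x, _ = gmres(A, b, x0=x, restart=k, maxiter=sweeps_per_k)
        res = np.linalg.norm(b - A @ x)
        if res <= target * np.linalg.norm(b):
            return x, k, res
        k *= 2  # increase-only adaptation: enlarge the Krylov subspace
    return x, k, res

n = 200
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)
x, k_final, res = adaptive_gmres(A, b)
print(k_final, res)
```

Dynamic reallocation of the Krylov basis when k grows is exactly the storage issue the abstract notes Fortran 90 pointers make manageable.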
A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition
De Sterck, Hans
2011-01-01
A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear g...
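The stand-alone ALS step used in the first stage above can be sketched in NumPy for a three-way tensor. The shapes, rank, and random test data are illustrative assumptions, and the nonlinear GMRES acceleration stage is not reproduced here:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product of B (J x R) and C (K x R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def als_sweep(T, A, B, C):
    """One ALS sweep for the CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]:
    each factor matrix is updated by an exact linear least-squares solve."""
    I, J, K = T.shape
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
    B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
    C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

rng = np.random.default_rng(0)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)        # exact rank-2 tensor
A, B, C = (F + 0.1 * rng.standard_normal(F.shape) for F in (A0, B0, C0))
err0 = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
for _ in range(10):
    A, B, C = als_sweep(T, A, B, C)
err1 = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
print(err0, err1)   # the Frobenius-norm misfit is non-increasing under ALS
```

Because each factor update is an exact least-squares solve, the misfit cannot increase; the paper's contribution is accelerating this often slowly converging iteration.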
Some observations on weighted GMRES
Güttel, Stefan; Pestana, Jennifer
2014-01-10
We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present a new alternative implementation of the weighted Arnoldi algorithm which under known circumstances will be favourable in terms of computational complexity. These implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used. © 2014 Springer Science+Business Media New York.
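The alternative-implementation idea can be illustrated by the standard equivalence between GMRES in a weighted inner product and plain GMRES on a diagonally scaled system. The matrix and weights below are illustrative assumptions, not the paper's model problems:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def weighted_gmres(A, b, w, **kw):
    """GMRES in the w-weighted inner product (w > 0), implemented via the
    equivalent scaled system D^(1/2) A D^(-1/2) y = D^(1/2) b, x = D^(-1/2) y."""
    s = np.sqrt(w)
    n = b.size
    As = LinearOperator((n, n), matvec=lambda v: s * (A @ (v / s)), dtype=float)
    y, info = gmres(As, s * b, **kw)
    return y / s, info

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) + 0.5 / np.sqrt(n) * rng.standard_normal((n, n))
b = rng.standard_normal(n)
w = rng.uniform(0.5, 2.0, n)            # positive weights
x, info = weighted_gmres(A, b, w)
print(info, np.linalg.norm(b - A @ x))
```

Minimizing the residual in the weighted norm on the original system is the same as minimizing the 2-norm residual on the scaled system, which is why a diagonal scaling suffices.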
Energy Technology Data Exchange (ETDEWEB)
Kelley, C.T.; Xue, Z.Q. [North Carolina State Univ., Raleigh, NC (United States)]
1994-12-31
Many discretizations of integral equations and compact fixed point problems are collectively compact and strongly convergent in spaces of continuous functions. These properties not only lead to stable and convergent approximations but also can be used in the construction of fast multilevel algorithms. Recently the GMRES algorithm has become a standard coarse mesh solver. The purpose of this paper is to show how the special properties of integral operators and their approximations are reflected in the performance of the GMRES iteration and how these properties can be used to strengthen the norm in which convergence takes place. The authors illustrate these ideas with composite Gauss rules for integral equations on the unit interval.
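A minimal illustration of GMRES applied to a discretized second-kind integral equation of the type discussed above. The kernel, the midpoint quadrature (standing in for the composite Gauss rules), and the manufactured solution are illustrative assumptions:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Nystrom discretization of the second-kind equation on [0, 1]:
#   u(x) - \int_0^1 x*t u(t) dt = f(x)
n = 64
x = (np.arange(n) + 0.5) / n        # midpoint quadrature nodes
w = np.full(n, 1.0 / n)             # quadrature weights
K = np.outer(x, x) * w              # K[i, j] = k(x_i, t_j) * w_j
A = np.eye(n) - K                   # identity plus compact-operator part
f = 1.0 - x / 2.0                   # manufactured so that u(x) = 1 exactly
u, info = gmres(A, f)
print(info, np.max(np.abs(u - 1.0)))
```

Because the discretized operator is the identity plus a low-rank perturbation, GMRES converges in very few iterations, which is the behavior the paper explains for compact operators.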
Application of preconditioned GMRES to the numerical solution of the neutron transport equation
International Nuclear Information System (INIS)
Patton, B.W.; Holloway, J.P.
2002-01-01
The generalized minimal residual (GMRES) method with right preconditioning is examined as an alternative to both standard and accelerated transport sweeps for the iterative solution of the diamond-differenced discrete ordinates neutron transport equation. Incomplete factorization (ILU) type preconditioners are used to determine their effectiveness in accelerating GMRES for this application. ILU(τ), which requires the specification of a dropping criterion τ, proves to be a good choice for the types of problems examined in this paper. The combination of ILU(τ) and GMRES is compared with both DSA and unaccelerated transport sweeps for several model problems. It is found that the computational workload of the ILU(τ)-GMRES combination scales nonlinearly with the number of energy groups and the quadrature order, making this technique most effective for problems with a small number of groups and discrete ordinates. However, the cost of preconditioner construction can be amortized over several calculations with different source and/or boundary values. Preconditioners built upon standard transport sweep algorithms are also evaluated for their effectiveness in accelerating the convergence of GMRES. These preconditioners show better scaling with such problem parameters as the scattering ratio, the number of discrete ordinates, and the number of spatial meshes. These sweep-based preconditioners can also be cast in a matrix-free form that greatly reduces storage requirements.
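The ILU(τ)-GMRES pairing can be sketched with SciPy, where `spilu`'s `drop_tol` plays the role of the dropping criterion τ. The 1-D convection-diffusion matrix is an illustrative stand-in for a transport system:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 400
# nonsymmetric tridiagonal matrix as a stand-in for a transport-like system
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# incomplete LU with a drop tolerance: entries below the threshold are
# dropped during factorization, analogous to ILU(tau)
ilu = spilu(A, drop_tol=1e-3)
M = LinearOperator((n, n), matvec=ilu.solve, dtype=float)

x, info = gmres(A, b, M=M, restart=30, maxiter=20)
print(info, np.linalg.norm(b - A @ x))
```

As the abstract notes, the factorization cost is paid once; `ilu` can then precondition repeated solves with different right-hand sides.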
On Investigating GMRES Convergence using Unitary Matrices
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.; Sadok, H.; Strakoš, Z.
2014-01-01
Roč. 450, 1 June (2014), s. 83-107 ISSN 0024-3795 Grant - others:GA AV ČR(CZ) M100301201; GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : GMRES convergence * unitary matrices * unitary spectra * normal matrices * Krylov residual subspace * Schur parameters Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014
A New GMRES(m) Method for Markov Chains
Directory of Open Access Journals (Sweden)
Bing-Yuan Pu
2013-01-01
This paper presents a class of new accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine GMRES and a vector extrapolation method into a more efficient one that improves the convergence rate on Markov chain problems. Numerical experiments demonstrate the efficiency of the new algorithm on several typical Markov chain problems.
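For context, the underlying linear-algebra formulation: the stationary vector solves a singular system, which GMRES can handle once one equation is replaced by the normalization condition. The 3-state chain is an illustrative assumption, and the paper's extrapolation acceleration is not reproduced here:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# small irreducible chain with row-stochastic transition matrix P
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# stationary vector: pi P = pi, sum(pi) = 1; (P^T - I) is singular, so one
# equation is replaced by the normalization row of ones
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi, info = gmres(A, b)
print(info, pi, np.linalg.norm(pi @ P - pi))
```

The hybrid method in the paper accelerates exactly this kind of solve when restarted GMRES alone converges slowly.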
Iterative methods for solving Ax=b, GMRES/FOM versus QMR/BiCG
Energy Technology Data Exchange (ETDEWEB)
Cullum, J. [IBM Research Division, Yorktown Heights, NY (United States)]
1996-12-31
We study the convergence of GMRES/FOM and QMR/BiCG methods for solving nonsymmetric Ax=b. We prove that given the results of a BiCG computation on Ax=b, we can obtain a matrix B with the same eigenvalues as A and a vector c such that the residual norms generated by a FOM computation on Bx=c are identical to those generated by the BiCG computations. Using a unitary equivalence for each of these methods, we obtain test problems where we can easily vary certain spectral properties of the matrices. We use these test problems to study the effects of nonnormality on the convergence of GMRES and QMR, to study the effects of eigenvalue outliers on the convergence of QMR, and to compare the convergence of restarted GMRES, QMR, and BiCGSTAB across a family of normal and nonnormal problems. Our GMRES tests on nonnormal test matrices indicate that nonnormality can have unexpected effects upon the residual norm convergence, giving misleading indications of superior convergence over QMR when the error norms for GMRES are not significantly different from those for QMR. Our QMR tests indicate that the convergence of the QMR residual and error norms is influenced predominantly by small and large eigenvalue outliers and by the character, real, complex, or nearly real, of the outliers and the other eigenvalues. In our comparison tests QMR outperformed GMRES(10) and GMRES(20) on both the normal and nonnormal test matrices.
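Residual histories of the kind compared in this study can be collected with SciPy callbacks. The tridiagonal test matrix is an illustrative assumption, not the paper's unitary-equivalence construction:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, bicgstab

n = 300
# nonsymmetric (hence nonnormal) tridiagonal test matrix
A = diags([-1.5, 3.0, -0.5], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

hist_g, hist_b = [], []
# restarted GMRES: callback receives the preconditioned residual norm
xg, _ = gmres(A, b, restart=10, callback=lambda rn: hist_g.append(rn),
              callback_type='pr_norm')
# BiCGSTAB: callback receives the current iterate
xb, _ = bicgstab(A, b, callback=lambda xk: hist_b.append(
    np.linalg.norm(b - A @ xk)))
print(len(hist_g), len(hist_b))
print(np.linalg.norm(b - A @ xg), np.linalg.norm(b - A @ xb))
```

As the paper cautions, comparing residual norms alone can mislead on nonnormal problems; error norms should be inspected as well when the true solution is available.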
Right-Hand Side Dependent Bounds for GMRES Applied to Ill-Posed Problems
Pestana, Jennifer
2014-01-01
© IFIP International Federation for Information Processing 2014. In this paper we apply simple GMRES bounds to the nearly singular systems that arise in ill-posed problems. Our bounds depend on the eigenvalues of the coefficient matrix, the right-hand side vector and the nonnormality of the system. The bounds show that GMRES residuals initially decrease, as residual components associated with large eigenvalues are reduced, after which semi-convergence can be expected because of the effects of small eigenvalues.
A block variant of the GMRES method on massively parallel processors
Energy Technology Data Exchange (ETDEWEB)
Li, Guangye [Cray Research, Inc., Eagan, MN (United States)]
1996-12-31
This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. The algorithm generates a transformed Hessenberg matrix using only block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted by BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache-based single-CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs due to both efficient block operations and efficient block data communications. Our numerical results also show that, in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.
Any Admissible Harmonic Ritz Value Set is Possible for GMRES
Czech Academy of Sciences Publication Activity Database
Du, K.; Duintjer Tebbens, Jurjen; Meurant, G.
2017-01-01
Roč. 47, September 18 (2017), s. 37-56 ISSN 1068-9613 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : Ritz values * harmonic Ritz values * GMRES convergence * prescribed residual norms * FOM convergence Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.925, year: 2016 http://etna.mcs.kent.edu/volumes/2011-2020/vol47/abstract.php?vol=47&pages=37-56
Properties of Worst-Case GMRES
Czech Academy of Sciences Publication Activity Database
Faber, V.; Liesen, J.; Tichý, Petr
2013-01-01
Roč. 34, č. 4 (2013), s. 1500-1519 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Grant - others:GA AV ČR(CZ) M10041090 Institutional support: RVO:67985807 Keywords : GMRES method * worst-case convergence * ideal GMRES * matrix approximation problems * minmax Subject RIV: BA - General Mathematics Impact factor: 1.806, year: 2013
Minimal residual method stronger than polynomial preconditioning
Energy Technology Data Exchange (ETDEWEB)
Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others]
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
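The two approaches being compared can be juxtaposed in a few lines: GMRES alone versus GMRES preconditioned with a fixed polynomial in A (here a truncated Neumann series). The matrix, the scaling that makes the series converge, and the polynomial degree are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 200
# scaled so that ||I - A|| < 1 and the Neumann series for A^-1 converges
A = diags([-0.2, 1.0, -0.2], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

def neumann(v, degree=5):
    """Apply p(A) v, with p the degree-`degree` Neumann polynomial for A^-1."""
    y = v.copy()
    term = v.copy()
    for _ in range(degree):
        term = term - A @ term      # term <- (I - A) term
        y = y + term                # y = sum_{i<=deg} (I - A)^i v
    return y

M = LinearOperator((n, n), matvec=neumann, dtype=float)

counts = {}
for name, prec in (('plain', None), ('poly', M)):
    k = [0]
    x, info = gmres(A, b, M=prec, restart=20,
                    callback=lambda rn: k.__setitem__(0, k[0] + 1),
                    callback_type='pr_norm')
    counts[name] = k[0]
print(counts)   # iteration counts with and without polynomial preconditioning
```

On this well-behaved matrix both converge; the paper's point is that such parity cannot be taken for granted, since matrices exist where restarted GMRES converges but polynomial preconditioning of corresponding degree stalls.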
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.
2004-01-01
The method of characteristics is well known for its slow convergence; consequently, as is often done for SN methods, the generalized minimal residual approach (GMRES) has been investigated for its practical implementation and its high reliability. GMRES is one of the most effective Krylov iterative methods for solving large linear systems. Moreover, the system has been left-preconditioned with the Algebraic Collapsing Acceleration (ACA), a variant of the Diffusion Synthetic Acceleration (DSA) based on I. Suslov's earlier work. This paper presents the first numerical results for these methods in 2D geometries with material discontinuities. Indeed, previous investigations have shown a degraded effectiveness of diffusion synthetic accelerations for this kind of geometry. Results are presented for 9 x 9 Cartesian assemblies in terms of the speed of convergence of the inner (fixed-source) iterations of the method of characteristics, and show a significant improvement in the convergence rate. (authors)
Effectiveness of various transport synthetic acceleration methods with and without GMRES
International Nuclear Information System (INIS)
Chang, J.H.; Adams, M.L.
2005-01-01
We explore the effectiveness of three types of transport synthetic acceleration (TSA) methods as stand-alone methods and as preconditioners within the GMRES Krylov solver. The three types are β-TSA, 'stretched' TSA, and 'stretched and filtered' (SF) TSA. We analyzed the effectiveness of these algorithms using Fourier mode analysis of model two-dimensional problems with periodic boundary conditions, including problems with alternating layers of different materials. The analyses revealed that both β-TSA and stretched TSA can diverge for fairly weak heterogeneities. Performance of SF TSA, even with the optimum filtering parameter, degrades with heterogeneity. However, with GMRES, all TSA methods are convergent. SF TSA with the optimum filtering parameter was the most effective method. Numerical results support our Fourier mode analysis. (authors)
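The "stationary method inside GMRES" pattern used here generalizes beyond transport: any convergent sweep can be wrapped as a linear-operator preconditioner. Below, a few Jacobi sweeps stand in for a transport sweep; the matrix and sweep count are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 300
A = diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
d = A.diagonal()

def jacobi_sweeps(r, sweeps=3):
    """Approximate A^-1 r with a few Jacobi iterations started from zero.
    Starting from zero keeps the map r -> x linear, as a preconditioner
    must be; this plays the role of a (transport) sweep."""
    x = np.zeros_like(r)
    for _ in range(sweeps):
        x = x + (r - A @ x) / d
    return x

M = LinearOperator((n, n), matvec=jacobi_sweeps, dtype=float)
b = np.ones(n)
x, info = gmres(A, b, M=M, restart=20, maxiter=50)
print(info, np.linalg.norm(b - A @ x))
```

This mirrors the paper's observation: a sweep that may diverge as a stand-alone iteration can still serve as an effective, always-convergent preconditioner inside GMRES.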
Prescribing the behavior of early terminating GMRES and Arnoldi iterations
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.
2014-01-01
Roč. 65, č. 1 (2014), s. 69-90 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA AV ČR(CZ) M100301201 Institutional research plan: CEZ:AV0Z10300504 Keywords : Arnoldi process * early termination * GMRES method * prescribed GMRES convergence * Arnoldi method * prescribed Ritz values Subject RIV: BA - General Mathematics Impact factor: 1.417, year: 2014
Deflation of Eigenvalues for GMRES in Lattice QCD
International Nuclear Information System (INIS)
Morgan, Ronald B.; Wilcox, Walter
2002-01-01
Versions of GMRES with deflation of eigenvalues are applied to lattice QCD problems. Approximate eigenvectors corresponding to the smallest eigenvalues are generated at the same time that the linear equations are solved. The eigenvectors improve convergence for the linear equations, and they help solve subsequent right-hand sides.
Research on an efficient preconditioner using GMRES method for the MOC
International Nuclear Information System (INIS)
Takeda, Satoshi; Kitada, Takanori; Smith, Michael A.
2011-01-01
The modeling accuracy of reactor analysis techniques has improved considerably with progressive improvements in computational capabilities. The method of characteristics (MOC) solves the neutron transport equation using tracking lines that simulate neutron paths. The MOC is an accurate calculation method and is becoming a major solver because of rapid advances in computing. In this methodology, the transport equation is discretized into many spatial meshes and energy groups, and the discretization generates a large system that is computationally expensive to solve. To reduce the computational cost of the MOC calculation, we investigate the generalized minimal residual (GMRES) method as an accelerator and develop an efficient preconditioner for the MOC calculation. The preconditioner was constructed by simplifying a rigorous preconditioner, and its efficiency was verified by comparing the number of iterations required by a one-dimensional MOC code.
Nachtigal, Noel M.
1991-01-01
The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
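Both methods discussed here are available in SciPy, so the QMR-versus-GMRES comparison can be reproduced in miniature; the test matrix is an illustrative assumption:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, qmr

n = 250
# non-Hermitian tridiagonal test matrix
A = diags([-1.0, 2.4, -0.7], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

xq, iq = qmr(A, b)             # quasi-minimal residual (two-sided Lanczos)
xg, ig = gmres(A, b, restart=20)
print(iq, ig)
print(np.linalg.norm(b - A @ xq), np.linalg.norm(b - A @ xg))
```

QMR's quasi-minimization uses short recurrences (constant work per step), whereas GMRES minimizes the residual exactly at the cost of storing the whole Krylov basis, which is the trade-off the text describes.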
R3GMRES: including prior information in GMRES-type methods for discrete inverse problems
DEFF Research Database (Denmark)
Dong, Yiqiu; Garde, Henrik; Hansen, Per Christian
2014-01-01
Lothar Reichel and his collaborators proposed several iterative algorithms that augment the underlying Krylov subspace with an additional low-dimensional subspace in order to produce improved regularized solutions. We take a closer look at this approach and investigate a particular Regularized Ra...
The performances of R GPU implementations of the GMRES method
Directory of Open Access Journals (Sweden)
Bogdan Oancea
2018-03-01
Although the performance of commodity computers has improved drastically with the introduction of multicore processors and GPU computing, the standard R distribution is still based on a single-threaded model of computation, using only a small fraction of the computational power now available in most desktops and laptops. Modern statistical software packages rely on high-performance implementations of the linear algebra routines that are at the core of several important leading-edge statistical methods. In this paper we present a GPU implementation of the GMRES iterative method for solving linear systems. We compare the performance of this implementation with a pure single-threaded CPU version. We also investigate the performance of our implementation using the different GPU packages now available for R, such as gmatrix, gputools, and gpuR, which are based on the CUDA or OpenCL frameworks.
Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida
2017-04-01
Every noise problem requires a different solution. In this research, the noise to be cancelled comes from a roadway. The adaptive least mean squares (LMS) algorithm is one algorithm that can be used to cancel such noise. Residual noise always appears and cannot be erased completely. This research aims to characterize the residual noise left after LMS cancellation of vehicle noise, so that it no longer poses a problem. The LMS algorithm was used to predict the vehicle noise and minimize the error. The distribution of the residual noise was examined to determine its characteristics. The statistics of the residual noise are close to a normal distribution with parameters 0.0435 and 1.13, and its autocorrelation is impulse-like. In conclusion, the residual noise is insignificant.
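A minimal NumPy sketch of the LMS noise canceller described above: a reference noise signal is filtered adaptively to predict the noise reaching the primary microphone, and the residual is the cleaned output. The filter order, step size, and synthetic road-noise path are illustrative assumptions:

```python
import numpy as np

def lms_cancel(ref, primary, order=8, mu=0.005):
    """LMS adaptive noise canceller: predict the noise in `primary` from
    `ref`; the residual e is the cleaned output (signal + residual noise)."""
    w = np.zeros(order)
    e = np.zeros(len(primary))
    for k in range(order, len(primary)):
        u = ref[k - order + 1:k + 1][::-1]   # ref[k], ref[k-1], ...
        y = w @ u                            # current noise estimate
        e[k] = primary[k] - y                # residual after cancellation
        w += 2 * mu * e[k] * u               # LMS weight update
    return e, w

rng = np.random.default_rng(0)
n = 4000
ref = rng.standard_normal(n)                        # reference road noise
noise = np.convolve(ref, [0.8, -0.4, 0.2])[:n]      # noise path to the mic
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))    # wanted signal
e, w = lms_cancel(ref, signal + noise)
resid = (e - signal)[2000:]   # residual noise after the filter has converged
print(np.var(noise), np.var(resid))
```

The residual `resid` is the quantity whose distribution the paper studies; its variance is far below that of the raw noise once the weights converge.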
Quantum Algorithms for Weighing Matrices and Quadratic Residues
van Dam, Wim
2000-01-01
In this article we investigate how the structure of combinatorial objects like Hadamard matrices and weighing matrices can be employed to devise new quantum algorithms. We show how the properties of a weighing matrix can be used to construct a problem for which the quantum query complexity is significantly lower than the classical one. It is pointed out that this scheme captures both Bernstein and Vazirani's inner-product protocol and Grover's search algorithm. In the second part of the ar...
Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems
Czech Academy of Sciences Publication Activity Database
Morikuni, Keiichi; Hayami, K.
2015-01-01
Roč. 36, č. 1 (2015), s. 225-250 ISSN 0895-4798 Institutional support: RVO:67985807 Keywords : least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.883, year: 2015
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong
2015-01-01
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
Any Ritz Value Behavior Is Possible for Arnoldi and for GMRES
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.
2012-01-01
Roč. 33, č. 3 (2012), s. 958-978 ISSN 0895-4798 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA AV ČR(CZ) M100300901 Institutional research plan: CEZ:AV0Z10300504 Keywords : Ritz values * Arnoldi process * Arnoldi method * GMRES method * prescribed convergence * interlacing properties Subject RIV: BA - General Mathematics Impact factor: 1.342, year: 2012
An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues
Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis
2011-01-01
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
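The genetic algorithm's role above, tuning segmentation parameters to maximize agreement with hand-made templates, can be illustrated on a toy problem where the tuned parameter is a single intensity threshold. The synthetic pixel populations and GA settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "soil vs residue" pixels: two intensity populations
img = np.concatenate([rng.normal(0.3, 0.05, 5000),    # soil pixels
                      rng.normal(0.7, 0.05, 5000)])   # residue pixels
truth = np.concatenate([np.zeros(5000), np.ones(5000)])

def fitness(thr):
    """Agreement between the thresholded segmentation and the template."""
    return np.mean((img > thr) == truth)

# minimal real-coded GA: truncation selection, Gaussian mutation, elitism
pop = rng.uniform(0.0, 1.0, 30)
for gen in range(25):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest
    children = np.repeat(parents, 3) + rng.normal(0, 0.03, 30)
    children[0] = parents[-1]                          # elitism: best survives
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(t) for t in pop])]
print(best, fitness(best))
```

The evolved threshold settles near the midpoint between the two populations, the analogue of the paper's high similarity scores against manually traced templates.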
Directory of Open Access Journals (Sweden)
M. Susmikanti
2015-12-01
In the nuclear industry, high-temperature treatment of materials is a factor that requires special attention. The properties of the materials used, including their strength, need to be assessed. The measurement of material properties under thermal processes may reflect residual stresses. The use of a genetic algorithm (GA) to determine the optimal residual stress is one way to determine the strength of a material. In residual stress models with several parameters, it is sometimes difficult to obtain the optimal value through analytical or numerical calculation. Here, GA is an efficient algorithm that can generate optimal values, both minima and maxima. The purposes of this research are to optimize the variables in residual stress models using GA and to predict the center of the residual stress distribution using a fuzzy neural network (FNN), with an artificial neural network (ANN) used for modeling. In this work a single-material 316/316L stainless steel bar is modeled. The minimal residual stresses of the material at high temperatures were obtained with GA and with analytical calculations. At a temperature of 650 °C, the GA residual stress estimate converged at -711.3689 MPa at a distance of 0.002934 mm from the center point, whereas the analytical result at that temperature and position is -975.556 MPa. At a temperature of 850 °C, the GA result was -969.868 MPa at 0.002757 mm from the center point, while the analytical result was -1061.13 MPa. The difference in residual stress between the GA and analytical results is about 27% at 650 °C and 8.67% at 850 °C. The distribution of residual stress is concentrated around a coordinate of (-76, 76) MPa. The residual stress model is a degree-two polynomial with coefficients of 50.33, -76.54, and -55.2, respectively, with a standard deviation of 7.874.
Tensor-GMRES method for large sparse systems of nonlinear equations
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
Multiple solutions to dense systems in radar scattering using a preconditioned block GMRES solver
Energy Technology Data Exchange (ETDEWEB)
Boyse, W.E. [Advanced Software Resources, Inc., Santa Clara, CA (United States)
1996-12-31
Multiple right-hand sides occur in radar scattering calculations in the computation of the simulated radar return from a body at a large number of angles. Each desired angle requires a right-hand side vector to be computed and the solution generated. These right-hand sides are naturally smooth functions of the angle parameters, and this property is utilized in a novel way to compute solutions an order of magnitude faster than LINPACK. The modeling technique addressed is the Method of Moments (MOM), i.e. a boundary element method for time-harmonic Maxwell's equations. Discretization by this method produces general complex dense systems with ranks in the hundreds to hundreds of thousands. The usual way to produce the required multiple solutions is via LU factorization and solution routines such as those found in LINPACK. Our method uses the block GMRES iterative method to directly iterate a subset of the desired solutions to convergence.
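The premise that right-hand sides vary smoothly with angle can be illustrated with a toy experiment: since x(θ) = A⁻¹b(θ) inherits the smoothness of b(θ), solutions computed at a few sample angles pin down good approximations in between. (The paper itself exploits this smoothness through a preconditioned block GMRES iteration on a subset of the solutions, not by interpolation; the small dense system below merely stands in for a MoM matrix.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# small well-conditioned dense system standing in for the MoM matrix
A = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

def rhs(theta):
    # right-hand side varying smoothly with the angle parameter (assumed form)
    return np.cos(theta * np.arange(n) / n)

coarse = np.linspace(0.0, np.pi, 17)      # angles at which the system is solved
sols = np.array([np.linalg.solve(A, rhs(t)) for t in coarse])

def interp_solution(theta):
    # interpolate each solution component between the two nearest solved angles
    return np.array([np.interp(theta, coarse, sols[:, k]) for k in range(n)])

theta = 0.6                               # an angle not in the coarse set
exact = np.linalg.solve(A, rhs(theta))
rel_err = np.linalg.norm(interp_solution(theta) - exact) / np.linalg.norm(exact)
```

Because the solution map is linear in b, interpolating solutions is as accurate as interpolating the smooth right-hand sides themselves, so rel_err comes out small.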
By how much can Residual Minimization Accelerate the Convergence of Orthogonal Residual Methods?
Czech Academy of Sciences Publication Activity Database
Gutknecht, M. H.; Rozložník, Miroslav
2001-01-01
Roč. 27, - (2001), s. 189-213 ISSN 1017-1398 R&D Projects: GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : system of linear algebraic equations * iterative method * Krylov space method * conjugate gradient method * biconjugate gradient method * CG * CGNE * CGNR * CGS * FOM * GMRes * QMR * TFQMR * residual smoothing * MR smoothing * QMR smoothing Subject RIV: BA - General Mathematics Impact factor: 0.438, year: 2001
Zhou, Xin; Jun, Sun; Zhang, Bing; Jun, Wu
2017-07-01
In order to improve the reliability of spectral features extracted by wavelet transform, a method combining the wavelet transform (WT) with a bacterial colony chemotaxis and support vector machine (BCC-SVM) algorithm, denoted WT-BCC-SVM, is proposed in this paper. In addition, we aimed to identify different kinds of pesticide residues on lettuce leaves in a novel, rapid and non-destructive way by using fluorescence spectroscopy. Fluorescence spectra of 150 lettuce leaf samples carrying five different kinds of pesticide residues on the leaf surface were obtained using a Cary Eclipse fluorescence spectrometer. Standard normal variate detrending (SNV detrending) and Savitzky-Golay smoothing coupled with standard normal variate detrending (SG-SNV detrending) were used to preprocess the raw spectra. BCC-SVM and plain SVM classification models were then established based on the full spectra (FS) and on wavelet transform characteristics (WTC) selected by WT, respectively. The results showed that the accuracies of the training, calibration and prediction sets of the best classification model (SG-SNV detrending-WT-BCC-SVM) were 100%, 98% and 93.33%, respectively. These results indicate that it is feasible to use WT-BCC-SVM to establish a diagnostic model for different kinds of pesticide residues on lettuce leaves.
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have then been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both the synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
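DE/best/1/bin, the strategy named above, is compact enough to sketch directly. The toy objective below stands in for the gravity-anomaly misfit; the population size, mutation factor F and crossover rate CR are common textbook defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(p):
    # illustrative 2-D misfit with minimum at (3.0, 0.5)
    return (p[0] - 3.0) ** 2 + 10.0 * (p[1] - 0.5) ** 2

def de_best_1_bin(f, bounds, np_=20, F=0.7, CR=0.9, gens=200):
    """DE/best/1/bin: mutate around the current best, binomial crossover,
    greedy selection."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (np_, len(lo)))
    cost = np.array([f(p) for p in pop])
    for _ in range(gens):
        best = pop[np.argmin(cost)]
        for i in range(np_):
            r1, r2 = rng.choice(np_, 2, replace=False)
            mutant = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            # binomial crossover with one guaranteed mutant component
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            c = f(trial)
            if c < cost[i]:          # greedy selection
                pop[i], cost[i] = trial, c
    return pop[np.argmin(cost)], cost.min()

best_p, best_cost = de_best_1_bin(objective, [(-10, 10), (-10, 10)])
```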
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present analytic-approximate solutions of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation successfully. The convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use and ready to be applied to a wide range of fractional partial differential equations.
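For reference, the Caputo fractional derivative invoked above is standardly defined, for n − 1 < α < n, by

```latex
D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-\tau)^{\,n-\alpha-1}\, f^{(n)}(\tau)\, d\tau ,
\qquad n-1 < \alpha < n ,
```

where f^(n) is the ordinary n-th derivative; for 0 < α < 1 this reduces to a single weighted integral of f′, which is the case relevant to most time-fractional wave models.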
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, incur computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical restarted GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and the boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite
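The core idea (minimize the residual norm over a basis that is updated every iteration instead of rebuilt at restarts) can be sketched with a generic Anderson-type acceleration of a black-box fixed-point step x → G(x). This is a sketch in the spirit of the abstract, not the authors' exact Boostconv algorithm; the linear test map is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
# linear fixed-point map x -> M x + c with slow plain convergence
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
M = Q @ np.diag(np.linspace(0.5, 0.95, n)) @ Q.T   # spectral radius 0.95
c = rng.standard_normal(n)
G = lambda x: M @ x + c
x_star = np.linalg.solve(np.eye(n) - M, c)          # exact fixed point

def accelerate(G, x0, iters=50, m=5):
    """Residual-minimizing (Anderson-type) acceleration over the last m iterates."""
    X, F, x = [], [], x0
    for _ in range(iters):
        fx = G(x) - x                       # residual of the fixed-point map
        X.append(x); F.append(fx)
        X, F = X[-m:], F[-m:]
        if len(F) > 1:
            dF = np.diff(np.array(F), axis=0)   # residual differences
            dX = np.diff(np.array(X), axis=0)   # iterate differences
            # least-squares residual minimization over the stored basis
            theta = np.linalg.lstsq(dF.T, fx, rcond=None)[0]
            x = x + fx - (dX + dF).T @ theta
        else:
            x = x + fx
    return x

x_plain = np.zeros(n)
for _ in range(50):
    x_plain = G(x_plain)                    # plain black-box iteration

err_plain = np.linalg.norm(x_plain - x_star)
err_acc = np.linalg.norm(accelerate(G, np.zeros(n)) - x_star)
```

On this toy map the accelerated iteration reaches a far smaller error than the same number of plain fixed-point sweeps, mirroring the claimed convergence boost.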
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.
2017-12-01
A leading source of outdoor emissions in northwestern India is crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified due to the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fire emissions dataset. Correction factors based on MODIS active fire detections have previously been used to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) together with higher-spatial-resolution Landsat observations (30 m x 30 m). Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated as ModL2T NBR, and results in an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small-fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affects approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlations between ModL2T NBR-derived burned area and the results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error include small median landholding sizes (1-3 ha), the heterogeneous spatial distribution of the two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability. The burned area estimates of this study can be used to build
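The NBR underlying the ModL2T classification is the standard band ratio NBR = (NIR - SWIR)/(NIR + SWIR); burning lowers NBR, so a two-tailed rule can combine a low post-fire NBR with a large pre-to-post drop (dNBR). The reflectances and threshold values below are placeholders for illustration, not the study's calibrated thresholds.

```python
import numpy as np

def nbr(nir, swir):
    # standard Normalized Burn Ratio from NIR and SWIR reflectances
    return (nir - swir) / (nir + swir)

# three made-up pixels: burned, unburned, burned
pre_nir,  pre_swir  = np.array([0.45, 0.40, 0.42]), np.array([0.20, 0.22, 0.21])
post_nir, post_swir = np.array([0.18, 0.39, 0.17]), np.array([0.30, 0.23, 0.29])

nbr_pre, nbr_post = nbr(pre_nir, pre_swir), nbr(post_nir, post_swir)
dnbr = nbr_pre - nbr_post
# two-tailed rule: burned when post-fire NBR is low AND the drop is large
# (both 0.0 and 0.3 are illustrative thresholds, not ModL2T's)
burned = (nbr_post < 0.0) & (dnbr > 0.3)
```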
Directory of Open Access Journals (Sweden)
Byung Duk Song
2017-09-01
Full Text Available In a green manufacturing system that pursues the reuse of used products, the residual value of collected used products (CUP) greatly affects a variety of managerial decisions made to construct profitable and environmentally sound remanufacturing plans. This paper deals with a closed-loop green manufacturing system for companies which perform both manufacturing with raw materials and remanufacturing with CUP. The amount of CUP is assumed to be a function of the buy-back cost, while the quality level of the CUP, which determines the residual value, follows a known distribution. In addition, the remanufacturing cost can differ according to the quality of the CUP. Moreover, companies are nowadays subject to environment-related laws such as Extended Producer Responsibility (EPR); a company should therefore collect more used products than its obligatory take-back quota or face fines from the government for not meeting the quota. Through the development of mathematical models, two kinds of inspection policies are examined to validate the efficiency of two different operation processes. To find a managerial solution, a genetic algorithm is proposed and tested with numerical examples.
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimal residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
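The Galerkin projection preprocessing step can be sketched as follows: before iterating on a new right-hand side, project the system onto the span of previously computed solutions to obtain an initial guess whose residual is already small. The setup below is our own minimal illustration (the paper's algorithm additionally enriches GMRES with approximate eigenvectors).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
# well-conditioned test matrix standing in for the simulation operator
A = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

# solutions already computed for three related right-hand sides
B_prev = rng.standard_normal((n, 3))
X_prev = np.linalg.solve(A, B_prev)
V = np.linalg.qr(X_prev)[0]          # orthonormal basis of old solutions

# a new right-hand side close to the span of the old ones (assumed scenario)
b_new = B_prev @ np.array([0.5, 0.3, 0.2]) + 0.005 * rng.standard_normal(n)

# Galerkin condition V^T (b - A V y) = 0 gives the projected initial guess
y = np.linalg.solve(V.T @ A @ V, V.T @ b_new)
x0 = V @ y

res0 = np.linalg.norm(b_new - A @ x0) / np.linalg.norm(b_new)
```

res0 comes out far below 1 (the relative residual of a zero initial guess), so a subsequent GMRES run starts much closer to the solution.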
International Nuclear Information System (INIS)
Wang Lincong; Donald, Bruce Randall
2004-01-01
We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics
International Nuclear Information System (INIS)
Cioffi, F.; Hidalgo, J.I.; Fernández, R.; Pirling, T.; Fernández, B.; Gesto, D.; Puente Orench, I.; Rey, P.; González-Doncel, G.
2014-01-01
Procedures based on equilibrium conditions (stress and bending moment) have been used to obtain an unstressed lattice spacing, d0, a crucial requirement for calculating the residual stress (RS) profile across a friction stir welded (FSW) joint in a 10 mm thick plate of age-hardenable AA2024 alloy. Two procedures that take advantage of neutron diffraction measurements have been used. First, equilibrium conditions were imposed on sections parallel to the weld so that a constant d0 value corresponding to the base material region could be calculated analytically. Second, balance conditions were imposed on a section transverse to the weld; then, using these data and a genetic algorithm, suitable d0 values for the different regions of the weld were found. For several reasons, the comb method has proved to be inappropriate for RS determination in the case of age-hardenable alloys. However, the equilibrium conditions, together with the genetic algorithm, have been shown to be very suitable for determining RS profiles in FSW joints of these alloys, where inherent microstructural variations of d0 across the weld are expected.
Energy Technology Data Exchange (ETDEWEB)
Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe, E-mail: yaqi.wang@inl.gov, E-mail: cristian.rabiti@inl.gov, E-mail: giuseppe.palmiotti@inl.gov [Idaho National Laboratory, Idaho Falls, ID (United States)
2011-07-01
This paper proposes a new set of Krylov solvers, CG and GMRes, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (Finite Element Method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented in order to show that they can be a good supplement for solving the PN-HFEM system. (author)
International Nuclear Information System (INIS)
Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe
2011-01-01
This paper proposes a new set of Krylov solvers, CG and GMRes, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (Finite Element Method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented in order to show that they can be a good supplement for solving the PN-HFEM system. (author)
International Nuclear Information System (INIS)
Sahotra, I.M.
2006-01-01
The principal effect of unloading a material strained into the plastic range is to create a permanent set (plastic deformation) which, if restricted somehow, gives rise to a system of stresses that is either self-balancing within the same member or balanced by reactions from other members of the structure, known as residual stresses. These stresses remain locked in the body, or a part of it, in the absence of any external loading. Residual stresses are induced during hot rolling and welding by differential cooling; during cold forming, extruding, cold straightening and spot heating; and during fabrication by the forced fitting of components, which constrains the structure to a particular geometry. The areas which cool more quickly develop residual compressive stresses, while the slower-cooling areas develop residual tensile stresses, and a self-balancing or reaction-balanced system of residual stresses is formed. The phenomenon of residual stresses finds its most challenging application in surface modification techniques, where it determines the endurance mechanism against fracture and fatigue failures. This paper discusses the mechanism of residual stresses: how residual stresses are formed and how they behave under the action of external forces, as in the cases of a circular bar under limit torque, a rectangular beam under limit moment, the reclaiming of shafts, welds and peening, etc. (author)
International Nuclear Information System (INIS)
Macherauch, E.
1978-01-01
Residual stresses are stresses which exist in a material without the influence of external forces and moments. They come into existence when a volume element of a material changes its form as a consequence of mechanical, thermal and/or chemical processes and is hindered in doing so by neighbouring volumes. Bodies with residual stress are in mechanical balance. These residual stresses can be revealed by any mechanical intervention that disturbs this balance. Acoustical, optical, radiological and magnetic methods sensitive to material changes caused by residual stress can also serve for determining residual stress. Residual stresses have an ambivalent character. In technical practice, they are feared and valued at the same time. They cause trouble because they can be the cause of unexpected behaviour of construction elements. They are feared since they can cause failure, in the worst case with catastrophic consequences. They are appreciated, on the other hand, because in many cases they can contribute to improvements of the material behaviour under certain circumstances. But they are especially liked for giving convenient and (this is most important) mostly uncontrollable explanations, for only in very few cases do we have enough knowledge and adequate means for the objective evaluation of residual stresses. (orig.) [de
International Nuclear Information System (INIS)
Mulder, E.; Duin, P.J. van; Grootenboer, G.J.
1995-01-01
A summary is presented of the many investigations that have been done on the solid residues of atmospheric fluidized bed combustion (AFBC). These residues are bed ash, cyclone ash and bag filter ash. Physical and chemical properties are discussed, and the various uses of the residues (in fillers, bricks and gravel, and for the recovery of aluminium) are summarised. Toxicological properties of fly ash and stack ash are discussed, as are the risks of pneumoconiosis for workers handling fly ash and the contamination of water by ashes. On the basis of present information it is concluded that the risks to public health from exposure to emissions of coal fly ash from AFBC appear small or negligible, as are the health risks to workers in the coal fly ash processing industry. 35 refs., 5 figs., 12 tabs
International Nuclear Information System (INIS)
D'Elboux, C.V.; Paiva, I.B.
1980-01-01
Exploration for uranium carried out over a major portion of the Rio Grande do Sul Shield has revealed a number of small residual basins developed along glacially eroded channels of pre-Permian age. Mineralization of uranium occurs in two distinct sedimentary units. The lower unit consists of rhythmites overlain by a sequence of black shales, siltstones and coal seams, while the upper one is dominated by sandstones of probable fluvial origin. (Author) [pt
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...
Residual nilpotence and residual solubility of groups
International Nuclear Information System (INIS)
Mikhailov, R V
2005-01-01
The properties of the residual nilpotence and the residual solubility of groups are studied. The main objects under investigation are the class of residually nilpotent groups such that each central extension of these groups is also residually nilpotent and the class of residually soluble groups such that each Abelian extension of these groups is residually soluble. Various examples of groups not belonging to these classes are constructed by homological methods and methods of the theory of modules over group rings. Several applications of the theory under consideration are presented and problems concerning the residual nilpotence of one-relator groups are considered.
De Götzen, Amalia; Mion, Luca; Tache, Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Research on wind field algorithm of wind lidar based on BP neural network and grey prediction
Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei
2018-01-01
This paper uses a BP neural network and grey prediction to forecast and study radar wind fields. To reduce the residual error of the wind field prediction made with the grey algorithm, the minimum of the residual error function is sought: the residuals of the grey algorithm are used to train a BP neural network, the trained network model forecasts the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. Test data show that the grey algorithm corrected by the BP neural network effectively reduces the residual error and improves the prediction precision.
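The residual-correction scheme can be sketched with a classical GM(1,1) grey model; for brevity the residual predictor below is a simple mean correction rather than a trained BP network, and the short wind-speed-like series is invented for illustration.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a classical GM(1,1) grey model and return a restored-value predictor."""
    x1 = np.cumsum(x0)                        # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    def predict(k):                           # k = 0 reproduces x0[0]
        if k == 0:
            return x0[0]
        xk1 = (x0[0] - b / a) * np.exp(-a * k) + b / a
        xk0 = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return xk1 - xk0                      # restore by first difference
    return predict

series = np.array([10.0, 10.8, 11.7, 12.6, 13.6])   # invented wind-speed-like data
predict = gm11_fit(series)
fitted = np.array([predict(k) for k in range(len(series))])
residuals = series - fitted
# residual correction: shift the forecast by the mean in-sample residual
# (the paper trains a BP network on these residuals instead)
corrected_next = predict(len(series)) + residuals.mean()
```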
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
International Nuclear Information System (INIS)
Moore, Peter K.
2003-01-01
Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied
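Incomplete assembly as described, a fixed number of nonzero entries per row plus a drop tolerance, mirrors ILUT's dual-threshold dropping. A minimal sketch of one row's assembly under these two rules (parameter names and the exact dropping policy are assumptions, not the paper's code):

```python
def assemble_row_incomplete(contributions, p, droptol):
    """Accumulate element contributions into one sparse matrix row, then
    drop entries with magnitude <= droptol and keep at most the p largest
    remaining entries (diagonal-preservation details are omitted here)."""
    row = {}
    for col, val in contributions:
        row[col] = row.get(col, 0.0) + val
    kept = {c: v for c, v in row.items() if abs(v) > droptol}
    if len(kept) > p:
        largest = sorted(kept, key=lambda c: abs(kept[c]), reverse=True)[:p]
        kept = {c: kept[c] for c in largest}
    return kept

# Two tiny contributions to column 1 cancel below the drop tolerance, and
# only the p = 2 largest surviving entries are stored.
row = assemble_row_incomplete(
    [(0, 1.0), (1, 0.001), (2, -3.0), (1, 0.001), (3, 0.5)],
    p=2, droptol=0.01)
```

Real codes would also always retain the diagonal entry; that policy is left out to keep the sketch short.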
Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods
Czech Academy of Sciences Publication Activity Database
Paige, C. C.; Strakoš, Zdeněk
2002-01-01
Roč. 23, č. 6 (2002), s. 1899-1924 ISSN 1064-8275 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: AV0Z1030915 Keywords : linear equations * eigenproblem * large sparse matrices * iterative solutions * Krylov subspace methods * Arnoldi method * GMRES * modified Gram-Schmidt * least squares * total least squares * singular values Subject RIV: BA - General Mathematics Impact factor: 1.291, year: 2002
Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.
Energy Technology Data Exchange (ETDEWEB)
Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-09-29
The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updating promise to reduce this growth to V^(4/3)
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
International Nuclear Information System (INIS)
Berecz, I.
1982-01-01
Determination of the residual gas composition in vacuum systems by a special mass spectrometric method was presented. The quadrupole mass spectrometer (QMS) and its application in thin film technology was discussed. Results, partial pressure versus time curves as well as the line spectra of the residual gases in case of the vaporization of a Ti-Pd-Au alloy were demonstrated together with the possible construction schemes of QMS residual gas analysers. (Sz.J.)
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles so that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
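At low speed, the kinematic condition mentioned above reduces, for a bicycle-model approximation of a TWS vehicle, to choosing the steering angle whose turning radius equals the road's radius of curvature. A minimal sketch under that assumption (the paper's 4WS geometry and dynamic correction are omitted):

```python
import math

def ackermann_steering(wheelbase, road_radius):
    """Front steering angle that places the kinematic center of rotation
    at the road's center of curvature (bicycle-model approximation)."""
    return math.atan(wheelbase / road_radius)

def turning_radius(wheelbase, steer_angle):
    """Kinematic turning radius implied by a given steering angle."""
    return wheelbase / math.tan(steer_angle)

# For a 2.7 m wheelbase on a 50 m radius curve, the chosen angle puts the
# kinematic rotation center exactly on the road's curvature center.
steer = ackermann_steering(2.7, 50.0)
```

The closed-loop version described in the abstract would then perturb this angle using position and orientation errors as feedback; only the feedforward kinematic step is shown here.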
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
International Nuclear Information System (INIS)
Medina Bermudez, Clara Ines
1999-01-01
The topic of solid residues is of great interest and concern for the authorities, institutions and communities, which identify in them a true threat to human health and the environment, related to the aesthetic deterioration of urban centers and the natural landscape, the proliferation of disease vectors, and the effects on biodiversity. Within the broad spectrum of topics related to environmental protection, the inadequate handling of solid and hazardous residues occupies an important place in the definition of environmentally sustainable policy and practice. Industrial development and population growth have caused a continuous increase in the production of solid residues; likewise, their composition becomes more heterogeneous day after day. The basis for good handling includes appropriate intervention at the different stages of integral residue management: separation at the source, collection, handling, reuse, treatment, final disposal and the institutional organization of the management. Hazardous residues raise the greatest concern. These residues range from the pathogenic type generated in health care and hospital establishments to the combustible, inflammable, explosive, radioactive, volatile, corrosive, reactive or toxic residues associated with numerous industrial processes common in our developing countries.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel ...
Model-based leakage localization in drinking water distribution networks using structured residuals
Puig Cayuela, Vicenç; Rosich, Albert
2013-01-01
In this paper, a new model-based approach to leakage localization in drinking water networks is proposed, based on generating a set of structured residuals. The residual evaluation relies on a numerical method built on an enhanced Newton-Raphson algorithm. The proposed method is suitable for water network systems because the non-linearities of the model make it impossible to derive analytical residuals. Furthermore, the computed residuals are designed so that leaks are decoupled, which impro...
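Once residuals are structured so that each leak excites a known subset of them, localization reduces to matching the observed residual pattern against a fault signature matrix. A toy sketch of that matching step (signature names and the threshold are illustrative; the Newton-Raphson model evaluation that produces the residuals is omitted):

```python
def localize_leak(residuals, signatures, threshold):
    """Binarize the residual vector against a threshold and return the
    leak hypotheses whose binary signature matches the observed pattern.
    signatures: dict mapping leak location -> expected 0/1 pattern."""
    pattern = [1 if abs(r) > threshold else 0 for r in residuals]
    return [name for name, sig in signatures.items() if sig == pattern]

# Residuals 1 and 3 fire, residual 2 stays quiet: only leak_A's
# structured signature explains the observation.
signatures = {"leak_A": [1, 0, 1], "leak_B": [0, 1, 1]}
candidates = localize_leak([0.9, 0.02, 1.4], signatures, threshold=0.1)
```

The decoupling property in the abstract is what makes the columns of the signature matrix distinct, so a single consistent candidate survives.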
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
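Using residual strength as the damage accumulation metric means tracking how the remaining strength decays under each load block and declaring failure when it falls to the applied stress. A toy sketch with an illustrative power-law S-N curve and a linear strength-degradation rule (a simplification for one ply, not FADAS itself):

```python
def sn_life(stress, s_static=100.0, b=0.1):
    """Illustrative power-law S-N curve: N = (s_static / stress) ** (1/b)."""
    return (s_static / stress) ** (1.0 / b)

def residual_strength_after(blocks, s_static=100.0, b=0.1):
    """Track residual strength under variable-amplitude load blocks using
    a linear strength-degradation rule. blocks: list of (stress, cycles).
    Returns 0.0 on failure (residual strength reached the applied stress)."""
    strength = s_static
    for stress, n in blocks:
        if stress >= strength:
            return 0.0  # failure within this block
        life = sn_life(stress, s_static, b)
        # Degrade linearly from static strength toward the stress level.
        strength -= (s_static - stress) * (n / life)
        if strength <= stress:
            return 0.0
    return strength

# Half the nominal life at 50% stress leaves 75% of static strength.
remaining = residual_strength_after([(50.0, 512)])
```

Because the degradation depends on the order of the blocks, a model of this shape can reproduce load sequence effects that a pure Palmgren-Miner cycle count cannot.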
Characterization of Hospital Residuals
International Nuclear Information System (INIS)
Blanco Meza, A.; Bonilla Jimenez, S.
1997-01-01
The main objective of this investigation is the characterization of solid residuals. A description of the handling of the liquid and gaseous waste generated in hospitals is also given, identifying the sources where they originate. To achieve the proposed objective, the work was divided into three stages. The first was planning and coordination with each hospital center, so that the schedule for waste collection could be determined. In the second stage fieldwork was carried out, consisting of gathering quantitative and qualitative information on the general state of residual handling. In the third and last stage, the information previously obtained was organized to express the results as the production rate per day per bed, the generation of solid residuals for sampled services, the types of solid residuals and their density. With the obtained results, criteria are established to determine design parameters for final disposal, whether by incineration, trituration, sanitary landfill or recycling of some materials, and storage policies for the solid residuals that allow the collection frequency to be determined. The study concludes that it is necessary to improve the conditions of residual handling in some respects: to provide the cleaning personnel with the minimum collection and safety equipment needed to carry out this work efficiently, and to maintain control of all dangerous waste, such as sharp or contaminated materials. In this way, an appreciable reduction of the impact on the environment is guaranteed. (Author) [es
Analysis of Pathfinder SST algorithm for global and regional conditions
Indian Academy of Sciences (India)
SST algorithms to improve the present accuracy of surface temperature measurements ... regions, except in the North Atlantic and adjacent seas, where the residuals are always positive. ... the stratosphere causing significant contamination of ...
Marinović, Marin; Fumić, Nera; Laginja, Stanislava; Aldo, Ivanicić
2014-10-01
Prolonged life expectancy increases the proportion of the elderly population. The incidence of injury increases with age. A variety of comorbidities (circulatory disorders, diabetes mellitus, metabolic imbalances, etc.) and the reduced biological tissue regeneration potential that accompanies older age lead to a higher prevalence of chronic wounds. This poses a significant health, social and economic burden upon society. Injuries in the elderly demand significant involvement of medical and non-medical staff in the prehospital and hospital treatment of the injured, with high material consumption and reduced quality of life in these patients, their families and caregivers. Debridement is a crucial medical procedure in the treatment of acute and chronic wounds. The aim of debridement is the removal of all residues in the wound bed and its surroundings. Debridement can be conducted several times when properly indicated. There are several debridement procedures, each having advantages and disadvantages. The method of debridement is chosen by the physician or other medical professional, based on wound characteristics and the clinician's expertise and capabilities. In the same type of wound, various types of debridement can be combined, all with the aim of faster and better wound healing.
International Nuclear Information System (INIS)
2013-06-01
The IAEA attaches great importance to the dissemination of information that can assist Member States in the development, implementation, maintenance and continuous improvement of systems, programmes and activities that support the nuclear fuel cycle and nuclear applications, and that address the legacy of past practices and accidents. However, radioactive residues are found not only in nuclear fuel cycle activities, but also in a range of other industrial activities, including: - Mining and milling of metalliferous and non-metallic ores; - Production of non-nuclear fuels, including coal, oil and gas; - Extraction and purification of water (e.g. in the generation of geothermal energy, as drinking and industrial process water; in paper and pulp manufacturing processes); - Production of industrial minerals, including phosphate, clay and building materials; - Use of radionuclides, such as thorium, for properties other than their radioactivity. Naturally occurring radioactive material (NORM) may lead to exposures at some stage of these processes and in the use or reuse of products, residues or wastes. Several IAEA publications address NORM issues with a special focus on some of the more relevant industrial operations. This publication attempts to provide guidance on managing residues arising from different NORM type industries, and on pertinent residue management strategies and technologies, to help Member States gain perspectives on the management of NORM residues
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
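The iterated map can be caricatured in a few lines: pick two members of a fixed-size ensemble, let them interact by composition, insert the product, and discard a random member. This toy uses ordinary numeric functions rather than the paper's lambda-calculus language, so it only illustrates the shape of the dynamics:

```python
import random

def turing_gas_step(population, rng):
    """One collision in a toy 'function gas': two functions interact by
    composition, the product enters the gas, and a random member is
    removed to keep the ensemble size fixed."""
    f, g = rng.sample(population, 2)
    product = lambda x, f=f, g=g: f(g(x))   # interaction = composition
    population.append(product)
    population.pop(rng.randrange(len(population)))  # keep size fixed
    return population

# A fixed-size ensemble of integer maps, iterated for 20 collisions.
gas = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
rng = random.Random(1)
for _ in range(20):
    turing_gas_step(gas, rng)
```

In the actual model the objects are expressions of a formal language, so self-replicators and richer organizations can arise; closures over Python ints cannot capture that, only the fixed-size collide-and-replace loop.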
Energy Technology Data Exchange (ETDEWEB)
Ezeilo, A N; Webster, G A [Imperial College, London (United Kingdom); Webster, P J [Salford Univ. (United Kingdom)
1997-04-01
Because neutrons can penetrate distances of up to 50 mm in most engineering materials, they are uniquely suited to establishing residual-stress distributions non-destructively. D1A is particularly suited for through-surface measurements as it does not suffer from the instrumental surface aberrations commonly found on multidetector instruments, while D20 is best for fast internal-strain scanning. Two examples of residual-stress measurements, in a shot-peened material and in a weld, are presented to demonstrate the attractive features of both instruments. (author).
Prediction of interface residue based on the features of residue interaction network.
Jiao, Xiong; Ranganathan, Shoba
2017-11-07
Protein-protein interaction plays a crucial role in cellular biological processes. Interface prediction can improve our understanding of the molecular mechanisms of the related processes and functions. In this work, we propose a classification method to recognize interface residues based on the features of a weighted residue interaction network. The random forest algorithm is used for the prediction, with 16 network parameters and the B-factor acting as the elements of the input feature vector. Compared with other similar work, the method is feasible and effective. The relative importance of these features is also analyzed to identify the key features for the prediction, and some biological meaning of the important features is explained. The results of this work can be used for related work on structure-function relationship analysis via a residue interaction network model. Copyright © 2017 Elsevier Ltd. All rights reserved.
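The feature vector in this approach is assembled from node-level parameters of the weighted residue interaction network. A minimal sketch of computing a few such parameters for one residue (the specific 16 parameters and the random forest classifier itself are not reproduced here; names are illustrative):

```python
def network_features(weights, residue):
    """Compute simple node-level parameters of a weighted residue
    interaction network for one residue: degree, strength (sum of
    incident edge weights), and mean incident weight.
    weights: dict mapping (res_i, res_j) edge -> interaction weight."""
    incident = [w for (i, j), w in weights.items() if residue in (i, j)]
    degree = len(incident)
    strength = sum(incident)
    mean_w = strength / degree if degree else 0.0
    return [degree, strength, mean_w]

# Residue 2 touches two of the three edges in this tiny network.
weights = {(1, 2): 0.5, (2, 3): 1.5, (1, 3): 1.0}
features = network_features(weights, 2)
```

In the full method, vectors like this (plus the B-factor) are computed per residue and fed to a trained random forest that labels each residue as interface or non-interface.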
Designing with residual materials
Walhout, W.; Wever, R.; Blom, E.; Addink-Dölle, L.; Tempelman, E.
2013-01-01
Many entrepreneurial businesses have attempted to create value based on the residual material streams of third parties. Based on ‘waste’ materials they designed products, around which they built their company. Such activities have the potential to yield sustainable products. Many of such companies
International Nuclear Information System (INIS)
Hwang, F-N; Wei, Z-H; Huang, T-M; Wang Weichung
2010-01-01
We develop a parallel Jacobi-Davidson approach for finding a partial set of eigenpairs of large sparse polynomial eigenvalue problems with application in quantum dot simulation. A Jacobi-Davidson eigenvalue solver is implemented based on the Portable, Extensible Toolkit for Scientific Computation (PETSc). The eigensolver thus inherits PETSc's efficient and varied parallel operations, linear solvers, preconditioning schemes, and ease of use. The parallel eigenvalue solver is then used to solve higher degree polynomial eigenvalue problems arising in numerical simulations of three-dimensional quantum dots governed by Schroedinger's equations. We find that the parallel restricted additive Schwarz preconditioner in conjunction with a parallel Krylov subspace method (e.g. GMRES) can solve the correction equations, the most costly step in the Jacobi-Davidson algorithm, very efficiently in parallel. Moreover, the overall performance is quite satisfactory. We have observed nearly perfect superlinear speedup using up to 320 processors. The parallel eigensolver can find all target interior eigenpairs of a quintic polynomial eigenvalue problem with more than 32 million variables within 12 minutes using 272 Intel 3.0 GHz processors.
Identification of residue pairing in interacting β-strands from a predicted residue contact map.
Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng
2018-04-19
Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information of residue pairing in β strands could be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly-β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), and achieves F1-scores of ~62% and ~76% at the residue level and strand level, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves impressively higher performance, with F1-scores reaching ~76% and ~86% at the residue level and strand level, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly-β proteins, the average TM-score achieves 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction by RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact maps. Prediction results of our algorithm could be directly applied to effectively facilitate the practical structure prediction of mainly-β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or the GitHub address of https://github.com/wzmao/RDb2C .
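The ridge-detection idea exploits the fact that antiparallel β-β contacts form anti-diagonal runs in the contact map (residue i pairs with j, i+1 with j-1, and so on). A toy scorer for such ridges (the window length and mean-probability scoring are illustrative; RDb2C's actual ridge detector and multi-stage random forest are omitted):

```python
def antiparallel_ridge_scores(cmap, length):
    """Score every anti-diagonal window of the given length in a square
    contact-probability map: scores[(i, j)] is the mean probability of
    the run (i, j), (i+1, j-1), ..., characteristic of antiparallel
    strand pairing. Parallel pairing would use diagonals instead."""
    n = len(cmap)
    scores = {}
    for i in range(n - length + 1):
        for j in range(length - 1, n):
            vals = [cmap[i + k][j - k] for k in range(length)]
            scores[(i, j)] = sum(vals) / length
    return scores

# A noisy-free toy map with one planted antiparallel ridge:
# residues 1-3 paired with residues 4-2.
cmap = [[0.0] * 6 for _ in range(6)]
cmap[1][4] = cmap[2][3] = cmap[3][2] = 0.9
scores = antiparallel_ridge_scores(cmap, 3)
best = max(scores, key=scores.get)
```

In the real method the ridge responses are features, not final answers: a classifier decides which ridges correspond to genuine strand pairings.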
Lifetime and residual strength of materials
DEFF Research Database (Denmark)
Nielsen, Lauge Fuglsang
1997-01-01
of load amplitude, load average, fractional time under maximum load, and load frequency. The analysis includes prediction of residual strength (re-cycle strength) during the process of load cycling. It is concluded that the number of cycles to failure is a very poor design criterion. It is demonstrated how the theory developed can be generalized also to consider non-harmonic load variations. Algorithms are presented for design purposes which may be suggested as qualified alternatives to the Palmgren-Miner methods normally used in fatigue analysis of materials under arbitrary load variations. Prediction ...
Automatic residue removal for high-NA extreme illumination
Moon, James; Nam, Byong-Sub; Jeong, Joo-Hong; Kong, Dong-Ho; Nam, Byung-Ho; Yim, Dong Gyu
2007-10-01
A persistent problem at smaller nodes has been that, as the device architecture shrinks, the lithography process requires high Numerical Aperture (NA) and extreme illumination systems. This, in turn, creates many lithography problems such as low lithography process margin (Depth of Focus, Exposure Latitude), unstable Critical Dimension (CD) uniformity, restricted guidelines for the device design rule, and so on. For high-NA, extreme illumination such as immersion illumination systems, the restricted design rule due to forbidden pitch is, above all these related problems, the most critical and crucial issue. Forbidden pitch arises from numerous optical effects, but the majority of forbidden-pitch failures consist of photoresist residue, and this residue must be removed to win back some room in an already tight design rule. In this study, we propose an automated algorithm to remove photoresist residue caused by high-NA and extreme illumination conditions. The algorithm automatically self-assembles assist patterns based on the original design layout, thereby ensuring the safety and simplicity of the generated assist patterns with respect to the original design, and removes any resist residue created by extreme illumination conditions. We tested our automated algorithm on a full-chip FLASH memory device and showed the residue removal effect using commercial verification tools as well as on an actual test wafer.
Evaluation of residue-residue contact predictions in CASP9
Monastyrskyy, Bohdan; Fidelis, Krzysztof; Tramontano, Anna; Kryshtafovych, Andriy
2011-01-01
This work presents the results of the assessment of the intramolecular residue-residue contact predictions submitted to CASP9. The methodology for the assessment does not differ from that used in previous CASPs, with two basic evaluation measures
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
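Residual interpolation in general works by forming a tentative estimate from a correlated guide channel, interpolating in the smoother residual domain, and adding the result back. A 1-D toy sketch of that structure (the guided-filter tentative estimate and ARI's adaptive combination and iteration-number selection are omitted):

```python
def residual_interpolation(guide, samples):
    """1-D sketch of residual interpolation. samples holds the target
    signal at even indices and None elsewhere; guide is a fully known,
    correlated channel. Estimate the target from the guide, linearly
    interpolate the (smoother) residuals observed at even positions,
    and add the interpolated residual back to the tentative estimate."""
    n = len(samples)
    # Tentative estimate: use the guide directly (a stand-in for the
    # guided-filter estimate used by actual RI demosaicking).
    tentative = list(guide)
    # Residuals at the observed (even) positions.
    resid = {i: samples[i] - tentative[i] for i in range(0, n, 2)}
    out = list(samples)
    for i in range(1, n, 2):
        left = resid.get(i - 1, 0.0)
        right = resid.get(i + 1, left)
        out[i] = tentative[i] + 0.5 * (left + right)  # linear in residuals
    return out

# When the target differs from the guide by a smooth offset, the residual
# domain is trivial to interpolate and recovery is exact.
guide = [0, 1, 2, 3, 4, 5]
samples = [10, None, 12, None, 14, None]
restored = residual_interpolation(guide, samples)
```

The point of working in the residual domain is visible here: the target itself varies, but its residual against the guide is constant, so even crude interpolation reconstructs it.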
DEFF Research Database (Denmark)
Carbonara, Emanuela; Guerra, Alice; Parisi, Francesco
2016-01-01
Economic models of tort law evaluate the efficiency of liability rules in terms of care and activity levels. A liability regime is optimal when it creates incentives to maximize the value of risky activities net of accident and precaution costs. The allocation of primary and residual liability ... for policy makers and courts in awarding damages in a large number of real-world accident cases.
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm, which we call Bellagio algorithms: randomized algorithms guaranteed to run in expected polynomial time and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems
Directory of Open Access Journals (Sweden)
William La Cruz
2014-05-01
This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear and time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose resolution is an essential step of the square-root balanced truncation algorithm and requires a great computational cost. Numerical experiments are included to highlight the efficacy of the proposed approach.
Residue preference mapping of ligand fragments in the Protein Data Bank.
Wang, Lirong; Xie, Zhaojun; Wipf, Peter; Xie, Xiang-Qun
2011-04-25
The interaction between small molecules and proteins is one of the major concerns for structure-based drug design because the principles of protein-ligand interactions and molecular recognition are not thoroughly understood. Fortunately, the analysis of protein-ligand complexes in the Protein Data Bank (PDB) enables unprecedented possibilities for new insights. Herein, we applied molecule-fragmentation algorithms to split the ligands extracted from PDB crystal structures into small fragments. Subsequently, we have developed a ligand fragment and residue preference mapping (LigFrag-RPM) algorithm to map the profiles of the interactions between these fragments and the 20 proteinogenic amino acid residues. A total of 4032 fragments were generated from 71 798 PDB ligands by a ring cleavage (RC) algorithm. Among these ligand fragments, 315 unique fragments were characterized with the corresponding fragment-residue interaction profiles by counting residues close to these fragments. The interaction profiles revealed that these fragments have specific preferences for certain types of residues. The applications of these interaction profiles were also explored and evaluated in case studies, showing great potential for the study of protein-ligand interactions and drug design. Our studies demonstrated that the fragment-residue interaction profiles generated from the PDB ligand fragments can be used to detect whether these fragments are in their favorable or unfavorable environments. The algorithm for a ligand fragment and residue preference mapping (LigFrag-RPM) developed here also has the potential to guide lead chemistry modifications as well as binding residues predictions.
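The core counting step of such a fragment-residue profile can be sketched as follows. The 4.5 Å contact cutoff and the data layout are illustrative assumptions, not the exact LigFrag-RPM protocol.

```python
import numpy as np
from collections import Counter

def residue_preference_profile(frag_coords, residues, cutoff=4.5):
    """Count, per residue type, the residues having any atom within `cutoff`
    angstroms of any fragment atom. `residues` is a list of (type, atom_coords)
    pairs; the cutoff and layout are illustrative, not the paper's protocol."""
    frag = np.asarray(frag_coords, dtype=float)
    profile = Counter()
    for res_type, atoms in residues:
        atoms = np.asarray(atoms, dtype=float)
        # Pairwise distances between all fragment atoms and all residue atoms.
        dists = np.linalg.norm(frag[:, None, :] - atoms[None, :, :], axis=-1)
        if dists.min() <= cutoff:
            profile[res_type] += 1
    return profile

# Toy example: one fragment atom at the origin, one nearby and one distant residue.
profile = residue_preference_profile(
    [[0.0, 0.0, 0.0]],
    [("ALA", [[1.0, 0.0, 0.0]]), ("TRP", [[10.0, 0.0, 0.0]])],
)
```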
Exploiting residual information in the parameter choice for discrete ill-posed problems
DEFF Research Database (Denmark)
Hansen, Per Christian; Kilmer, Misha E.; Kjeldsen, Rikke Høj
2006-01-01
Most algorithms for choosing the regularization parameter in a discrete ill-posed problem are based on the norm of the residual vector. In this work we propose a different approach, where we seek to use all the information available in the residual vector. We present important relations between...
Machine for compacting solid residues
International Nuclear Information System (INIS)
Herzog, J.
1981-11-01
Machine for compacting solid residues, particularly bulky radioactive residues, consisting of a horizontally actuated punch and a fixed compression anvil, in which the residues are first compacted horizontally and then vertically. Its salient characteristic is that the punch and the compression anvil have embossments on the compression side and interpenetrating plates in the compression position [fr]
Quadratic residues and non-residues selected topics
Wright, Steve
2016-01-01
This book offers an account of the classical theory of quadratic residues and non-residues with the goal of using that theory as a lens through which to view the development of some of the fundamental methods employed in modern elementary, algebraic, and analytic number theory. The first three chapters present some basic facts and the history of quadratic residues and non-residues and discuss various proofs of the Law of Quadratic Reciprocity in depth, with an emphasis on the six proofs that Gauss published. The remaining seven chapters explore some interesting applications of the Law of Quadratic Reciprocity, prove some results concerning the distribution and arithmetic structure of quadratic residues and non-residues, provide a detailed proof of Dirichlet’s Class-Number Formula, and discuss the question of whether quadratic residues are randomly distributed. The text is a valuable resource for graduate and advanced undergraduate students as well as for mathematicians interested in number theory.
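The book's basic objects are easy to compute; a short sketch using Euler's criterion with fast modular exponentiation:

```python
def quadratic_residues(p):
    """Return the sorted nonzero quadratic residues modulo an odd prime p."""
    return sorted({(x * x) % p for x in range(1, p)})

def legendre_symbol(a, p):
    """Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

# For p = 11 there are exactly (p-1)/2 = 5 residues.
print(quadratic_residues(11))       # [1, 3, 4, 5, 9]
print(legendre_symbol(3, 11))       # 1  (3 is a residue mod 11)
print(legendre_symbol(2, 11))       # -1 (2 is a non-residue mod 11)
```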
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Energy Technology Data Exchange (ETDEWEB)
Jungersen, G. [Dansk Teknologisk Inst. (Denmark); Kivaisi, A.; Rubindamayugi, M. [Univ. of Dar es Salaam (Tanzania, United Republic of)
1998-05-01
The main objectives of this report are: To analyse the bioenergy potential of the Tanzanian agro-industries, with special emphasis on the Sisal industry, the largest producer of agro-industrial residues in Tanzania; and to upgrade the human capacity and research potential of the Applied Microbiology Unit at the University of Dar es Salaam, in order to ensure scientific and technological support for future operation and implementation of biogas facilities and anaerobic water treatment systems. The experimental work on sisal residues addresses the following issues: Optimal reactor set-up and performance; Pre-treatment methods for the fibre fraction in order to increase the methane yield; Evaluation of the requirement for nutrient addition; Evaluation of the potential for bioethanol production from sisal bulbs. The processing of sisal leaves into dry fibres (decortication) has traditionally been done by the wet processing method, which consumes considerable quantities of water and produces large quantities of waste water. The Tanzania Sisal Authority (TSA) is now developing a dry decortication method, which consumes less water and produces a waste product with 12-15% TS, which is feasible for treatment in CSTR systems (Continuously Stirred Tank Reactors). (EG)
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
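The recursion mentioned in the abstract can be written down directly: with a single marked state out of N, one Grover iteration (oracle sign flip followed by inversion about the mean) maps the marked amplitude k and the common unmarked amplitude l linearly. A minimal sketch, with N = 64 as an arbitrary illustrative choice:

```python
import math

def grover_amplitudes(N, steps):
    """Exact amplitude recursion for Grover search with one marked state
    out of N, starting from the uniform superposition."""
    k = l = 1.0 / math.sqrt(N)       # initial amplitudes
    history = [(k, l)]
    for _ in range(steps):
        # Oracle flips the marked amplitude; diffusion inverts about the mean.
        k, l = ((N - 2) / N) * k + (2 * (N - 1) / N) * l, \
               ((N - 2) / N) * l - (2 / N) * k
        history.append((k, l))
    return history

N = 64
t_opt = round(math.pi / 4 * math.sqrt(N))   # near-optimal iteration count
hist = grover_amplitudes(N, t_opt)
p_marked = hist[-1][0] ** 2                 # success probability
```

After roughly (π/4)√N iterations the marked-state probability is close to one, while unitarity keeps the total probability k² + (N−1)l² at exactly 1 throughout.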
Deflation of eigenvalues for iterative methods in lattice QCD
International Nuclear Information System (INIS)
Darnell, Dean; Morgan, Ronald B.; Wilcox, Walter
2004-01-01
Work on generalizing the deflated, restarted GMRES algorithm, useful in lattice studies using stochastic noise methods, is reported. We first show how the multi-mass extension of deflated GMRES can be implemented. We then give a deflated GMRES method that can be used on multiple right-hand sides of Aχ = b in an efficient manner. We also discuss and give numerical results on the possibility of combining deflated GMRES for the first right-hand side with a deflated BiCGStab algorithm for the subsequent right-hand sides.
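As a baseline for the methods above, plain restarted GMRES(m) with a modified Gram-Schmidt Arnoldi process can be sketched as follows. This is only the undeflated core; the deflation and multi-mass extensions of the paper are not implemented, and the test matrix is an arbitrary well-conditioned example.

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-10, max_restarts=50):
    """Minimal restarted GMRES(m); no deflation, no preconditioning."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))            # Arnoldi basis
        H = np.zeros((m + 1, m))            # upper Hessenberg matrix
        V[:, 0] = r / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] > 1e-14:
                V[:, j + 1] = w / H[j + 1, j]
        # Solve the small least-squares problem min ||beta*e1 - H y||.
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H, e1, rcond=None)
        x = x + V[:, :m] @ y
    return x

# Illustrative nonsymmetric, diagonally dominant test system.
rng = np.random.default_rng(0)
n = 80
A = np.diag(2.0 + np.arange(n)) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = gmres_restarted(A, b)
```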
Immobilization of acid digestion residue
International Nuclear Information System (INIS)
Greenhalgh, W.O.; Allen, C.R.
1983-01-01
Acid digestion treatment of nuclear waste is similar to incineration processes and results in the bulk of the waste being reduced in volume and weight to some residual solids termed residue. The residue is composed of various dispersible solid materials and typically contains the resultant radioactivity from the waste. This report describes the immobilization of the residue in portland cement, borosilicate glass, and some other waste forms. Diagrams showing the cement and glass vitrification parameters are included in the report, as well as process steps and candidate waste product forms. Cement immobilization is the simplest and probably least expensive option; glass vitrification exhibits the best overall volume reduction ratio.
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Evaluation of residue-residue contact predictions in CASP9
Monastyrskyy, Bohdan
2011-01-01
This work presents the results of the assessment of the intramolecular residue-residue contact predictions submitted to CASP9. The methodology for the assessment does not differ from that used in previous CASPs, with two basic evaluation measures being the precision in recognizing contacts and the difference between the distribution of distances in the subset of predicted contact pairs versus all pairs of residues in the structure. The emphasis is placed on the prediction of long-range contacts (i.e., contacts between residues separated by at least 24 residues along sequence) in target proteins that cannot be easily modeled by homology. Although there is considerable activity in the field, the current analysis reports no discernible progress since CASP8.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
2D-RBUC for efficient parallel compression of residuals
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
Landfilling of waste incineration residues
DEFF Research Database (Denmark)
Christensen, Thomas Højlund; Astrup, Thomas; Cai, Zuansi
2002-01-01
Residues from waste incineration are bottom ashes and air-pollution-control (APC) residues including fly ashes. The leaching of heavy metals and salts from the ashes is substantial and a wide spectrum of leaching tests and corresponding criteria have been introduced to regulate the landfilling...
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From the data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
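The iteration that such a visualization animates can be sketched in a few lines; the damping factor of 0.85 is the conventional choice, and the sketch assumes every page has at least one outgoing link to stay short:

```python
def pagerank(links, d=0.85, tol=1e-10, max_iter=200):
    """Iterate PageRank values until successive differences fall below tol.
    `links` maps each page to the pages it links to (no dangling pages,
    an assumption made to keep the sketch minimal)."""
    pages = sorted(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        # Each page receives a share of the rank of every page linking to it.
        new = {p: (1 - d) / n + d * sum(pr[q] / len(links[q])
                                        for q in pages if p in links[q])
               for p in pages}
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new
    return pr

# A tiny three-page web: C has two in-links and ends up ranked highest.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```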
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, for models such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Statistical inference on residual life
Jeong, Jong-Hyeon
2014-01-01
This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
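The update rule described above can be sketched as follows. The three-level quantization applies only to the input vector in the update term; the filter output itself uses the unquantized input. The threshold, step size, and identification setup are illustrative choices, not the paper's.

```python
import numpy as np

def mclms_step(w, x, d, mu=0.02, threshold=0.5):
    """One weight update of a clipped-LMS-style filter: the input vector is
    quantized to {-1, 0, +1} by threshold clipping in the update term only."""
    e = d - w @ x                                        # error uses the raw input
    q = np.where(np.abs(x) > threshold, np.sign(x), 0.0)  # three-level quantization
    return w + mu * e * q, e

# Identify a known 3-tap filter from noise-free data (illustrative setup).
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])
w = np.zeros(3)
errors = []
for _ in range(5000):
    x = rng.standard_normal(3)
    w, e = mclms_step(w, x, w_true @ x)
    errors.append(abs(e))
```

Because the quantized input replaces multiplications by cheap sign operations in the update, the per-step cost drops while, for a suitable threshold, the weights still converge toward the true filter.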
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rate. We simulated our algorithm in various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
Development of a General Modelling Methodology for Vacuum Residue Hydroconversion
Directory of Open Access Journals (Sweden)
Pereira de Oliveira L.
2013-11-01
Full Text Available This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach makes it possible to overcome the lack of molecular detail of the petroleum fractions and to simulate the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte-Carlo method (kMC) is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte-Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that both approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides high amounts of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Residual stress by repair welds
International Nuclear Information System (INIS)
Mochizuki, Masahito; Toyoda, Masao
2003-01-01
Residual stress by repair welds is computed using thermal elastic-plastic analysis with a phase-transformation effect. Coupled temperature, microstructure, and stress-strain fields are simulated in the finite-element analysis. The weld bond of a plate butt-welded joint is gouged and then deposited with weld metal in the repair process. The heat source is moved synchronously with the deposition of the finite elements representing the weld deposit. Microstructure is considered by using a CCT diagram, and the transformation behavior in the repair weld is also simulated. The effects of initial stress, heat input, and weld length on the residual stress distribution are studied from the results of the numerical analysis. Initial residual stress before the repair weld has no influence on the residual stress after the repair treatment near the weld metal, because the initial stress near the weld metal is released at the high temperature of the repair weld and stress is then regenerated by the repair weld. Heat input affects the residual stress distribution, not in its magnitude but in the extent of the distribution zone. Weld length should be considered to reduce the magnitude of residual stress at the edge of the weld bead; a short bead induces high tensile residual stress. (author)
RESIDUAL RISK ASSESSMENT: ETHYLENE OXIDE ...
This document describes the residual risk assessment for the Ethylene Oxide Commercial Sterilization source category. For stationary sources, section 112(f) of the Clean Air Act requires EPA to assess risks to human health and the environment following implementation of technology-based control standards. If these technology-based control standards do not provide an ample margin of safety, then EPA is required to promulgate additional standards. This document describes the methodology and results of the residual risk assessment performed for the Ethylene Oxide Commercial Sterilization source category. The results of this analysis will assist EPA in determining whether a residual risk rule for this source category is appropriate.
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
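A common special case of such forgetting schemes is recursive least squares with a uniform exponential forgetting factor. The sketch below is illustrative only: it implements uniform forgetting, not the selective (non-uniform in time and space) scheme analysed in the paper, and the identification setup is invented for the demonstration.

```python
import numpy as np

def rls_forgetting(data, n, lam=0.98):
    """Recursive least squares with uniform exponential forgetting factor lam.
    `data` yields (phi, y) pairs; returns the final parameter estimate."""
    theta = np.zeros(n)
    P = 1e3 * np.eye(n)                      # large initial covariance
    for phi, y in data:
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam # discount old information by 1/lam
    return theta

# Noise-free identification of a two-parameter model (illustrative data).
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])
samples = [(phi, phi @ theta_true)
           for phi in rng.standard_normal((300, 2))]
theta_hat = rls_forgetting(samples, 2)
```

Dividing the covariance update by lam < 1 keeps the algorithm alert to parameter changes at the cost of higher noise sensitivity, the basic trade-off that motivates the selective schemes above.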
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in adaptive and adaptable interactive systems, data mining, and similar applications.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
Nitrogen availability of biogas residues
Energy Technology Data Exchange (ETDEWEB)
El-Sayed Fouda, Sara
2011-09-07
The objectives of this study were to characterize biogas residues, either unseparated or separated into a liquid and a solid phase, from the fermentation of different substrates with respect to their N and C content. In addition, short- and long-term effects of the application of these biogas residues on the N availability and N utilization by ryegrass were investigated. It is concluded that unseparated or liquid separated biogas residues provide N at least corresponding to their ammonium content and that after the first fertilizer application the C_org:N_org ratio of the biogas residues was a crucial factor for the N availability. After long-term application, the organic N accumulated in the soil leads to an increased release of N.
Directory of Open Access Journals (Sweden)
Júlio C. U. Coelho
Our objective is to report three patients with recurrent severe upper abdominal pain secondary to a residual gallbladder. All patients had undergone cholecystectomy 1 to 20 years before. The diagnosis was established, after several episodes of severe upper abdominal pain, by imaging exams: ultrasonography, tomography, or endoscopic retrograde cholangiography. Removal of the residual gallbladder led to complete resolution of symptoms. Partial removal of the gallbladder is a very rare cause of postcholecystectomy symptoms.
Residual number processing in dyscalculia ?
Cappelletti, Marinella; Price, Cathy J.
2013-01-01
Developmental dyscalculia – a congenital learning disability in understanding numerical concepts – is typically associated with parietal lobe abnormality. However, people with dyscalculia often retain some residual numerical abilities, reported in studies that otherwise focused on abnormalities in the dyscalculic brain. Here we took a different perspective by focusing on brain regions that support residual number processing in dyscalculia. All participants accurately performed semantic and ca...
Americium recovery from reduction residues
Conner, W.V.; Proctor, S.G.
1973-12-25
A process for separation and recovery of americium values from container or "bomb" reduction residues, comprising dissolving the residues in a suitable acid, adjusting the hydrogen ion concentration to a desired level by adding a base, precipitating the americium as americium oxalate by adding oxalic acid, digesting the solution, separating the precipitate, and thereafter calcining the americium oxalate precipitate to form americium oxide. (Official Gazette)
Algorithm 426 : Merge sort algorithm [M1
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation for requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
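The recursive two-way merge the abstract describes can be sketched in a modern language; this is an illustrative transliteration, not Bron's ALGOL 60 procedure:

```python
def merge_sort(a):
    """Recursively sort a list by two-way merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

The recursion splits until sublists are trivially sorted; correctness follows by induction on the list length, which is what makes the proof in the abstract straightforward.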
Full waveform inversion in the frequency domain using classified time-domain residual wavefields
Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan
2017-04-01
We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain in order of absolute amplitude and then separate them into several groups. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy at the lower frequency bands, whereas the amplitude spectrum obtained with our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Our method therefore helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method describes the sub-salt image better than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm outperforms conventional gradient-based waveform inversion in the frequency domain, especially for the deeper parts of the velocity model.
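The amplitude-ordered grouping step can be sketched as follows. This is a toy single-trace illustration; the group count and layout are assumptions for illustration, not the paper's exact scheme:

```python
def separate_residual(trace, n_groups):
    """Split one residual trace into n_groups time-domain traces, grouped by
    descending absolute amplitude; each output keeps only its group's samples
    (zero elsewhere), so the groups sum back to the original trace."""
    order = sorted(range(len(trace)), key=lambda t: -abs(trace[t]))
    groups = [[0.0] * len(trace) for _ in range(n_groups)]
    size = -(-len(trace) // n_groups)  # ceiling division: samples per group
    for rank, t in enumerate(order):
        groups[rank // size][t] = trace[t]
    return groups
```

Each group would then be Fourier-transformed separately to obtain the frequency-domain residuals used in the gradient computation.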
Evaluation of residue-residue contact prediction in CASP10
Monastyrskyy, Bohdan
2013-08-31
We present the results of the assessment of the intramolecular residue-residue contact predictions from 26 prediction groups participating in the 10th round of the CASP experiment. The most recently developed direct coupling analysis methods did not take part in the experiment likely because they require a very deep sequence alignment not available for any of the 114 CASP10 targets. The performance of contact prediction methods was evaluated with the measures used in previous CASPs (i.e., prediction accuracy and the difference between the distribution of the predicted contacts and that of all pairs of residues in the target protein), as well as new measures, such as the Matthews correlation coefficient, the area under the precision-recall curve and the ranks of the first correctly and incorrectly predicted contact. We also evaluated the ability to detect interdomain contacts and tested whether the difficulty of predicting contacts depends upon the protein length and the depth of the family sequence alignment. The analyses were carried out on the target domains for which structural homologs did not exist or were difficult to identify. The evaluation was performed for all types of contacts (short, medium, and long-range), with emphasis placed on long-range contacts, i.e. those involving residues separated by at least 24 residues along the sequence. The assessment suggests that the best CASP10 contact prediction methods perform at approximately the same level, and comparably to those participating in CASP9.
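The Matthews correlation coefficient used in the assessment can be computed directly from the binary confusion matrix of predicted versus observed contacts. A minimal sketch (the CASP scoring scripts themselves are not reproduced here; the zero-denominator convention is an assumption):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from a binary confusion matrix.
    Returns 0.0 when any marginal is empty (a common convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def is_long_range(i, j, sep=24):
    """Long-range contact: residues at least `sep` apart along the sequence,
    matching the abstract's definition."""
    return abs(i - j) >= sep
```

MCC rewards balanced performance on both contacting and non-contacting pairs, which matters here because non-contacts vastly outnumber contacts.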
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which an organism uses to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. To verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed schemes perform better than, or at least comparably to, the original algorithm in the quality of the solution obtained. However, no single scheme achieves the best solution for all functions. To further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
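The scheme names follow the differential-evolution "x/y/z" convention; under that assumption, the two "rand" donor rules can be sketched as below (the scale factor F and the population layout are illustrative, not taken from the paper):

```python
import random

def ds_rand_1(pop, F):
    """'DS/rand/1': donor = x_r1 + F*(x_r2 - x_r3), with r1, r2, r3 random
    distinct population members."""
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[0]))]

def ds_rand_2(pop, F):
    """'DS/rand/2': donor built from two difference vectors."""
    r = random.sample(range(len(pop)), 5)
    return [pop[r[0]][d]
            + F * (pop[r[1]][d] - pop[r[2]][d])
            + F * (pop[r[3]][d] - pop[r[4]][d])
            for d in range(len(pop[0]))]
```

A composite strategy in the spirit of CDS would pick one of these rules at random for each offspring, each with its own control parameters.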
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz, and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author Affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of leaders in social groups is used as an inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for the two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, much of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.
An improved single sensor parity space algorithm for sequential probability ratio test
Energy Technology Data Exchange (ETDEWEB)
Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.
1995-12-01
In our paper we propose a modification of the single-sensor parity algorithm in order to make the statistical properties of the generated residual determinable in advance. The algorithm is tested via a computer-simulated ramp failure in the temperature readings of the pressurizer. (author).
Residual stresses around Vickers indents
International Nuclear Information System (INIS)
Pajares, A.; Guiberteau, F.; Steinbrech, R.W.
1995-01-01
The residual stresses generated by Vickers indentation in brittle materials, and their changes due to annealing and surface removal, were studied in 4 mol% yttria partially stabilized zirconia (4Y-PSZ). Three experimental methods to gain information about the residual stress field were applied: (i) crack profile measurements based on serial sectioning, (ii) controlled crack propagation in post-indentation bending tests, and (iii) double indentation tests with smaller secondary indents located around a larger primary impression. Three zones of different residual stress behavior are deduced from the experiments. Beneath the impression a crack-free spherical zone of high hydrostatic stresses exists. This core zone is followed by a transition regime where indentation cracks develop but still experience hydrostatic stresses. Finally, in an outward third zone, the crack contour is entirely governed by the tensile residual stress intensity (elastically deformed region). Annealing and surface removal reduce this crack-driving stress intensity. The specific changes of the residual stresses due to the post-indentation treatments are described and discussed in detail for the three zones
Minimization of zirconium chlorinator residues
International Nuclear Information System (INIS)
Green, G.K.; Harbuck, D.D.
1995-01-01
Zirconium chlorinator residues contain an array of rare earths, scandium, unreacted coke, and radioactive thorium and radium. Because of the radioactivity, the residues must be disposed in special waste containment facilities. As these sites become more congested, and with stricter environmental regulations, disposal of large volumes of wastes may become more difficult. To reduce the mass of disposed material, the US Bureau of Mines (USBM) developed technology to recover rare earths, thorium and radium, and unreacted coke from these residues. This technology employs an HCl leach to solubilize over 99% of the scandium and thorium, and over 90% of the rare earths. The leach liquor is processed through several solvent extraction stages to selectively recover scandium, thorium, and rare earths. The leach residue is further leached with an organic acid to solubilize radium, thus allowing unreacted coke to be recycled to the chlorinator. The thorium and radium waste products, which comprise only 2.1% of the original residue mass, can then be sent to the radioactive waste facility
Seismic noise attenuation using an online subspace tracking algorithm
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear-algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; in particular, it outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
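The TSVD baseline mentioned above (keep the leading singular components of the data matrix, discard the rest) can be sketched in pure Python using power iteration with deflation. This is a toy illustration of rank truncation, not the authors' online tracker:

```python
def rank1_approx(M, iters=200):
    """Approximate the dominant singular triplet (s, u, v) of a matrix M
    (list of row lists) via power iteration."""
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    s = 1.0
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]                 # left singular vector
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        s = sum(x * x for x in v) ** 0.5 or 1.0  # singular value estimate
        v = [x / s for x in v]                   # right singular vector
    return s, u, v

def tsvd_denoise(M, rank):
    """Keep the leading `rank` singular components, deflating by subtraction."""
    rows, cols = len(M), len(M[0])
    R = [row[:] for row in M]
    out = [[0.0] * cols for _ in range(rows)]
    for _ in range(rank):
        s, u, v = rank1_approx(R)
        for i in range(rows):
            for j in range(cols):
                comp = s * u[i] * v[j]
                out[i][j] += comp
                R[i][j] -= comp
    return out
```

Recomputing a full SVD on every new data window is what makes this baseline roughly twice as expensive as the online update the abstract advocates.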
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and its application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
An improved algorithm for MFR fragment assembly
International Nuclear Information System (INIS)
Kontaxis, Georg
2012-01-01
A method for generating protein backbone models from backbone-only NMR data is presented, based on molecular fragment replacement (MFR). In a first step, the PDB database is mined for homologous peptide fragments using experimental backbone-only data, i.e. backbone chemical shifts (CS) and residual dipolar couplings (RDC). Second, this fragment library is refined against the experimental restraints. Finally, the fragments are assembled into a protein backbone fold with a rigid-body docking algorithm that uses the RDCs as restraints. For improved performance, backbone nuclear Overhauser effects (NOEs) may be included at that stage. Compared to previous implementations of MFR-derived structure determination protocols, this model-building algorithm offers improved stability and reliability. Furthermore, relative to CS-ROSETTA based methods, it provides faster performance and straightforward implementation, with the option to easily include further types of restraints and additional energy terms.
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The firefly algorithm is one such swarm-based meta-heuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been used successfully for solving various optimization problems. In this work, we propose a new modified version of the firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
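The standard firefly update that MoFA modifies moves each firefly toward every brighter one, with attractiveness decaying with distance. A minimal sketch of one sweep (parameter names follow Yang's original formulation; the values are illustrative, and MoFA's specific modifications are not reproduced here):

```python
import math
import random

def firefly_step(x, brightness, beta0=1.0, gamma=1.0, alpha=0.0):
    """One sweep of the standard firefly move: firefly i steps toward every
    brighter firefly j by beta0*exp(-gamma*r^2), plus a random-walk term
    scaled by alpha."""
    n, dim = len(x), len(x[0])
    new = [xi[:] for xi in x]
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = sum((x[i][d] - x[j][d]) ** 2 for d in range(dim))
                beta = beta0 * math.exp(-gamma * r2)  # attractiveness at distance r
                for d in range(dim):
                    new[i][d] += beta * (x[j][d] - x[i][d]) \
                                 + alpha * (random.random() - 0.5)
    return new
```

With alpha set to zero the sweep is deterministic: the brightest firefly stays put and all others drift toward brighter neighbours.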
Actinide recovery from pyrochemical residues
International Nuclear Information System (INIS)
Avens, L.R.; Clifton, D.G.; Vigil, A.R.
1984-01-01
A new process for recovery of plutonium and americium from pyrochemical waste has been demonstrated. It is based on chloride solution anion exchange at low acidity, which eliminates corrosive HCl fumes. Developmental experiments of the process flowsheet concentrated on molten salt extraction (MSE) residues and gave >95% plutonium and >90% americium recovery. Plutonium is sorbed on the anion exchange column as PuCl6(2-) from high-chloride, low-acid solution. Americium and other metals are washed from the ion exchange column with 1N HNO3-4.8M NaCl. The plutonium is recovered, after elution, via hydroxide precipitation, while the americium is recovered via NaHCO3 precipitation. All filtrates from the process are discardable as low-level contaminated waste. Production-scale experiments are now in progress for MSE residues. Flow sheets for actinide recovery from electrorefining and direct oxide reduction residues are presented and discussed
Actinide recovery from pyrochemical residues
International Nuclear Information System (INIS)
Avens, L.R.; Clifton, D.G.; Vigil, A.R.
1985-05-01
We demonstrated a new process for recovering plutonium and americium from pyrochemical waste. The method is based on chloride solution anion exchange at low acidity, which eliminates corrosive HCl fumes. Developmental experiments of the process flow chart concentrated on molten salt extraction (MSE) residues and gave >95% plutonium and >90% americium recovery. Plutonium is sorbed on the anion exchange column as PuCl6(2-) from high-chloride, low-acid solution. Americium and other metals are washed from the ion exchange column with 1N HNO3-4.8M NaCl. After elution, plutonium is recovered by hydroxide precipitation, and americium is recovered by NaHCO3 precipitation. All filtrates from the process can be discarded as low-level contaminated waste. Production-scale experiments are in progress for MSE residues. Flow charts for actinide recovery from electro-refining and direct oxide reduction residues are presented and discussed
Coking of residue hydroprocessing catalysts
Energy Technology Data Exchange (ETDEWEB)
Gray, M.R.; Zhao, Y.X. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemical Engineering; McKnight, C.A. [Syncrude Canada Ltd., Edmonton, AB (Canada); Komar, D.A.; Carruthers, J.D. [Cytec Industries Inc., Stamford, CT (United States)
1997-11-01
One of the major causes of deactivation of Ni/Mo and Co/Mo sulfide catalysts for hydroprocessing of heavy petroleum and bitumen fractions is coke deposition. The composition and amount of coke deposited on residue hydroprocessing catalysts depend on the composition of the liquid phase of the reactor. In Athabasca bitumen, the high molecular weight components encourage coke deposition at temperatures of 430 to 440 degrees C and hydrogen pressures of 10 to 20 MPa. A study was conducted to determine which components in the heavy residual oil fraction were responsible for coking of catalysts. Seven samples of Athabasca vacuum residue were prepared by supercritical fluid extraction with pentane before being placed in the reactor. Carbon content and hydrodesulfurization activity were measured. It was concluded that the deposition of coke depended on the presence of asphaltenes and not on other compositional variables such as the content of nitrogen, aromatic carbon, or vanadium.
A method for the estimation of the residual error in the SALP approach for fault tree analysis
International Nuclear Information System (INIS)
Astolfi, M.; Contini, S.
1980-01-01
This report illustrates the algorithms implemented in the SALP-MP code for the estimation of the residual error. These algorithms are of more general use: it would be possible to implement them in all codes of the SALP series previously developed, as well as, with minor modifications, in analysis procedures based on 'top-down' approaches. At present, combined 'top-down'-'bottom-up' procedures are being studied in order to take advantage of both approaches for further reduction of computer time and better estimation of the residual error, for which the developed algorithms remain applicable.
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
Predicting the concentration of residual methanol in industrial formalin using machine learning
Heidkamp, William
2016-01-01
In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB computational environment, supplemented with the Statistics and Machine Learning Toolbox from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian Process Regression algorithm was found to pr...
Leaching From Biomass Gasification Residues
DEFF Research Database (Denmark)
Allegrini, Elisa; Boldrin, Alessio; Polletini, A.
2011-01-01
The aim of the present work is to attain an overall characterization of solid residues from biomass gasification. Besides the determination of chemical and physical properties, the work was focused on the study of leaching behaviour. Compliance and pH-dependence leaching tests coupled with geoche...
Carbaryl residues in maize products
International Nuclear Information System (INIS)
Zayed, S.M.A.D.; Mansour, S.A.; Mostafa, I.Y.; Hassan, A.
1976-01-01
The 14C-labelled insecticide carbaryl was synthesized from [1-14C]-1-naphthol at a specific activity of 3.18 mCi/g. Maize plants were treated with the labelled insecticide under simulated conditions of agricultural practice. Mature plants were harvested and studied for the distribution of total residues in untreated grains as popularly roasted and consumed, and in the corn oil and corn germ products. Total residues found under these conditions in the respective products were 0.2, 0.1, 0.45 and 0.16 ppm. (author)
Combinatorial construction of toric residues
Khetan, Amit; Soprounov, Ivan
2004-01-01
The toric residue is a map depending on n+1 semi-ample divisors on a complete toric variety of dimension n. It appears in a variety of contexts such as sparse polynomial systems, mirror symmetry, and GKZ hypergeometric functions. In this paper we investigate the problem of finding an explicit element whose toric residue is equal to one. Such an element is shown to exist if and only if the associated polytopes are essential. We reduce the problem to finding a collection of partitions of the la...
Alternatives to crop residues for soil amendment
Powell, J.M.; Unger, P.W.
1997-01-01
Metadata only record. In semiarid agroecosystems, crop residues can provide important benefits of soil and water conservation, nutrient cycling, and improved subsequent crop yields. However, there are frequently multiple competing uses for residues, including animal forage, fuel, and construction material. This chapter discusses the various uses of crop residues and examines alternative soil amendments when crop residues cannot be left on the soil.
Residual Structures in Latent Growth Curve Modeling
Grimm, Kevin J.; Widaman, Keith F.
2010-01-01
Several alternatives are available for specifying the residual structure in latent growth curve modeling. Two specifications involve uncorrelated residuals and represent the most commonly used residual structures. The first, building on repeated measures analysis of variance and common specifications in multilevel models, forces residual variances…
Computing Decoupled Residuals for Compact Disc Players
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Stoustrup, Jakob; Andersen, Palle
2006-01-01
a pair of residuals generated by a Compact Disc player. However, these residuals depend on the performance of the position servos in the Compact Disc player. In other publications by the same authors a pair of decoupled residuals is derived. However, the computation of these alternative residuals has been...
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.
Algorithmic phase diagrams
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the flavour of neutral B mesons production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about underlying physics process. It reduces the dependence on the performance of lower level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a role similar to that of coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc
International Nuclear Information System (INIS)
Godoy, William F.; Liu Xu
2012-01-01
The present study introduces a parallel Jacobian-free Newton Krylov (JFNK) general minimal residual (GMRES) solution for the discretized radiative transfer equation (RTE) in 3D absorbing, emitting and scattering media. For the angular and spatial discretization of the RTE, the discrete ordinates method (DOM) and the finite volume method (FVM) including flux limiters are employed, respectively. Instead of forming and storing a large Jacobian matrix, JFNK methods allow for large memory savings, as the required Jacobian-vector products are instead approximated by semiexact and numerical formulations, for which convergence and computational times are presented. Parallelization of the GMRES solution is introduced in a combined shared-memory/distributed-memory formulation that takes advantage of the fact that only large vector arrays remain in the JFNK process. Results are presented for 3D test cases including a simple homogeneous, isotropic medium and a more complex non-homogeneous, non-isothermal, absorbing-emitting and anisotropically scattering medium with collimated intensities. Additionally, convergence and stability of Gram-Schmidt and Householder orthogonalizations for the Arnoldi process in the parallel GMRES algorithms are discussed and analyzed. Overall, the introduction of JFNK methods results in a parallel solution that scales to the 2048 processors tested and remains affordable in memory, without compromising the accuracy and convergence of a Newton-like solution.
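The central JFNK idea in the abstract (never form or store the Jacobian; approximate only Jacobian-vector products inside GMRES) can be sketched in a few lines. This Python sketch uses SciPy's GMRES with a forward-difference matvec on a toy nonlinear system; the tolerances and the test function are illustrative stand-ins, not the paper's radiative transfer setup.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-10, max_newton=50, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton step solves J du = -F(u) with
    GMRES, where J @ v is approximated by a forward finite difference, so the
    Jacobian is never formed or stored."""
    u = np.asarray(u0, dtype=float).copy()
    n = u.size
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        def jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = eps / nv                      # scale the perturbation to v
            return (F(u + h * v) - Fu) / h    # numerical Jacobian-vector product
        du, _ = gmres(LinearOperator((n, n), matvec=jv), -Fu)
        u = u + du
    return u

# toy diagonal nonlinear system u_i**3 + u_i = b_i (roots: 1 and 2)
b = np.array([2.0, 10.0])
root = jfnk_solve(lambda u: u**3 + u - b, np.ones(2))
```

Because the linear solves are only approximately accurate, this is an inexact Newton method; the paper's "semiexact" formulations refine the same matvec approximation.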
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex situations. In recent years, RFID technology has found a wide range of applications in many aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new direction pursued by various research institutions and scholars. RFID positioning offers system stability, small error, and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a network-based location method with higher accuracy is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better RFID positioning technology in the future.
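The LANDMARC algorithm mentioned in the abstract locates a tag by comparing its received signal strength (RSS) against reference tags at known positions: the k nearest reference tags in signal space vote with inverse-square weights. A minimal sketch, using a hypothetical two-reader, nine-reference-tag layout and a simple log-distance RSS model (all parameters are illustrative assumptions):

```python
import numpy as np

def landmarc_locate(tag_rss, ref_rss, ref_pos, k=4):
    """LANDMARC: find the k reference tags whose RSS vectors (one reading per
    reader) are closest to the tracked tag's in signal space, then return
    their inverse-square-weighted centroid."""
    E = np.linalg.norm(ref_rss - tag_rss, axis=1)   # signal-space distances
    nearest = np.argsort(E)[:k]
    w = 1.0 / (E[nearest] ** 2 + 1e-12)             # inverse-square weights
    return (w / w.sum()) @ ref_pos[nearest]

# hypothetical layout: 2 readers, 3x3 grid of reference tags, log-distance RSS
readers = np.array([[0.0, 0.0], [4.0, 4.0]])
ref_pos = np.array([[x, y] for x in (0.0, 2.0, 4.0) for y in (0.0, 2.0, 4.0)])
rss = lambda p: -np.log1p(np.linalg.norm(readers - p, axis=1))
ref_rss = np.array([rss(p) for p in ref_pos])
est = landmarc_locate(rss(np.array([1.0, 1.0])), ref_rss, ref_pos)
```

The estimate lands near the true position (1, 1) because the nearest reference tags bracket it; accuracy degrades gracefully as reference-tag density decreases.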
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases. If such a direction is not generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
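The standard firefly algorithm that the paper modifies can be sketched as follows; the unguided random walk of the brightest firefly in the inner loop is exactly the move the authors replace with direction sampling. Parameter values and the sphere test function are illustrative choices, not the paper's settings.

```python
import numpy as np

def firefly_minimize(f, lo, hi, n=20, iters=100, beta0=1.0, gamma=0.01,
                     alpha=0.2, seed=0):
    """Standard firefly algorithm (minimization). Dimmer fireflies move toward
    brighter ones with distance-damped attraction; a firefly with no brighter
    neighbour performs the plain random walk that the paper modifies."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n, lo.size))
    light = np.array([f(p) for p in x])              # lower value = brighter
    best_x, best_f = x[np.argmin(light)].copy(), light.min()
    for _ in range(iters):
        for i in range(n):
            attracted = False
            for j in range(n):
                if light[j] < light[i]:              # j is brighter: move i to j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += (beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                             + alpha * rng.uniform(-0.5, 0.5, lo.size))
                    attracted = True
            if not attracted:                        # brightest: random move
                x[i] += alpha * rng.uniform(-0.5, 0.5, lo.size)
            x[i] = np.clip(x[i], lo, hi)
            light[i] = f(x[i])
        if light.min() < best_f:
            b = np.argmin(light)
            best_x, best_f = x[b].copy(), light[b]
        alpha *= 0.97                                # anneal the random step
    return best_x, best_f

# sphere function on [-5, 5]^2, optimum f = 0 at the origin (illustrative)
xbest, fbest = firefly_minimize(lambda p: np.sum(p**2), [-5.0, -5.0], [5.0, 5.0])
```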
Managing woodwaste: Yield from residue
Energy Technology Data Exchange (ETDEWEB)
Nielson, E. [LNS Services, Inc., North Vancouver, British Columbia (Canada); Rayner, S. [Pacific Waste Energy Inc., Burnaby, British Columbia (Canada)
1993-12-31
Historically, the majority of sawmill waste has been burned or buried for the sole purpose of disposal. In most jurisdictions, environmental legislation will prohibit, or render uneconomic, these practices. Many reports have been prepared to describe the forest industry's residue and its environmental effect; although these help those looking for industry-wide or regional solutions, such as electricity generation, they have limited value for the mill manager, who has the hands-on responsibility for generation and disposal of the waste. If the mill manager can evaluate waste streams and break them down into their usable components, he can find niche market solutions for portions of the plant residue and redirect waste to poor/no-return, rather than disposal-cost, end uses. In the modern mill, residue is collected at the individual machine centre by waste conveyors that combine and mix sawdust, shavings, bark, etc. and send the result to the hog-fuel pile. The mill waste system should be analyzed to determine the measures that can improve the quality of residues and to determine the volumes of any particular category before the mixing mentioned above occurs. After this analysis, the mill may find a niche market for a portion of its woodwaste.
Residual stress in polyethylene pipes
Czech Academy of Sciences Publication Activity Database
Poduška, Jan; Hutař, Pavel; Kučera, J.; Frank, A.; Sadílek, J.; Pinter, G.; Náhlík, Luboš
2016-01-01
Roč. 54, SEP (2016), s. 288-295 ISSN 0142-9418 R&D Projects: GA MŠk LM2015069; GA MŠk(CZ) LQ1601 Institutional support: RVO:68081723 Keywords : polyethylene pipe * residual stress * ring slitting method * lifetime estimation Subject RIV: JL - Materials Fatigue, Friction Mechanics Impact factor: 2.464, year: 2016
Solow Residuals Without Capital Stocks
DEFF Research Database (Denmark)
Burda, Michael C.; Severgnini, Battista
2014-01-01
We use synthetic data generated by a prototypical stochastic growth model to assess the accuracy of the Solow residual (Solow, 1957) as a measure of total factor productivity (TFP) growth when the capital stock in use is measured with error. We propose two alternative measurements based on curren...
Solidification process for sludge residue
International Nuclear Information System (INIS)
Pearce, K.L.
1998-01-01
This report investigates the solidification process used at 100-N Basin to solidify the N Basin sediment and assesses the N Basin process for application to the K Basin sludge residue material. This report also includes a discussion of a solidification process for stabilizing filters. The solidified matrix must be compatible with the Environmental Remediation Disposal Facility acceptance criteria
Leptogenesis and residual CP symmetry
International Nuclear Information System (INIS)
Chen, Peng; Ding, Gui-Jun; King, Stephen F.
2016-01-01
We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.
Library correlation nuclide identification algorithm
International Nuclear Information System (INIS)
Russ, William R.
2007-01-01
A novel nuclide identification algorithm, Library Correlation Nuclide Identification (LibCorNID), is proposed. In addition to the spectrum, LibCorNID requires the standard energy, peak shape and peak efficiency calibrations. Input parameters include tolerances for some expected variations in the calibrations, a minimum relative nuclide peak area threshold, and a correlation threshold. Initially, the measured peak spectrum is obtained as the residual after baseline estimation via peak erosion, removing the continuum. Library nuclides are filtered by examining the possible nuclide peak areas in terms of the measured peak spectrum and applying the specified relative area threshold. Remaining candidates are used to create a set of theoretical peak spectra based on the calibrations and library entries. These candidate spectra are then simultaneously fit to the measured peak spectrum while also optimizing the calibrations within the bounds of the specified tolerances. Each candidate with optimized area still exceeding the area threshold undergoes a correlation test. The normalized Pearson's correlation value is calculated as a comparison of the optimized nuclide peak spectrum to the measured peak spectrum with the other optimized peak spectra subtracted. Those candidates with correlation values that exceed the specified threshold are identified and their optimized activities are output. An evaluation of LibCorNID was conducted to verify identification performance in terms of detection probability and false alarm rate. LibCorNID has been shown to perform well compared to standard peak-based analyses
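The final correlation test of LibCorNID can be illustrated with a toy sketch: correlate each candidate's peak template with the measured peak spectrum after subtracting all other optimized candidates, and accept candidates whose Pearson correlation exceeds the threshold. The Gaussian templates, activities and threshold below are hypothetical, and the baseline-erosion, calibration-optimization and area-threshold stages of the full algorithm are omitted.

```python
import numpy as np

def correlation_id(measured, templates, activities, threshold=0.8):
    """LibCorNID-style final test (sketch): correlate each candidate's peak
    template with the measured peak spectrum minus all *other* optimized
    candidates, keeping only high-correlation candidates."""
    T = np.asarray(templates, float)
    a = np.asarray(activities, float)
    fit = T.T @ a                                   # total fitted peak spectrum
    accepted = []
    for k in range(len(a)):
        others = fit - a[k] * T[k]                  # optimized competitors
        r = np.corrcoef(T[k], measured - others)[0, 1]
        if r > threshold:
            accepted.append(k)
    return accepted

# hypothetical two-nuclide example: Gaussian peak templates on 200 channels
chan = np.arange(200.0)
peak = lambda c: np.exp(-0.5 * ((chan - c) / 3.0) ** 2)
t1, t2 = peak(60.0), peak(140.0)
measured = 5.0 * t1 + 0.01 * np.sin(chan)           # second nuclide is absent
ids = correlation_id(measured, [t1, t2], [5.0, 0.0])
```

Only the first candidate survives: the residual left for the second template contains no matching peak, so its correlation is near zero.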
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Bascially it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less store then the original algorithm. Machine examples with comparative timing are included
Comparison of multihardware parallel implementations for a phase unwrapping algorithm
Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo
2018-04-01
Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is of the parallel Jacobi class with alternated minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can likewise be updated in parallel in the alternate iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as multicore CPU, Xeon Phi coprocessor, and Nvidia graphics processing unit. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparative performance analysis of the developed parallel versions.
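The chessboard scheme described above can be sketched on a model problem. In this hedged sketch a 5-point Laplacian is relaxed with red and black half-sweeps; each half-sweep touches only mutually independent pixels and is written as one vectorized update, which is what makes the scheme parallelizable. The model Poisson system is a stand-in for the accumulation-of-residual-maps cost function.

```python
import numpy as np

def redblack_relax(rhs, iters=500):
    """Chessboard (red-black) relaxation for the 5-point Laplacian with zero
    Dirichlet boundary: cells of one color are mutually independent, so each
    half-sweep could run fully in parallel (here, one vectorized update)."""
    u = np.zeros_like(rhs)
    ii, jj = np.indices(u.shape)
    interior = np.ones(u.shape, bool)
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
    for _ in range(iters):
        for color in (0, 1):                      # red half-sweep, then black
            mask = ((ii + jj) % 2 == color) & interior
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = 0.25 * (nb - rhs)[mask]     # local minimization per cell
    return u

# discrete model problem: 4*u_ij - sum(neighbours) = -rhs_ij, u = 0 on boundary
rhs = np.full((17, 17), -0.01)
u = redblack_relax(rhs)
```

Because the mask-based update writes a whole color class at once, the same kernel maps directly onto CPU threads, Xeon Phi lanes, or a GPU grid.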
Radioactive material in residues of health services
International Nuclear Information System (INIS)
Costa R, A. Jr.; Recio, J.C.
2006-01-01
The work presents the operational actions developed by the Brazilian regulatory body responsible for controlling the use of radioactive material. It deals with the appearance of radioactive material coming from hospitals and clinics with nuclear medicine services: material that is picked up and transported in trucks dedicated to the collection of hospital-origin waste and sent to a health-services waste treatment plant, where it undergoes radiological monitoring before being directed to final disposal in a sanitary landfill in the city of Sao Paulo, Brazil. The appearance of this radioactive material exposes a possible violation of the norms that govern the procedures and practices of that sector in the country. (Author)
Thermal residual stresses in amorphous thermoplastic polymers
Grassia, Luigi; D'Amore, Alberto
2010-06-01
An attempt to calculate the internal stresses in a cylindrically shaped polycarbonate (LEXAN-GE) component, subjected to an arbitrary cooling rate, will be described. The differential volume relaxation arising as a result of the different thermal history suffered by each body point was considered as the primary source of stress build-up [1-3]. A numerical routine was developed accounting for the simultaneous stress and structural relaxation processes and implemented within an Ansys® environment. The volume relaxation kinetics was modeled by coupling the KAHR (Kovacs, Aklonis, Hutchinson, Ramos) phenomenological theory [4] with the linear viscoelastic theory [5-7]. The numerical algorithm translates the specific volume theoretical predictions at each body point into applied non-mechanical loads acting on the component. The viscoelastic functions were obtained from two simple experimental data, namely the linear viscoelastic response in shear and the PVT (pressure volume temperature) behavior. The dimensionless bulk compliance was extracted from PVT data since it coincides with the memory function appearing in the KAHR phenomenological theory [7]. It is shown that the residual stress scales linearly with the logarithm of the Biot number.
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium, following Luhmann's form/medium distinction, where human and machine undergo mutual incursions. In the resulting coupled "extimate" writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Arteaga, Santiago Egido
1998-12-01
linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and the nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor in improving performance. Implementation details on a CM-5 are presented.
Density of primes in l-th power residues
Indian Academy of Sciences (India)
Given a prime number l, a finite set of integers S = {a1, ..., am} and l-th roots of unity ξ1, ..., ξm, we study the distribution of primes p in Q(ζl) such that the l-th residue symbol of ai with respect to p is ξi, for all i. We find out that this is related to the degree of the extension Q(a1^(1/l), ..., am^(1/l))/Q. We give an algorithm ...
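The density statement can be checked numerically for a single integer. For a prime p ≡ 1 (mod l) not dividing a, the l-th residue symbol equals 1 exactly when a^((p-1)/l) ≡ 1 (mod p), and the proportion of such primes should approach 1/l when the relevant extension has full degree l. A sketch for l = 3, a = 2 (the bound 200000 is an arbitrary choice):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    s = bytearray([1]) * (n + 1)
    s[0:2] = b'\x00\x00'
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, f in enumerate(s) if f]

def is_lth_power_residue(a, l, p):
    """For a prime p = 1 (mod l) with p not dividing a: a is an l-th power
    residue mod p iff a**((p-1)/l) = 1 (mod p) (Euler-type criterion)."""
    return pow(a, (p - 1) // l, p) == 1

l, a = 3, 2
ps = [p for p in primes_up_to(200000) if p % l == 1]
frac = sum(is_lth_power_residue(a, l, p) for p in ps) / len(ps)
# frac should approach 1/l = 1/3, matching the degree l of the extension
# Q(zeta_3, 2**(1/3)) / Q(zeta_3)
```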
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
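For a linear system u' = Au, truncating the Taylor series of the exact flow at order N is easy to sketch, which conveys the flavour of the algebraic dynamics algorithm. The harmonic oscillator below illustrates the preservation of dynamical fidelity (the conserved energy) at high order; the choices of N, h and the test model are illustrative, not the paper's twelve test models.

```python
import numpy as np

def taylor_step(A, u, h, N):
    """One step of the N-th order algorithm for u' = A u: truncate the Taylor
    series of the exact flow, u(t+h) = sum_k (h A)^k / k! u(t), at order N."""
    term, out = u.copy(), u.copy()
    for k in range(1, N + 1):
        term = (h / k) * (A @ term)    # builds (hA)^k / k! u recursively
        out = out + term
    return out

# harmonic oscillator u'' = -u as a first-order system; the exact flow is a
# rotation, so the energy u1**2 + u2**2 is conserved by the true dynamics
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u = np.array([1.0, 0.0])
h, steps, N = 0.1, 100, 8              # integrate to t = 10 at 8th order
for _ in range(steps):
    u = taylor_step(A, u, h, N)
```

At N = 8 the local truncation error is O(h^9), so over this integration both the solution and the energy stay correct to near machine precision, in contrast with the slow energy drift of a low-order Runge-Kutta scheme.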
A fast algorithm for 3D azimuthally anisotropic velocity scan
Hu, Jingwei; Fomel, Sergey; Ying, Lexing
2014-11-11
© 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N3logN) as opposed to O(N5) of the straightforward velocity scan, with N being the representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
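The conventional affine projection (AP) algorithm that the proposed method approximates can be sketched as follows. Where the paper replaces the P x P matrix inversion with Gauss-Seidel iterations and adds the weighted-error residue and learning-rate control, this sketch solves the small system exactly and uses a fixed step size; the filter length, projection order and synthetic feedback path are illustrative assumptions.

```python
import numpy as np

def affine_projection(x, d, L=32, P=4, mu=0.2, delta=1e-3):
    """Conventional affine projection adaptive filter: at each step, project
    the weight update onto the span of the P most recent input vectors by
    solving a small regularized P x P system."""
    w = np.zeros(L)
    for n in range(L + P, len(x)):
        # columns are the P most recent length-L input vectors
        X = np.column_stack([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        e = d[n - np.arange(P)] - X.T @ w        # errors over the projection window
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w

# identify a hypothetical 32-tap feedback path from a white-noise input
rng = np.random.default_rng(1)
h = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w = affine_projection(x, d)
```

In the noiseless case the weights converge to the true path; the paper's contribution is keeping this convergence stable when the exact solve is replaced by cheap Gauss-Seidel sweeps and the input is tonal.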
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. The quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time, whereas the price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
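The abstract does not give the ratios' formulas. An illustrative proxy for the quote volatility ratio, assuming it measures the rate of direction reversals in the best quote over a short window (the function name and definition are our assumptions, not the authors'), might be:

```python
def quote_oscillation_ratio(quotes):
    """Fraction of price moves that reverse the previous direction —
    an illustrative proxy for quote-volatility oscillation, not the
    paper's exact measure."""
    flips = 0      # direction reversals
    moves = 0      # nonzero quote changes
    prev_dir = 0
    for a, b in zip(quotes, quotes[1:]):
        if b == a:
            continue
        d = 1 if b > a else -1
        moves += 1
        if prev_dir and d != prev_dir:
            flips += 1
        prev_dir = d
    return flips / moves if moves else 0.0
```

For a rapidly oscillating best ask such as `[10, 11, 10, 11, 10]` the proxy is high (0.75), while a flat or trending quote series scores near zero.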
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Mitrinović, Dragoslav S
1993-01-01
Volume 1, i. e. the monograph The Cauchy Method of Residues - Theory and Applications published by D. Reidel Publishing Company in 1984 is the only book that covers all known applications of the calculus of residues. They range from the theory of equations, theory of numbers, matrix analysis, evaluation of real definite integrals, summation of finite and infinite series, expansions of functions into infinite series and products, ordinary and partial differential equations, mathematical and theoretical physics, to the calculus of finite differences and difference equations. The appearance of Volume 1 was acknowledged by the mathematical community. Favourable reviews and many private communications encouraged the authors to continue their work, the result being the present book, Volume 2, a sequel to Volume 1. We mention that Volume 1 is a revised, extended and updated translation of the book Cauchyjev račun ostataka sa primenama published in Serbian by Naučna knjiga, Belgrade in 1978, whereas the greater part ...
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In tasks of natural language text processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in the text and linking it to an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine-learning approaches is proposed here, according to stated assumptions about the interrelations of named entities within a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is infeasible, so a modification is proposed. An independent solution cannot be built on machine-learning algorithms alone, due to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and throughput of the system.
Calcination/dissolution residue treatment
International Nuclear Information System (INIS)
Knight, R.C.; Creed, R.F.; Patello, G.K.; Hollenberg, G.W.; Buehler, M.F.; O'Rourke, S.M.; Visnapuu, A.; McLaughlin, D.F.
1994-09-01
Currently, high-level wastes are stored underground in steel-lined tanks at the Hanford site. Current plans call for the chemical pretreatment of these wastes before their immobilization in stable glass waste forms. One candidate pretreatment approach, calcination/dissolution, performs an alkaline fusion of the waste and creates a high-level/low-level partition based on the aqueous solubilities of the components of the product calcine. Literature and laboratory studies were conducted with the goal of finding a residue treatment technology that would decrease the quantity of high-level waste glass required following calcination/dissolution waste processing. Four elements, Fe, Ni, Bi, and U, postulated to be present in the high-level residue fraction, were identified as being key to the quantity of high-level glass formed. Laboratory tests of the candidate technologies with simulant high-level residues showed reductive roasting followed by carbonyl volatilization to be successful in removing Fe, Ni, and Bi. Subsequent bench-scale tests on residues from calcination/dissolution processing of genuine Hanford Site tank waste showed Fe was separated with radioelement decontamination factors of 70 to 1,000 times with respect to total alpha activity. Thermodynamic analyses of the calcination of five typical Hanford Site tank waste compositions were also performed. The analyses showed sodium hydroxide to be the sole molten component in the waste calcine and emphasized the requirement for waste blending if fluid calcines are to be achieved. Other calcine phases identified in the thermodynamic analysis indicate the significant thermal reconstitution accomplished in calcination.
Residue management at Rocky Flats
International Nuclear Information System (INIS)
Olencz, J.
1995-01-01
Past plutonium production and manufacturing operations conducted at the Rocky Flats Environmental Technology Site (RFETS) produced a variety of plutonium-contaminated by-product materials. Residues are a category of these materials and were categorized as "materials in-process" to be recovered due to their inherent plutonium concentrations. In 1989 all RFETS plutonium production and manufacturing operations were curtailed. This report describes the management of plutonium-bearing liquid and solid wastes.
MODEL FOR THE CORRECTION OF THE SPECIFIC GRAVITY OF BIODIESEL FROM RESIDUAL OIL
Directory of Open Access Journals (Sweden)
Tatiana Aparecida Rosa da Silva
2013-06-01
Full Text Available Biodiesel is an important fuel with economic, social, and environmental benefits. The production cost of biodiesel can be significantly lowered if the raw material is replaced by an alternative material such as residual oil. In this study, the variation of specific gravity with temperature increase was determined for diesel and for biodiesel from residual oil obtained by homogeneous basic catalysis. All properties analyzed for the biodiesel are within the Brazilian specification. The determination of the correction algorithm for specific gravity as a function of temperature is also presented; the slopes of the lines for diesel fuel, methylic biodiesel (BMR), and ethylic biodiesel (BER) from residual oil were, respectively, -0.7089, -0.7290, and -0.7277. This demonstrates that the model differs between chemically distinct fuels, such as diesel and biodiesels from different sources, indicating the importance of determining a specific algorithm for the conversion of volume to the reference temperature.
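The correction algorithm described above reduces to a straight line with the quoted slopes. A minimal sketch, assuming density in kg/m³, temperature in °C, and a 20 °C reference temperature (none of these units or the reference value are stated in the abstract):

```python
def density_at_reference(rho_obs, t_obs, t_ref=20.0, slope=-0.7290):
    """Linear temperature correction of density (specific gravity).
    slope is d(rho)/dT; -0.7290 is the abstract's value for methylic
    biodiesel (BMR). Units (kg/m3 per degC) and the 20 degC reference
    are assumptions, not given in the abstract."""
    return rho_obs + slope * (t_ref - t_obs)

# A sample measured at 30 degC is corrected back to the 20 degC reference:
rho_ref = density_at_reference(870.0, 30.0)
```

Because the slope is negative, a sample measured above the reference temperature is corrected upward (here 870.0 kg/m³ at 30 °C becomes 877.29 kg/m³ at 20 °C).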
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
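The MCL process on stochastic matrices mentioned above alternates expansion (matrix squaring) with inflation (elementwise powering followed by column renormalization). A minimal sketch; the inflation exponent, iteration count, and the toy graph are illustrative choices, not taken from the thesis:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def normalize_columns(M):
    # Rescale each column to sum to 1 (column-stochastic matrix).
    n = len(M)
    sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    return [[M[i][j] / sums[j] for j in range(n)] for i in range(n)]

def mcl(adj, inflation=2.0, iters=20):
    """Markov Cluster iteration: expansion (squaring) alternated with
    inflation (elementwise power + column renormalization)."""
    M = normalize_columns(adj)
    for _ in range(iters):
        M = matmul(M, M)                                   # expansion
        M = [[v ** inflation for v in row] for row in M]   # inflation
        M = normalize_columns(M)
    return M

# Two 2-cliques joined by a weak edge; self-loops added, as is customary in MCL.
adj = [[1.0, 1.0,  0.0,  0.0],
       [1.0, 1.0,  0.01, 0.0],
       [0.0, 0.01, 1.0,  1.0],
       [0.0, 0.0,  1.0,  1.0]]
M = mcl(adj)
```

On this graph the iteration drives cross-cluster flow to (numerically) zero, so the two cliques emerge as separate clusters.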
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one of Artificial Intelligence approach for problem planning. By learning G52PAS module, I find that it is difficult for students to understand this planning algorithm by just reading its pseudo code and doing some exercise in writing. Students cannot know how each actual step works clearly and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
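A reduction, the first parallel pattern listed above, has a tree-shaped dependency structure: elements are combined pairwise, halving the problem each level. A sketch that makes the pairwise combination explicit (executed sequentially here; in a parallel setting each level's combinations run concurrently):

```python
def tree_reduce(xs, op):
    """Pairwise (tree-shaped) reduction. Each pass combines adjacent
    pairs, so the number of passes is O(log n) — the structure a
    parallel reduction exploits, shown here sequentially."""
    xs = list(xs)
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]
```

The operator must be associative for the tree-shaped order to give the same result as a left-to-right fold.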
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
Residual life management. Maintenance improvement
International Nuclear Information System (INIS)
Sainero Garcia, J.; Hevia Ruperez, F.
1995-01-01
The terms Residual Life Management, Life Cycle Management and Long-Term Management are synonymous with a concept which aims to establish efficient maintenance for the profitable and safe operation of a power plant for as long as possible. A Residual Life Management programme comprises a number of stages, of which Maintenance Evaluation focuses on how power plant maintenance practices allow the mitigation and control of component ageing. With this objective in mind, a methodology has been developed for the analysis of potential degradative phenomena acting on critical components in terms of normal power plant maintenance practices. This methodology applied to maintenance evaluation enables the setting out of a maintenance programme based on the Life Management concept, and the programme's subsequent updating to allow for new techniques and methods. Initial applications have shown that although, in general terms, power plant maintenance is efficient, the way in which Residual Life Management is approached requires changes in maintenance practices. These changes range from modifications to existing inspection and surveillance methods or the establishment of new ones, to the monitoring of trends or the performance of additional studies, the purpose of which is to provide an accurate evaluation of the condition of the installations and the possibility of life extension. (Author)
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced into the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses on the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
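The OLU algorithm itself is not reproduced in the abstract. For contrast with the binding-order issue it addresses, a textbook Robinson-style unification sketch can be given; the term representation (variables as `'?'`-prefixed strings, compound terms as tuples) and the omission of the occurs check are our simplifications:

```python
def walk(t, s):
    """Follow variable bindings in substitution s until a non-bound term."""
    while isinstance(t, str) and t.startswith('?') and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Textbook first-order unification (NOT the paper's OLU).
    Returns a substitution dict, or None on failure.
    Occurs check omitted for brevity."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str) and t1.startswith('?'):
        s[t1] = t2
        return s
    if isinstance(t2, str) and t2.startswith('?'):
        s[t2] = t1
        return s
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# f(?X, b) unifies with f(a, ?Y) under {?X -> a, ?Y -> b}.
result = unify(('f', '?X', 'b'), ('f', 'a', '?Y'))
```

Ordering-based variants such as OLU aim to avoid the repeated binding-order checks this naive scheme performs implicitly through `walk`.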
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
Characterisation and management of concrete grinding residuals.
Kluge, Matt; Gupta, Nautasha; Watts, Ben; Chadik, Paul A; Ferraro, Christopher; Townsend, Timothy G
2018-02-01
Concrete grinding residue is the waste product resulting from the grinding, cutting, and resurfacing of concrete pavement. Potential beneficial applications for concrete grinding residue include use as a soil amendment and as a construction material, including as an additive to Portland cement concrete. Concrete grinding residue exhibits a high pH, and though not hazardous, it is sufficiently elevated that precautions need to be taken around aquatic ecosystems. Best management practices and state regulations focus on reducing the impact on such aquatic environments. Heavy metals are present in concrete grinding residue, but concentrations are of the same magnitude as typically recycled concrete residuals. The chemical composition of concrete grinding residue makes it a useful product for some soil amendment purposes at appropriate land application rates. The presence of unreacted concrete in concrete grinding residue was examined for potential use as partial replacement of cement in new concrete. Testing of Florida concrete grinding residue revealed no dramatic reactivity or improvement in mortar strength.
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.
Polychlorinated Biphenyls (PCB) Residue Effects Database
U.S. Environmental Protection Agency — The PCB Residue Effects (PCBRes) Database was developed to assist scientists and risk assessors in correlating PCB and dioxin-like compound residues with toxic...
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: road network on cities, where we want to navigate between cities. ... Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
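One step of de Casteljau's algorithm, viewed concretely rather than as an operator, is pairwise linear interpolation of adjacent control points; iterating the step down to a single point evaluates the Bézier curve. A minimal 2D sketch:

```python
def de_casteljau(points, t):
    """Evaluate the Bezier curve defined by 2D control points at
    parameter t, by repeated pairwise linear interpolation (one
    de Casteljau step per pass)."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0), evaluated at t = 0.5.
mid = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```

Each pass is itself an affine map of the control polygon, which is the operator view the paper exploits; no Bernstein polynomials are needed.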
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
Iteratively: use the tightness property to remove points of P_1, ..., P_i; use random sampling to get a random sample (of enough points) from the next largest cluster, P_{i+1}; use the random sampling procedure to approximate c_{i+1} using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
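The replica-exchange method reviewed above swaps configurations between replicas at neighbouring temperatures using a Metropolis criterion. A minimal sketch of the standard acceptance probability, min(1, exp[(β_i − β_j)(E_i − E_j)]) for inverse temperatures β and potential energies E (this is the generic textbook form, not a detail specific to this review):

```python
import math

def swap_probability(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at inverse temperatures beta_i, beta_j whose current
    potential energies are E_i, E_j."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    # Guard against overflow: a non-negative exponent always accepts.
    return 1.0 if delta >= 0 else math.exp(delta)
```

When the colder replica (larger β) holds the higher energy, the swap is always accepted, which is what lets trapped low-temperature replicas escape local minima via the high-temperature ones.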
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
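One of the performance metrics above, the centered root mean square error, compares two series after removing each one's mean. A sketch of the generic definition (the benchmark's averaging scales and any station weighting are not reproduced here):

```python
import math

def centered_rmse(est, truth):
    """Centered RMSE: root-mean-square difference between two series
    after subtracting each series' own mean, so a constant offset
    between homogenized and true data is not penalized."""
    me = sum(est) / len(est)
    mt = sum(truth) / len(truth)
    return math.sqrt(sum(((e - me) - (t - mt)) ** 2
                         for e, t in zip(est, truth)) / len(est))
```

A homogenized series that differs from the truth only by a constant shift therefore scores zero, while errors in shape (e.g. spurious breaks or trends) are penalized.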
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
9 CFR 311.39 - Biological residues.
2010-01-01
Carcasses, organs, or other parts of carcasses of livestock shall be condemned if it is determined that they are adulterated because of the presence of any biological residues.
Cycling of grain legume residue nitrogen
DEFF Research Database (Denmark)
Jensen, E.S.
1995-01-01
Symbiotic nitrogen fixation by legumes is the main input of nitrogen in ecological agriculture. The cycling of N-15-labelled mature pea (Pisum sativum L.) residues was studied during three years in small field plots and lysimeters. The residual organic labelled N declined rapidly during the initial...... management methods in order to conserve grain legume residue N sources within the soil-plant system....
Neutron residual stress measurements in linepipe
International Nuclear Information System (INIS)
Law, Michael; Gnaepel-Herold, Thomas; Luzin, Vladimir; Bowie, Graham
2006-01-01
Residual stresses in gas pipelines are generated by manufacturing and construction processes and may affect the subsequent pipe integrity. In the present work, the residual stresses in eight samples of linepipe were measured by neutron diffraction. Residual stresses changed with some coating processes. This has special implications in understanding and mitigating stress corrosion cracking, a major safety and economic problem in some gas pipelines
Natural radioactivity in petroleum residues
International Nuclear Information System (INIS)
Gazineu, M.H.P.; Gazineu, M.H.P.; Hazin, C.A.; Hazin, C.A.
2006-01-01
The oil extraction and production industry generates several types of solid and liquid wastes. Scales, sludge and water are typical residues that can be found in such facilities and that can be contaminated with Naturally Occurring Radioactive Material (NORM). As a result of oil processing, the natural radionuclides can be concentrated in such residues, forming so-called Technologically Enhanced Naturally Occurring Radioactive Material (TENORM). Most of the radionuclides that appear in oil and gas streams belong to the ²³⁸U and ²³²Th natural series, besides ⁴⁰K. The present work was developed to determine the radionuclide content of scales and sludge generated during oil extraction and production operations. Emphasis was given to the quantification of ²²⁶Ra, ²²⁸Ra and ⁴⁰K, since these radionuclides are responsible for most of the external exposure in such facilities. Samples were taken from the PETROBRAS unit in the State of Sergipe, in Northeastern Brazil. They were collected directly from the inner surface of water pipes and storage tanks, or from barrels stored in the waste storage area of the E&P unit. The activity concentrations of ²²⁶Ra, ²²⁸Ra and ⁴⁰K were determined by using an HPGe gamma spectrometric system. The results showed concentrations ranging from 42.7 to 2,110.0 kBq/kg for ²²⁶Ra, 40.5 to 1,550.0 kBq/kg for ²²⁸Ra, and 20.6 to 186.6 kBq/kg for ⁴⁰K. The results highlight the importance of determining the activity concentration of these radionuclides in oil residues before deciding whether they should be stored or discarded to the environment. (authors)
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which indicates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
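As an illustration of the idea of evolving the variation operators themselves, here is a minimal sketch (not the paper's algorithm) in which each individual carries its own mutation rate that is inherited and perturbed along with the genome; the OneMax task, all names and all parameter values are invented for the example:

```python
import random

def self_adaptive_ga(n_bits=12, pop_size=30, generations=120, seed=1):
    """Toy GA on OneMax in which each individual carries its own mutation
    rate, so the variation operator itself evolves -- a much simplified
    stand-in for automatic design of genetic operators."""
    rng = random.Random(seed)
    # individual = (genome, mutation_rate); the rate is an evolvable "operator"
    pop = [([rng.randint(0, 1) for _ in range(n_bits)], rng.uniform(0.01, 0.3))
           for _ in range(pop_size)]
    fitness = lambda g: sum(g)  # OneMax: maximize the number of ones
    best = max(pop, key=lambda ind: fitness(ind[0]))
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)            # binary tournament selection
            parent = a if fitness(a[0]) >= fitness(b[0]) else b
            genome, rate = list(parent[0]), parent[1]
            # first mutate the operator itself, then apply it to the genome
            rate = min(0.5, max(0.005, rate * rng.uniform(0.8, 1.25)))
            genome = [bit ^ (rng.random() < rate) for bit in genome]
            new_pop.append((genome, rate))
        pop = new_pop
        cand = max(pop, key=lambda ind: fitness(ind[0]))
        if fitness(cand[0]) > fitness(best[0]):
            best = cand
    return fitness(best[0]), best[1]

best_fit, best_rate = self_adaptive_ga()
```

The paper's operator space is far richer than a single scalar rate; the sketch only shows the mechanism of searching solution space and operator space simultaneously.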
Residual Liquefaction under Standing Waves
DEFF Research Database (Denmark)
Kirca, V.S. Ozgur; Sumer, B. Mutlu; Fredsøe, Jørgen
2012-01-01
This paper summarizes the results of an experimental study which deals with the residual liquefaction of seabed under standing waves. It is shown that the seabed liquefaction under standing waves, although qualitatively similar, exhibits features different from that caused by progressive waves. … The experimental results show that the buildup of pore-water pressure and the resulting liquefaction first start at the nodal section and spread towards the antinodal section. The number of waves to cause liquefaction at the nodal section appears to be equal to that experienced in progressive waves for the same…
Process to recycle shredder residue
Jody, Bassam J.; Daniels, Edward J.; Bonsignore, Patrick V.
2001-01-01
A system and process for recycling shredder residue, in which any polyurethane foam materials are first separated. A fines fraction of less than about 1/4 inch is then removed, leaving a plastics-rich fraction. Thereafter, the plastics-rich fraction is sequentially contacted with a series of solvents, beginning with one or more of hexane or an alcohol to remove automotive fluids; acetone to remove ABS; one or more of EDC, THF or a ketone having a boiling point of not greater than about 125 °C to remove PVC; and one or more of xylene or toluene to remove polypropylene and polyethylene. The solvents are recovered and recycled.
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
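The parameterized bang-off-bang search can be illustrated with a deliberately reduced one-dimensional sketch; the real Evasion Trajectory Problem involves full 3-D kinematics and the look-up table described above, so the toy dynamics, the grid search and all values below are assumptions for illustration only:

```python
import numpy as np

def min_dv_avoidance(T, a_max, d_safe, n_grid=2000):
    """Toy 1-D version of the parameterized-trajectory search: apply full
    lateral acceleration a_max for a burn time tau, then coast until the
    predicted time of closest approach T; pick the smallest delta-v
    (a_max * tau) whose lateral offset at T meets the safety distance."""
    taus = np.linspace(0.0, T, n_grid)
    # offset at time T after burning for tau then coasting:
    # x(T) = 0.5*a*tau^2 + a*tau*(T - tau)
    offsets = 0.5 * a_max * taus**2 + a_max * taus * (T - taus)
    feasible = np.where(offsets >= d_safe)[0]
    if len(feasible) == 0:
        return None  # cannot avoid within the time horizon
    tau = taus[feasible[0]]          # smallest feasible burn time
    return tau, a_max * tau          # burn duration and delta-v cost

tau, dv = min_dv_avoidance(T=100.0, a_max=0.1, d_safe=50.0)
```

In the flight algorithm this search over trajectory parameters is precomputed offline and stored, so the onboard work reduces to a table look-up indexed by the collision geometry.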
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
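As a concrete example of a partitional method, here is a minimal sketch of Lloyd's k-means algorithm, the canonical representative of the family; the initialization by explicit indices is a simplification to keep the example deterministic (production code would use k-means++ or similar):

```python
import numpy as np

def kmeans(points, k, init_idx, iters=50):
    """Minimal Lloyd's algorithm: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    centers = points[list(init_idx)].astype(float)
    for _ in range(iters):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: move each center to the mean of its cluster
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(40, 2))
blob_b = rng.normal(loc=(10.0, 10.0), scale=0.5, size=(40, 2))
pts = np.vstack([blob_a, blob_b])
centers, labels = kmeans(pts, k=2, init_idx=(0, 79))
```

The consensus, constrained and large-scale variants covered by the book all build on this basic assign-update loop.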
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper, various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection with reconstruction using a fibula graft, and radical resection with reconstruction using a rib graft, are reviewed along with their recurrence rates, together with a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) are concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service provides the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a DAL (Data Access Library) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has quite a complicated structure, so its extraction requires writing special algorithms. Algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Fault Severity Estimation of Rotating Machinery Based on Residual Signals
Directory of Open Access Journals (Sweden)
Fan Jiang
2012-01-01
Full Text Available Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. In the first stage, the fast Fourier transform (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery at different fault severity levels. These features usually differ across working conditions with different fault sensitivities. Therefore, in the second stage, a sensitive-feature-selecting algorithm is defined to construct the feature matrix and calculate the statistical parameter (mean). In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
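A hedged sketch of the first stage, extracting band energies from an FFT spectrum; the paper does not specify its band edges or windowing here, so the equal-width banding and the synthetic signal below are assumptions for illustration:

```python
import numpy as np

def multifrequency_band_energy(signal, n_bands):
    """Split the FFT power spectrum of a vibration signal into equal-width
    frequency bands and return the energy in each band (a simplified MFBE)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    edges = np.linspace(0, len(spectrum), n_bands + 1).astype(int)
    return np.array([spectrum[edges[i]:edges[i + 1]].sum() for i in range(n_bands)])

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# synthetic "fault" vibration: a 120 Hz tone plus mild noise
sig = np.sin(2 * np.pi * 120.0 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
mfbe = multifrequency_band_energy(sig, n_bands=5)
```

With five bands over 0-500 Hz, the 120 Hz fault tone concentrates its energy in the second band; vectors like `mfbe` collected at different severity levels form the feature matrix of the second stage.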
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
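A minimal sketch of the canonical boosting procedure, AdaBoost with one-dimensional decision stumps as the weak "rules of thumb"; the toy data and all parameter choices are illustrative:

```python
import numpy as np

def train_stump(x, y, w):
    """Best 1-D decision stump sign(s*(x - theta)) under sample weights w."""
    best = (None, None, 1.0)  # (theta, s, weighted error)
    for theta in np.concatenate(([x.min() - 1], (x[:-1] + x[1:]) / 2)):
        for s in (+1, -1):
            pred = np.sign(s * (x - theta))
            err = w[pred != y].sum()
            if err < best[2]:
                best = (theta, s, err)
    return best

def adaboost(x, y, rounds=10):
    """Textbook AdaBoost: reweight the data so each new stump focuses on
    the points its predecessors got wrong."""
    w = np.full(len(x), 1.0 / len(x))
    ensemble = []
    for _ in range(rounds):
        theta, s, err = train_stump(x, y, w)
        err = max(err, 1e-12)                     # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)     # the stump's vote weight
        pred = np.sign(s * (x - theta))
        w = w * np.exp(-alpha * y * pred)         # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, theta, s))
    return ensemble

def predict(ensemble, x):
    votes = sum(a * np.sign(s * (x - t)) for a, t, s in ensemble)
    return np.sign(votes)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
model = adaboost(x, y, rounds=5)
acc = float((predict(model, x) == y).mean())
```

The weighted-majority vote of many such stumps is the "highly accurate predictor" the theory is built around.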
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.
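For the single-qubit case the factorization into elementary gates can be sketched concretely: any 2x2 unitary splits into a global phase and three rotations. This is the textbook ZYZ decomposition, shown here as an illustration of what "decomposition into generic elementary gates" means, not necessarily the exact construction of [1]:

```python
import numpy as np

def rz(t): return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
def ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                            [np.sin(t / 2),  np.cos(t / 2)]])

def zyz_decompose(U):
    """Factor a 2x2 unitary as e^{i*alpha} Rz(beta) Ry(gamma) Rz(delta).
    The n-qubit constructions reduce to such one-qubit factors plus CNOTs."""
    alpha = np.angle(np.linalg.det(U)) / 2
    V = np.exp(-1j * alpha) * U            # now det(V) = 1, so V is in SU(2)
    gamma = 2 * np.arctan2(abs(V[1, 0]), abs(V[0, 0]))
    # the phases of the SU(2) entries give the two z-rotation angles
    sum_bd = -2 * np.angle(V[0, 0]) if abs(V[0, 0]) > 1e-12 else 0.0
    dif_bd = 2 * np.angle(V[1, 0]) if abs(V[1, 0]) > 1e-12 else 0.0
    beta, delta = (sum_bd + dif_bd) / 2, (sum_bd - dif_bd) / 2
    return alpha, beta, gamma, delta

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
a, b, g, d = zyz_decompose(H)
reconstructed = np.exp(1j * a) * rz(b) @ ry(g) @ rz(d)
```

For the Hadamard gate this yields alpha = pi/2, beta = 0, gamma = pi/2, delta = pi, and the product reproduces H exactly.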
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
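The windowed finite Fourier transform idea can be sketched as follows; this is only the peak-bin estimate of a single torus frequency from a sampled trajectory, with the accelerated interpolation refinements the paper relies on omitted:

```python
import numpy as np

def dominant_frequency(x, dt):
    """Estimate the dominant frequency of a sampled signal by a
    Hanning-windowed FFT; the taper suppresses the spectral leakage that
    would otherwise bias the peak."""
    w = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * w))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = spec[1:].argmax() + 1                   # skip the DC bin
    return freqs[k]

n, dt, f_true = 1024, 1.0, 0.1234
t = np.arange(n) * dt
x = np.cos(2 * np.pi * f_true * t + 0.5)        # one angle of a quasi-periodic orbit, say
f_est = dominant_frequency(x, dt)
```

The bin spacing 1/(n*dt) limits this raw estimate; the paper's accelerated transform sharpens it well beyond that resolution.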
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high performance applications, which deal with large data sets, have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Residual Stresses In 3013 Containers
International Nuclear Information System (INIS)
Mickalonis, J.; Dunn, K.
2009-01-01
The DOE Complex is packaging plutonium-bearing materials for storage and eventual disposition or disposal. The materials are handled according to the DOE-STD-3013 which outlines general requirements for stabilization, packaging and long-term storage. The storage vessels for the plutonium-bearing materials are termed 3013 containers. Stress corrosion cracking has been identified as a potential container degradation mode and this work determined that the residual stresses in the containers are sufficient to support such cracking. Sections of the 3013 outer, inner, and convenience containers, in both the as-fabricated condition and the closure welded condition, were evaluated per ASTM standard G-36. The standard requires exposure to a boiling magnesium chloride solution, which is an aggressive testing solution. Tests in a less aggressive 40% calcium chloride solution were also conducted. These tests were used to reveal the relative stress corrosion cracking susceptibility of the as fabricated 3013 containers. Significant cracking was observed in all containers in areas near welds and transitions in the container diameter. Stress corrosion cracks developed in both the lid and the body of gas tungsten arc welded and laser closure welded containers. The development of stress corrosion cracks in the as-fabricated and in the closure welded container samples demonstrates that the residual stresses in the 3013 containers are sufficient to support stress corrosion cracking if the environmental conditions inside the containers do not preclude the cracking process.
Residual Fragments after Percutaneous Nephrolithotomy
Directory of Open Access Journals (Sweden)
Kaan Özdedeli
2012-09-01
Full Text Available Clinically insignificant residual fragments (CIRFs) are described as asymptomatic, noninfectious and nonobstructive stone fragments (≤4 mm) remaining in the urinary system after the last session of any intervention (ESWL, URS or PCNL) for urinary stones. Their insignificance is questionable, since CIRFs could eventually become significant: their presence may result in recurrent stone growth, and they may cause pain and infection due to urinary obstruction. They may become the source of persistent infections, and a significant portion of patients will have a stone-related event requiring auxiliary interventions. CT seems to be the assessment method of choice. Although there is no consensus about the timing, recent data suggest that it may be performed one month after the procedure. However, imaging can be done in the immediate postoperative period if there are no tubes blurring the assessment. There is some evidence indicating that selective medical therapy may have an impact on decreasing stone formation rates. Retrograde intrarenal surgery, with its minimally invasive nature, seems to be the best way to deal with residual fragments.
Residual number processing in dyscalculia.
Cappelletti, Marinella; Price, Cathy J
2014-01-01
Developmental dyscalculia - a congenital learning disability in understanding numerical concepts - is typically associated with parietal lobe abnormality. However, people with dyscalculia often retain some residual numerical abilities, reported in studies that otherwise focused on abnormalities in the dyscalculic brain. Here we took a different perspective by focusing on brain regions that support residual number processing in dyscalculia. All participants accurately performed semantic and categorical colour-decision tasks with numerical and non-numerical stimuli, with adults with dyscalculia performing slower than controls in the number semantic tasks only. Structural imaging showed less grey-matter volume in the right parietal cortex in people with dyscalculia relative to controls. Functional MRI showed that accurate number semantic judgements were maintained by parietal and inferior frontal activations that were common to adults with dyscalculia and controls, with higher activation for participants with dyscalculia than controls in the right superior frontal cortex and the left inferior frontal sulcus. Enhanced activation in these frontal areas was driven by people with dyscalculia who made faster rather than slower numerical decisions; however, activation could not be accounted for by response times per se, because it was greater for fast relative to slow dyscalculics but not greater for fast controls relative to slow dyscalculics. In conclusion, our results reveal two frontal brain regions that support efficient number processing in dyscalculia.
Residual number processing in dyscalculia
Directory of Open Access Journals (Sweden)
Marinella Cappelletti
2014-01-01
Full Text Available Developmental dyscalculia – a congenital learning disability in understanding numerical concepts – is typically associated with parietal lobe abnormality. However, people with dyscalculia often retain some residual numerical abilities, reported in studies that otherwise focused on abnormalities in the dyscalculic brain. Here we took a different perspective by focusing on brain regions that support residual number processing in dyscalculia. All participants accurately performed semantic and categorical colour-decision tasks with numerical and non-numerical stimuli, with adults with dyscalculia performing slower than controls in the number semantic tasks only. Structural imaging showed less grey-matter volume in the right parietal cortex in people with dyscalculia relative to controls. Functional MRI showed that accurate number semantic judgements were maintained by parietal and inferior frontal activations that were common to adults with dyscalculia and controls, with higher activation for participants with dyscalculia than controls in the right superior frontal cortex and the left inferior frontal sulcus. Enhanced activation in these frontal areas was driven by people with dyscalculia who made faster rather than slower numerical decisions; however, activation could not be accounted for by response times per se, because it was greater for fast relative to slow dyscalculics but not greater for fast controls relative to slow dyscalculics. In conclusion, our results reveal two frontal brain regions that support efficient number processing in dyscalculia.
Residual number processing in dyscalculia☆
Cappelletti, Marinella; Price, Cathy J.
2013-01-01
Developmental dyscalculia – a congenital learning disability in understanding numerical concepts – is typically associated with parietal lobe abnormality. However, people with dyscalculia often retain some residual numerical abilities, reported in studies that otherwise focused on abnormalities in the dyscalculic brain. Here we took a different perspective by focusing on brain regions that support residual number processing in dyscalculia. All participants accurately performed semantic and categorical colour-decision tasks with numerical and non-numerical stimuli, with adults with dyscalculia performing slower than controls in the number semantic tasks only. Structural imaging showed less grey-matter volume in the right parietal cortex in people with dyscalculia relative to controls. Functional MRI showed that accurate number semantic judgements were maintained by parietal and inferior frontal activations that were common to adults with dyscalculia and controls, with higher activation for participants with dyscalculia than controls in the right superior frontal cortex and the left inferior frontal sulcus. Enhanced activation in these frontal areas was driven by people with dyscalculia who made faster rather than slower numerical decisions; however, activation could not be accounted for by response times per se, because it was greater for fast relative to slow dyscalculics but not greater for fast controls relative to slow dyscalculics. In conclusion, our results reveal two frontal brain regions that support efficient number processing in dyscalculia. PMID:24266008
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
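For contrast with the multi-hypothesis trackers mentioned above, here is a minimal greedy nearest-neighbor tracker with gating; the static prediction (no velocity model), the gate size and the toy scene are all simplifications for illustration, and it is exactly this kind of scheme that degrades as target density grows:

```python
import numpy as np

def nn_track(detections_per_frame, gate=2.0):
    """Greedy nearest-neighbor tracker: each existing track claims the
    closest unclaimed detection within a gating radius."""
    tracks = [[d] for d in detections_per_frame[0]]   # one track per initial detection
    for frame in detections_per_frame[1:]:
        unused = list(range(len(frame)))
        for tr in tracks:
            if not unused:
                break
            pred = tr[-1]                              # static prediction (no velocity model)
            j = min(unused, key=lambda i: np.linalg.norm(frame[i] - pred))
            if np.linalg.norm(frame[j] - pred) <= gate:
                tr.append(frame[j])
                unused.remove(j)
        # unclaimed detections would spawn new tracks in a fuller implementation
    return tracks

# two well-separated targets moving with constant velocity
frames = [np.array([[0.0 + k, 0.0], [100.0 - k, 50.0]]) for k in range(5)]
tracks = nn_track(frames)
```

With closely spaced objects the greedy assignment starts swapping identities, which is precisely what motivates the costlier multi-hypothesis approaches the report evaluates.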
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235 U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235 U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235 U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility
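The practical consequence of a consistently linear relationship is that a calibration line can be fitted from a few well-spaced points and then inverted at assay time. A sketch with invented numbers (not data from this program):

```python
import numpy as np

# Hypothetical calibration data: totals-corrected reals rate vs 235U mass.
# All values are made up for illustration; the point is only that a linear
# model y = a*m + b can be calibrated from a handful of points.
mass = np.array([50.0, 100.0, 200.0, 400.0])             # grams of 235U
corrected_reals = np.array([26.0, 51.0, 101.0, 201.0])   # counts/s, synthetic

a, b = np.polyfit(mass, corrected_reals, deg=1)          # least-squares line

def assay_mass(measured_rate):
    """Invert the calibration line to estimate 235U mass from a measurement."""
    return (measured_rate - b) / a

estimate = assay_mass(151.0)
```

When the relationship is instead a second-order polynomial, as with the standard analysis, more calibration points are needed and the inversion is less certain, which is the advantage claimed for the BNL algorithm.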
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Sun, Weitao
2018-01-01
The global shape of a protein molecule is believed to be dominant in determining low-frequency deformational motions. However, how structure dynamics relies on residue interactions remains largely unknown. The global residue community structure and the local residue interactions are two important coexisting factors imposing significant effects on low-frequency normal modes. In this work, an algorithm for community structure partition is proposed by integrating Miyazawa-Jernigan empirical potential energy as edge weight. A sensitivity parameter is defined to measure the effect of local residue interaction on low-frequency movement. We show that community structure is a more fundamental feature of residue contact networks. Moreover, we surprisingly find that low-frequency normal mode eigenvectors are sensitive to some local critical residue interaction pairs (CRIPs). A fair amount of CRIPs act as bridges and hold distributed structure components into a unified tertiary structure by bonding nearby communities. Community structure analysis and CRIP detection of 116 catalytic proteins reveal that breaking up of a CRIP can cause low-frequency allosteric movement of a residue at the far side of protein structure. The results imply that community structure and CRIP may be the structural basis for low-frequency motions.
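The claimed sensitivity of low-frequency modes to bridging contacts can be illustrated on a toy contact network; the barbell graph, the uniform weights and the use of the graph Laplacian's smallest nonzero eigenvalue as a stand-in for the lowest-frequency normal mode are all assumptions for illustration, not the paper's Miyazawa-Jernigan-weighted construction:

```python
import numpy as np

def low_freq_eigenvalue(W):
    """Smallest nonzero eigenvalue of the weighted graph Laplacian -- a
    crude proxy for a structure's lowest-frequency mode."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigvalsh(L)[1]

def barbell_contacts(weaken=None, factor=1.0):
    """Two fully connected 4-residue communities joined by one bridging
    contact (a toy CRIP); optionally scale one contact's weight."""
    W = np.zeros((8, 8))
    for comm in (range(4), range(4, 8)):
        for i in comm:
            for j in comm:
                if i < j:
                    W[i, j] = W[j, i] = 1.0
    W[3, 4] = W[4, 3] = 1.0                     # the bridge between communities
    if weaken is not None:
        i, j = weaken
        W[i, j] = W[j, i] = W[i, j] * factor
    return W

lam_base = low_freq_eigenvalue(barbell_contacts())
lam_bridge = low_freq_eigenvalue(barbell_contacts(weaken=(3, 4), factor=0.1))
lam_intra = low_freq_eigenvalue(barbell_contacts(weaken=(0, 1), factor=0.1))
sens_bridge = abs(lam_base - lam_bridge)
sens_intra = abs(lam_base - lam_intra)
```

Weakening the bridge collapses the low-frequency eigenvalue, while weakening an ordinary intra-community contact leaves it essentially untouched, mirroring the distinction between CRIPs and ordinary residue contacts.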
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction becomes more and more important as machines become faster and lighter. An efficient sensitivity analysis of the residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration are given and independent of the structural design. Since the excitations produced by an impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
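The Lyapunov-equation simplification mentioned above can be sketched directly: for a stable system xdot = A x with initial state x0, the index J = ∫₀^∞ xᵀQx dt equals x0ᵀPx0, where P solves AᵀP + PA = -Q. A dependency-free sketch using a Kronecker-product solve (a dedicated Lyapunov solver would be used in practice):

```python
import numpy as np

def residual_vibration_index(A, Q, x0):
    """Integrated quadratic index J = x0' P x0 with A'P + PA = -Q, i.e. the
    infinite-horizon energy of the free response from x0."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A'P + PA) = (I kron A' + A' kron I) vec(P); P, Q symmetric so
    # row-major and column-major vectorizations coincide
    K = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
    return float(x0 @ P @ x0)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
Q = np.eye(2)
x0 = np.array([1.0, 0.0])
J = residual_vibration_index(A, Q, x0)     # analytic value here is 1.25
```

The paper's contribution is the sensitivity dJ/dp when x0 itself depends on the design parameters p, which this sketch does not cover.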
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for a train detection algorithm that copes with increased rail noise, as when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
Optimisation of centroiding algorithms for photon event counting imaging
International Nuclear Information System (INIS)
Suhling, K.; Airey, R.W.; Morgan, B.L.
1999-01-01
Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs
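The two centroiding families compared above can be contrasted on a synthetic event; the formulas below are the standard 3-point estimators, with the pulse width and sub-pixel offset invented for the example:

```python
import numpy as np

def cog3(I):
    """3-point centre-of-gravity centroid around the peak pixel
    (returns the sub-pixel offset); biased for non-triangular pulses."""
    return (I[2] - I[0]) / (I[0] + I[1] + I[2])

def gaussian_centroid(I):
    """3-point Gaussian centroid; exact for Gaussian pulses, but the logs
    blow up when a sample is zero or negative -- the centroiding failure
    for small pulses and the DQE loss discussed above."""
    l0, l1, l2 = np.log(I)
    return 0.5 * (l0 - l2) / (l0 - 2 * l1 + l2)

# synthetic photon event: Gaussian pulse centred 0.3 px right of the peak pixel
sigma, x0 = 0.7, 0.3
samples = np.exp(-((np.array([-1.0, 0.0, 1.0]) - x0) ** 2) / (2 * sigma**2))
err_cog = abs(cog3(samples) - x0)
err_gauss = abs(gaussian_centroid(samples) - x0)
```

The systematic centre-of-gravity error, repeated at every pixel, is what prints the fixed-pattern grid on the image; the hybrid schemes in the paper keep the Gaussian estimator's accuracy while falling back to centre of gravity where the Gaussian one fails.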
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
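The abstract does not reproduce the closed-form solution, but the core idea of closed-form trilateration can be sketched in 2-D: subtracting one range equation from the others cancels the quadratic terms and leaves a linear system in the unknown position. This is a toy illustration only; the paper's GPS algorithm works in 3-D from pseudoranges and must also handle the receiver clock bias.

```python
import math

def trilaterate_2d(anchors, ranges):
    """Closed-form 2-D trilateration: subtract the first range equation
    (x-x0)^2 + (y-y0)^2 = r0^2 from the others to get a linear system."""
    (x0, y0), r0 = anchors[0], ranges[0]
    a, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        a.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 + yi**2 - x0**2 - y0**2)
    # solve the resulting 2x2 system by Cramer's rule
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
truth = (1.0, 2.0)
ranges = [math.dist(p, truth) for p in anchors]
print(trilaterate_2d(anchors, ranges))  # ~(1.0, 2.0)
```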
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. The Selfish Gene Algorithm (SFGA) is one of the more recent EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas put forward by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, this paper presents the chronology of its evolution. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, its history and the steps involved in the algorithm are discussed, and its different applications are evaluated together with an analysis of those applications.
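The abstract gives no pseudocode, so the following is only a hedged sketch in the spirit of the SFGA (which is closely related to the compact genetic algorithm): the "population" is a virtual gene pool of allele frequencies, two individuals are sampled from it, and the alleles of the tournament winner are rewarded by increasing their frequencies. The one-max objective and all names and parameter values are our own assumptions.

```python
import random

def sfga_onemax(n_bits=10, steps=3000, reward=0.05, seed=1):
    """Selfish-Gene-style search on one-max (count of 1-bits)."""
    rng = random.Random(seed)
    freq = [0.5] * n_bits  # virtual gene pool: P(allele at locus i is 1)

    def sample():
        return [1 if rng.random() < p else 0 for p in freq]

    for _ in range(steps):
        a, b = sample(), sample()
        winner = a if sum(a) >= sum(b) else b
        for i, allele in enumerate(winner):
            # reward the winner's alleles: nudge frequencies toward them
            target = 1.0 if allele else 0.0
            freq[i] += reward * (target - freq[i])
            freq[i] = min(0.99, max(0.01, freq[i]))  # keep pool diverse
    best = [1 if p > 0.5 else 0 for p in freq]
    return best, freq

best, freq = sfga_onemax()
print(best, sum(best))
```

Unlike a conventional genetic algorithm, no explicit population of individuals is stored; only the allele statistics evolve.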
MORTAR WITH UNSERVICEABLE TIRE RESIDUES
Directory of Open Access Journals (Sweden)
J. A. Canova
2009-01-01
Full Text Available This study analyzes the effects of unserviceable tire residues on rendering mortar using lime and washed sand at a volumetric proportion of 1:6. The ripened composite was dried in an oven and combined with cement at a volumetric proportion of 1:1.5:9 and with rubber powder at aggregate volume proportions of 6, 8, 10, and 12%. Water exudation was evaluated in the plastic state. Water absorption by capillarity, fresh shrinkage and mass loss, restrained shrinkage and mass loss, void content, flexural strength, and deformation energy under compression were evaluated in the hardened state. Water exudation, water absorption by capillarity, and drying shrinkage improved, and void content and flexural strength were reduced. The product studied significantly improved the mortar's water exudation and capillary rise behavior in rendering.
Upgraded wood residue fuels 1995
International Nuclear Information System (INIS)
Vinterbaeck, J.
1995-01-01
The Swedish market for upgraded residue fuels, i.e. briquettes, pellets and wood powder, has developed considerably during the nineties. The additional costs for the upgrading processes are regained and create a surplus in other parts of the system, e.g. in the form of higher combustion efficiencies, lower investment costs for burning equipment, lower operation costs and a diminished environmental impact. All these factors put together have resulted in a rapid growth of this part of the energy sector. In 1994 the production was 1.9 TWh, an increase of 37 % compared to the previous year. In the forthcoming heating season 1995/96 the production may reach 4 TWh. 57 refs, 11 figs, 6 tabs
Forest residues in cattle feed
Directory of Open Access Journals (Sweden)
João Elzeário Castelo Branco Iapichini
2012-12-01
Full Text Available Ruminants are capable of converting low-quality feed when it is complemented with a high-energy source. Using regional agricultural residues makes more economical production systems possible, since energy feeds are a major cost in animal production. Agroforestry activities worldwide produce very abundant residues; if even a small fraction of them were used under appropriate technical criteria, they could largely meet the needs of existing herds and thus the demand for protein of animal origin. The Southwest Region of São Paulo State has a large area occupied by reforestation and wide availability of non-timber forest residues, which may serve as a more concentrated energy feed for ruminant production. This experiment, performed at the Experimental Station of Itapetininga - Forest Institute / SMA in the dry season of 2011, evaluated the acceptability of ground pine cone (20, 30 and 40%) replacing part of the energy feed (corn) in the concentrate. Four crossbred steers, around 18 months old with an average body weight of 250 kg, were housed in a paddock provided with water ad libitum and covered troughs for supplementation with the experimental diet. The animals were adjusted to the diet for 7 days, and consumption levels, physiological changes, acceptability, and physiological parameters were observed during the following 25 days. The concentrate supplement was formulated from corn (76.2%), soybean meal (20%), urea (2%), ammonium sulfate (0.4%), calcite (1.4%), mineral core (1%), and finely ground pine cone replacing corn. The formulas were prepared to be isoproteic/isoenergetic, containing 22% crude protein (CP) and 79% total digestible nutrients (TDN). The animals received the supplement in three steps for each level of cone replaced, being offered in the
Landfill Mining of Shredder Residues
DEFF Research Database (Denmark)
Hansen, Jette Bjerre; Hyks, Jiri; Shabeer Ahmed, Nassera
In Denmark, shredder residues (SR) are classified as hazardous waste, and until January 2012 all SR were landfilled. It is estimated that more than 1.8 million tons of SR have been landfilled in mono cells. This paper describes investigations conducted at two Danish landfills. SR were excavated from the landfills and size-fractionated in order to recover potential resources such as metal and energy and to reduce the amounts of SR left for re-landfilling. Based on the results, it is estimated that 60-70% of the excavated SR could be recovered as materials or energy. Only the fraction with particle size less than 5 mm needs to be re-landfilled, at least until suitable techniques are available for recovering materials with small particle sizes.
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then the self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI is combined to remove false alarms further. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to guarantee accurate target extraction further, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy is backed strongly by experimental data acquired under different environmental conditions.
ABOUT COMPLEX OPERATIONS IN NON-POSITIONAL RESIDUE NUMBER SYSTEM
Directory of Open Access Journals (Sweden)
Yu. D. Polissky
2016-04-01
Full Text Available Purpose. The purpose of this work is the theoretical substantiation of methods for more efficient execution of difficult, so-called non-modular, operations in a non-positional residue number system, for which the operand digits at all positions must be known. Methodology. To achieve this goal, numbers are represented in a system of odd moduli, and the result of the operation is determined by establishing the parity of the operand. The parity is determined by summing, modulo 2, the positional characteristics of the number over all of its moduli. The algorithm for the positional characteristics includes two types of iteration. The first iteration moves from the given number to a smaller number in which the residues for one or more moduli are zero; this is achieved by subtracting the value of one of the residues from all of them. The second iteration moves from this number to a smaller one by excluding the moduli whose residues are zero, dividing the number by the product of these moduli. Iterations are performed until the residues for one, several, or all of the moduli are zero and the other moduli are excluded. The proposed method is distinguished by its simplicity and yields the result of the operation quickly. Findings. Rather simple solutions are obtained for non-modular operations: detecting when the result of adding or subtracting a pair of numbers falls outside the range, comparing pairs of numbers, determining whether a number belongs to a specific half of the range, and determining the parity of numbers represented in a non-positional residue number system. Originality. The work offers new, effective approaches to the non-modular operations of the non-positional residue number system. It seems appropriate to consider these approaches as research directions for enhancing the effectiveness of modular computation. Practical value. The above solutions have high performance and can
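The two iterations described above can be sketched directly on the residue digits, without CRT reconstruction. The sketch below (our reading of the abstract, not the authors' code; it assumes Python 3.8+ for the three-argument `pow` modular inverse) tracks parity using two facts: subtracting c from the number flips its parity iff c is odd, and exact division by an odd modulus preserves parity.

```python
def rns_parity(residues, moduli):
    """Parity of the number represented by `residues` over pairwise
    coprime, odd `moduli`, computed without leaving the residue system."""
    res = dict(zip(moduli, residues))
    parity = 0
    while res:
        if all(v == 0 for v in res.values()):
            break  # the remaining number is zero (even)
        # Iteration 1: subtract one nonzero residue digit from all
        # positions, zeroing at least one of them.
        c = next(v for v in res.values() if v != 0)
        parity ^= c & 1  # subtracting an odd value flips parity
        res = {m: (v - c) % m for m, v in res.items()}
        # Iteration 2: exclude moduli with zero residue by dividing the
        # number by their product (exact, parity-preserving division).
        prod = 1
        for m in [m for m, v in res.items() if v == 0]:
            prod *= m
            del res[m]
        res = {m: (v * pow(prod, -1, m)) % m for m, v in res.items()}
    return parity

moduli = (3, 5, 7)
n = 38
print(rns_parity([n % m for m in moduli], moduli))  # 0 (38 is even)
```

Each pass excludes at least one modulus, so the loop runs at most as many times as there are moduli.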
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
Full Text Available The Opposite Degree algorithm (referred to as the OD algorithm) is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.
Determination of potassium concentration in salt water for residual beta radioactivity measurements
International Nuclear Information System (INIS)
Suarez-Navarro, J.A.; Pujol, Ll.
2004-01-01
High interferences may arise in the determination of potassium concentration in salt water. Several analytical methods were studied to determine which method provided the most accurate measurements of potassium concentration. This study is relevant for radiation protection because the exact amount of potassium in water samples must be known for determinations of residual beta activity concentration. The fitting algorithm of the calibration curve and estimation of uncertainty in potassium determinations were also studied. The reproducibility of the proposed analytical method was tested by internal and external validation. Furthermore, the residual beta activity concentration of several Spanish seawater and brackish river water samples was determined using the proposed method
Residues and duality for projective algebraic varieties
Kunz, Ernst; Dickenstein, Alicia
2008-01-01
This book, which grew out of lectures by E. Kunz for students with a background in algebra and algebraic geometry, develops local and global duality theory in the special case of (possibly singular) algebraic varieties over algebraically closed base fields. It describes duality and residue theorems in terms of Kähler differential forms and their residues. The properties of residues are introduced via local cohomology. Special emphasis is given to the relation between residues and classical results of algebraic geometry and their generalizations. The contribution by A. Dickenstein gives applications of residues and duality to polynomial solutions of constant-coefficient partial differential equations and to problems in interpolation and ideal membership. D. A. Cox explains toric residues and relates them to the earlier text. The book is intended as an introduction to more advanced treatments and further applications of the subject, to which numerous bibliographical hints are given.
Using cotton plant residue to produce briquettes
Energy Technology Data Exchange (ETDEWEB)
Coates, W. [University of Arizona, Tucson, AZ (United States). Bioresources Research Facility
2000-07-01
In Arizona, cotton (Gossypium) plant residue left in the field following harvest must be buried to prevent it from serving as an overwintering site for insects such as the pink bollworm. Most tillage operations employed to incorporate the residue into the soil are energy intensive and often degrade soil structure. Trials showed that cotton plant residue could be incorporated with pecan shells to produce commercially acceptable briquettes. Pecan shell briquettes containing cotton residue rather than waste paper were slightly less durable, when made using equivalent weight mixtures and moisture contents. Proximate and ultimate analyses showed the only difference among briquette samples to be a higher ash content in those made using cotton plant residue. Briquettes made with paper demonstrated longer flame out time, and lower ash percentage, compared to those made with cotton plant residue. (author)
Distribution of residues and primitive roots
Indian Academy of Sciences (India)
Replacing the function f by g, we get the required estimate for N(p, N). Proof of Theorem 1.1. When p = 7, we clearly see that (1, 2) is a consecutive pair of quadratic residues modulo 7. Assume that p ≥ 11. If 10 is a quadratic residue modulo p, then we have (9, 10) as a consecutive pair of quadratic residues modulo p, ...
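The case analysis above hinges on checking whether particular small numbers are quadratic residues modulo p. For an odd prime p this can be checked computationally with Euler's criterion: a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p). A minimal sketch (our illustration, not from the source):

```python
def is_qr(a, p):
    """Euler's criterion: is a a quadratic residue modulo odd prime p?"""
    return pow(a % p, (p - 1) // 2, p) == 1

def consecutive_qr_pair(p):
    """Smallest pair (a, a+1) of consecutive quadratic residues mod p."""
    for a in range(1, p - 1):
        if is_qr(a, p) and is_qr(a + 1, p):
            return (a, a + 1)
    return None

print(consecutive_qr_pair(7))   # (1, 2), as in the proof above
print(consecutive_qr_pair(11))  # (3, 4)
```

Note that 9 = 3^2 is always a quadratic residue, so whenever 10 is also a residue, (9, 10) is a consecutive pair, matching the argument in the excerpt.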
Residual analysis for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Turner, R.; Møller, Jesper
We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Our [...] or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system) work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(nlogn). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and area filters. Examples are presented to demonstrate the validity and superiority on efficiency of this algorithm over the naive algorithm.
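For contrast with the alpha-shape approach, the naive baseline it improves on can be sketched on a 1-D profile: a closing envelope built by grayscale dilation followed by erosion. This sketch (our illustration) uses a flat structuring element of half-width k, giving the O(n·k) cost per point that the abstract calls time consuming; the paper's filters roll a disk/ball instead of a flat element.

```python
import numpy as np

def dilate(z, k):
    """Grayscale dilation of profile z with a flat element of half-width k."""
    return np.array([z[max(0, i - k): i + k + 1].max() for i in range(len(z))])

def erode(z, k):
    """Grayscale erosion, the dual of dilation."""
    return np.array([z[max(0, i - k): i + k + 1].min() for i in range(len(z))])

def closing(z, k):
    """Naive closing envelope: dilation followed by erosion."""
    return erode(dilate(z, k), k)

rng = np.random.default_rng(0)
z = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.standard_normal(200)
env = closing(z, 5)
print(bool((env >= z - 1e-12).all()))  # closing is extensive: True
```

The closing envelope always lies on or above the measured profile, which is why it models the surface seen by a mating part in contact.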
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography. Cryptography secures a file by writing a hidden code that covers the original file; anyone who does not hold the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
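The symmetric half of the scheme, TEA, is small enough to sketch in full. The following is the standard published Tiny Encryption Algorithm (32 cycles, 128-bit key, 64-bit block), not necessarily the exact parameterization used in the paper; the key and plaintext values are illustrative.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # key-schedule constant, derived from the golden ratio

def tea_encrypt(block, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    (four 32-bit words) using 32 rounds of TEA."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1)))) & MASK
        v1 = (v1 + ((((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3)))) & MASK
    return v0, v1

def tea_decrypt(block, key):
    """Invert tea_encrypt by running the rounds in reverse."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3)))) & MASK
        v0 = (v0 - ((((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1)))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
plain = (0xDEADBEEF, 0x0BADF00D)
cipher = tea_encrypt(plain, key)
print(tea_decrypt(cipher, key) == plain)  # True
```

In the hybrid scheme, only this 128-bit TEA key would then be encrypted with the asymmetric LUC algorithm before transmission.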
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection: a goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system with constrained computational resources. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
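The strict-priority selection rule can be sketched as a greedy pass over the goal set in priority order. This is a minimal single-resource illustration (our own simplification; AVA v2 handles multiple shared resources and the VML execution context):

```python
def select_goals(goals, capacity):
    """Strict-priority selection from an oversubscribed goal set.

    goals: list of (priority, resource_cost, name). A higher-priority
    goal is always admitted before any lower-priority goal is
    considered, so it can never be pre-empted by one."""
    chosen, used = [], 0
    for priority, cost, name in sorted(goals, key=lambda g: -g[0]):
        if used + cost <= capacity:
            chosen.append(name)
            used += cost
    return chosen

goals = [(3, 5, "image_target"), (2, 4, "downlink"), (1, 2, "calibration")]
print(select_goals(goals, 8))  # ['image_target', 'calibration']
```

Because the pass is cheap, it can simply be re-run whenever a goal is added, removed, or updated, which is the "just-in-time" re-selection described above.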
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
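Since these quantities are incomputable, the paper approximates them with a real compressor. The sketch below uses zlib and a common conditional-compression heuristic from the compression-distance literature (our illustration, not necessarily the authors' exact estimator): the cross-complexity of x given y is approximated by the extra compressed length needed for x once y has been seen.

```python
import zlib

def c(x):
    """Approximate the Kolmogorov complexity of x by its compressed length."""
    return len(zlib.compress(x, 9))

def cross_c(x, y):
    """Approximate cross-complexity C(x|y): extra bytes needed to code x
    after y (conditional-compression heuristic)."""
    return c(y + x) - c(y)

def relative_c(x, y):
    """Compression-based analogue of relative complexity: the coding
    penalty of describing x via y instead of via itself."""
    return cross_c(x, y) - cross_c(x, x)

x = b"the quick brown fox jumps over the lazy dog " * 20
similar = x[:-10]
unrelated = bytes(range(256)) * 4
print(cross_c(x, similar), cross_c(x, unrelated))
```

A string is cheap to describe in terms of a similar string and expensive in terms of an unrelated one, which is what makes such measures usable for tasks like authorship attribution.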
Carbaryl residues in maize and processed products
International Nuclear Information System (INIS)
Qureshi, M.J.; Sattar, A. Jr.; Naqvi, M.H.
1981-01-01
Carbaryl residues in two local maize varieties were determined using a colorimetric method. No significant differences were observed between the residues of the two varieties, which ranged from 12.0 to 13.75 mg/kg in the crude oil and averaged 1.04 and 0.67 mg/kg in the flour and cake respectively. In whole maize plants, carbaryl residues declined to approximately 2 mg/kg 35 days after treatment. Cooking in aqueous, oil or aqueous-oil media for 30 minutes led to a 63-83% loss of carbaryl residues. (author)
ALFA: an automated line fitting algorithm
Wesson, R.
2016-03-01
I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.
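The optimization step can be illustrated with a toy genetic algorithm fitting a single Gaussian emission line to a synthetic spectrum. This is only a sketch in the spirit of ALFA's optimizer (tournament selection, blend crossover, Gaussian mutation, elitism); ALFA itself fits many lines simultaneously from a line list, and all parameter values here are our own assumptions.

```python
import math
import random

def gaussian_line(wl, amp, cen, wid):
    return [amp * math.exp(-0.5 * ((w - cen) / wid) ** 2) for w in wl]

def rms_residual(params, wl, flux):
    model = gaussian_line(wl, *params)
    return math.sqrt(sum((m - f) ** 2 for m, f in zip(model, flux)) / len(wl))

def ga_fit(wl, flux, bounds, pop_size=60, gens=80, seed=0):
    """Toy GA: evolve (amplitude, centre, width) to minimize the rms residual."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    clip = lambda p: [min(h, max(l, x)) for x, l, h in zip(p, lo, hi)]
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    best = min(pop, key=lambda p: rms_residual(p, wl, flux))
    for _ in range(gens):
        nxt = [best]  # elitism: the incumbent best is never lost
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3),
                        key=lambda p: rms_residual(p, wl, flux))
                    for _ in range(2))            # tournament selection
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            if rng.random() < 0.3:                # mutate one gene
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0.0, 0.1 * (hi[i] - lo[i]))
            nxt.append(clip(child))
        pop = nxt
        best = min(pop, key=lambda p: rms_residual(p, wl, flux))
    return best

wl = [i * 0.05 for i in range(120)]        # wavelength grid
flux = gaussian_line(wl, 5.0, 2.0, 0.5)    # synthetic noiseless line
fit = ga_fit(wl, flux, bounds=[(0, 10), (0, 6), (0.1, 2)])
print(fit, rms_residual(fit, wl, flux))
```

As in ALFA, no initial identification of the line is required; the GA simply searches the parameter space of the synthetic spectrum.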
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-05-08
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
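The temporal high-pass core of such a correction can be sketched as a per-pixel recursive mean that is subtracted from each frame. In the sketch below (our simplification; the paper's contribution is the fuzzy scene classification that controls the correction threshold, which is reduced here to an optional hard threshold), a static ripple fixed-pattern offset decays over the sequence:

```python
import numpy as np

def thp_nuc(frames, time_const=16, threshold=None):
    """Temporal high-pass nonuniformity correction: each pixel's slowly
    varying component (fixed-pattern offset) is tracked by a recursive
    mean and subtracted. If `threshold` is set, large deviations are
    assumed to be moving scene detail and skipped when updating."""
    mean = np.zeros_like(frames[0], dtype=float)
    corrected = []
    for f in frames:
        f = f.astype(float)
        corrected.append(f - mean)
        delta = (f - mean) / time_const
        if threshold is not None:
            delta = np.where(np.abs(f - mean) < threshold, delta, 0.0)
        mean += delta
    return corrected

# static scene with a ripple fixed-pattern offset: the ripple decays
offsets = np.tile([0.0, 1.0, -1.0, 0.5], (4, 8))   # 4x32 ripple pattern
frames = [np.full((4, 32), 100.0) + offsets for _ in range(60)]
out = thp_nuc(frames)
print(bool(np.abs(out[-1]).max() < 0.1 * np.abs(out[0]).max()))  # True
```

The weakness this sketch shares with plain temporal high-pass correction, and which the fuzzy classification addresses, is that on a static scene the scene itself is eventually filtered out along with the nonuniformity.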
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
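For reference, the uninformed baseline mentioned above, depth-first search, can be stated in a few lines on a grid maze (our illustration; the paper's mazes are micro-fluidic networks, not character grids):

```python
def dfs_path(maze, start, goal):
    """Uninformed depth-first search on a grid maze ('#' = wall).
    Returns a wall-free path from start to goal, or None."""
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = ["S..#",
        ".#.#",
        ".#..",
        "...G"]
path = dfs_path(maze, (0, 0), (3, 3))
print(path)
```

DFS commits to one branch until it dead-ends, with no directional memory; the fungal sub-algorithms (collision-induced branching plus directional memory) are precisely what an informed search like A* approximates with its heuristic.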
Process for measuring residual stresses
International Nuclear Information System (INIS)
Elfinger, F.X.; Peiter, A.; Theiner, W.A.; Stuecker, E.
1982-01-01
No single process can at present solve all problems. The complete destructive processes only have a limited field of application, as the component cannot be reused. However, they are essential for the basic determination of stress distributions in the field of research and development. Destructive and non-destructive processes are mainly used if investigations have to be carried out on original components. With increasing component size, the part of destructive tests becomes smaller. The main applications are: quality assurance, testing of manufactured parts and characteristics of components. Among the non-destructive test procedures, X-raying has been developed most. It gives residual stresses on the surface and on surface layers near the edges. Further development is desirable - in assessment - in measuring techniques. Ultrasonic and magnetic crack detection processes are at present mainly used in research and development, and also in quality assurance. Because of the variable depth of penetration and the possibility of automation they are gaining in importance. (orig./RW) [de
Characterization Report on Sand, Slag, and Crucible Residues and on Fluoride Residues
International Nuclear Information System (INIS)
Murray, A.M.
1999-01-01
This paper reports on the chemical characterization of the sand, slag, and crucible (SS and C) residues and the fluoride residues that may be shipped from the Rocky Flats Environmental Technology Site (RFETS) to Savannah River Site (SRS)
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks
Directory of Open Access Journals (Sweden)
Peng Jiang
2017-01-01
In existing event dynamic K-coverage algorithms, each management node selects its assistant nodes with a greedy algorithm that considers neither residual energy nor situations in which one node is selected by several events, which harms network energy consumption and balance. This study therefore proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events those nodes dynamically cover. Third, each management node establishes an optimization model whose objectives are the expected energy consumption, the residual-energy variance of its neighbor nodes, and the detection performance for the events it manages. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm is the first to consider the effect of harsh underwater environments on information collection and transmission, and it accounts for both the residual energy of a node and the possibility of its selection by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
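The final selection step named in the abstract, TOPSIS, can be sketched in a few lines of Python. This is a minimal illustration of the standard TOPSIS procedure, not DEEKA itself; the decision matrix, weights, and criteria below are hypothetical.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix  : rows = alternatives, columns = criteria
    weights : importance of each criterion
    benefit : True if larger is better for that criterion, False for costs
    """
    n_crit = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)           # distance to the ideal point
        d_neg = math.dist(row, worst)           # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# Hypothetical strategies scored on energy cost (lower better) and coverage (higher better).
strategies = [[0.8, 0.90], [0.5, 0.70], [0.9, 0.95]]
scores = topsis(strategies, weights=[0.5, 0.5], benefit=[False, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the highest closeness score is chosen; here the low-energy strategy wins despite its lower coverage, because both criteria are weighted equally.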
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the following two decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura's engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura's interest in the wider conceptual context has led him to explore, with choreographic means, fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
On the Relations between the Attacks on Symmetric Homomorphic Encryption over the Residue Ring
Directory of Open Access Journals (Sweden)
Alina V. Trepacheva
2017-06-01
The paper considers the security of symmetric homomorphic cryptosystems (HC) over the residue ring. The main task is to establish an equivalence between the ciphertext-only attack (COA) and the known-plaintext attack (KPA) for HC. For this purpose, the notion of reducibility between attacks and a sufficient condition for reducibility from COA to KPA are given. The main idea is this: to prove reducibility from COA to KPA, we need to find a function over the residue ring that is efficiently computable and has a small image size compared with the size of the residue ring. The study of the existence of such reductions is important because it gives a better understanding of the security level of the symmetric HC proposed in the literature. A vulnerability to KPA has already been found for the majority of these HC, so the presence of a reduction would demonstrate that the cryptosystems under study are not secure even against COA, and therefore are totally insecure and should not be used in practice. We give an example of reducibility from COA to KPA for a residue ring that is a prime field. Based on this example we show an efficient COA on one symmetric HC over a small field. We also separately consider the case of a residue ring modulo a number n that is hard to factor. For such n, an efficient algorithm for constructing an efficiently computable function with a small image is so far unknown, so further work on the cryptanalysis of existing symmetric HC should be directed at the study of properties of functions over residue rings modulo hard-to-factor numbers.
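For the prime-field case, a standard example of an efficiently computable function with a tiny image is the quadratic-residue indicator given by Euler's criterion. This is a textbook illustration of the kind of function the abstract describes, not necessarily the construction used in the paper itself:

```python
p = 1009  # a prime, so the residue ring Z_p is a field

def qr_indicator(x, p):
    # Euler's criterion: x^((p-1)/2) mod p equals 0 for x = 0, 1 if x is a
    # quadratic residue, and p-1 otherwise. Fast modular exponentiation
    # makes this computable in O(log p) multiplications.
    return pow(x, (p - 1) // 2, p)

# The image over the whole field contains only three values out of p.
image = {qr_indicator(x, p) for x in range(p)}
```

The gap between the image size (3) and the ring size (p) is exactly the "small image" property the reduction requires; for a composite modulus with unknown factorization, no analogous efficiently computable function is known.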
Residuals Management and Water Pollution Control Planning.
Environmental Protection Agency, Washington, DC. Office of Public Affairs.
This pamphlet addresses the problems associated with residuals and water quality especially as it relates to the National Water Pollution Control Program. The types of residuals and appropriate management systems are discussed. Additionally, one section is devoted to the role of citizen participation in developing management programs. (CS)
Tank 12H residuals sample analysis report
Energy Technology Data Exchange (ETDEWEB)
Oji, L. N. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Shine, E. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Diprete, D. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hay, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-06-11
The Savannah River National Laboratory (SRNL) was requested by Savannah River Remediation (SRR) to provide sample preparation and analysis of the Tank 12H final characterization samples to determine the residual tank inventory prior to grouting. Eleven Tank 12H floor and mound residual material samples and three cooling coil scrape samples were collected and delivered to SRNL between May and August of 2014.
Soil water evaporation and crop residues
Crop residues have value when left in the field and also when removed from the field and sold as a commodity. Reducing soil water evaporation (E) is one of the benefits of leaving crop residues in place. E was measured beneath a corn canopy at the soil surface with nearly full coverage by corn stover...
Densification of FL Chains via Residuated Frames
Czech Academy of Sciences Publication Activity Database
Baldi, Paolo; Terui, K.
2016-01-01
Roč. 75, č. 2 (2016), s. 169-195 ISSN 0002-5240 R&D Projects: GA ČR GAP202/10/1826 Keywords : densifiability * standard completeness * residuated lattices * residuated frames * fuzzy logic Subject RIV: BA - General Mathematics Impact factor: 0.625, year: 2016
Does Bt Corn Really Produce Tougher Residues
Bt corn hybrids produce insecticidal proteins that are derived from a bacterium, Bacillus thuringiensis. There have been concerns that Bt corn hybrids produce residues that are relatively resistant to decomposition. We conducted four experiments that examined the decomposition of corn residues und...
Semantic Tagging with Deep Residual Networks
Bjerva, Johannes; Plank, Barbara; Bos, Johan
2016-01-01
We propose a novel semantic tagging task, semtagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate
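The residual idea underlying ResNets, and by extension the tagger's residual bypass, can be sketched without any deep-learning framework: a block computes a correction F(x) and adds it back onto its input, so identity information bypasses the layer. The dimensions and weight scale below are arbitrary illustrations, not the paper's architecture:

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, W, b):
    # Dense layer: one output per weight row.
    return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]

def residual_block(x, W, b):
    # y = x + F(x): the block learns only a residual correction, so with
    # near-zero weights it degenerates gracefully to the identity map.
    return [xi + fi for xi, fi in zip(x, relu(linear(x, W, b)))]

random.seed(0)
d = 4
W = [[random.gauss(0, 0.01) for _ in range(d)] for _ in range(d)]
b = [0.0] * d
x = [1.0, -2.0, 0.5, 3.0]
y = residual_block(x, W, b)
```

With small initial weights the output stays close to the input, which is what makes very deep stacks of such blocks trainable.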
Cement production from coal conversion residues
International Nuclear Information System (INIS)
Brown, L.D.; Clavenna, L.R.; Eakman, J.M.; Nahas, N.C.
1981-01-01
Cement is produced by feeding residue solids, which contain carbonaceous material and ash constituents obtained from converting a carbonaceous feed material into liquids and/or gases, into a cement-making zone and burning the carbon in the residue solids to supply at least a portion of the energy required to convert the solids into cement.
Residual stress concerns in containment analysis
International Nuclear Information System (INIS)
Costantini, F.; Kulak, R. F.; Pfeiffer, P. A.
1997-01-01
The manufacturing of steel containment vessels starts with the forming of flat plates into curved plates. A steel containment structure is made by welding individual plates together to form the sections that make up the complex-shaped vessels. The metal forming and welding processes leave residual stresses in the vessel walls. Generally, the effect of metal-forming residual stresses can be reduced or virtually eliminated by thermally stress-relieving the vessel. In large containment vessels this may not be practical, and thus the residual stresses due to manufacturing may become important. The residual stresses could possibly affect the response of the vessel to internal pressurization. When the level of residual stresses is significant, it will affect the vessel's response, for instance the yielding pressure and possibly the failure pressure. The paper addresses the effect of metal-forming residual stresses on the response of a generic pressure vessel to internal pressurization. A scoping analysis investigated the effect of residual forming stresses on the response of an internally pressurized vessel. A simple model was developed to gain understanding of the mechanics of the problem. Residual stresses due to the welding process were not considered in this investigation.
Electrodialytic remediation of air pollution control residues
DEFF Research Database (Denmark)
Jensen, Pernille Erland
Air pollution control (APC) residue from municipal solid waste incineration (MSWI) consists of the fly ash, and, in dry and semi-dry systems, also the reaction products from the flue gas cleaning process. APC residue is considered a hazardous waste due to its high alkalinity, high content of salt...
Multisensor data fusion algorithm development
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
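The wavelet-fusion principle described above, merging low-frequency content while keeping the stronger detail coefficients, can be sketched in one dimension. The report's algorithm operated on 2-D imagery with a 2-D discrete wavelet transform; this single-level 1-D Haar version is only illustrative:

```python
def haar_dwt(s):
    # One-level Haar transform: pairwise averages (approximation)
    # and pairwise half-differences (detail). len(s) must be even.
    a = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
    d = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
    return a, d

def haar_idwt(a, d):
    # Exact inverse: s[2i] = a[i] + d[i], s[2i+1] = a[i] - d[i].
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(s1, s2):
    a1, d1 = haar_dwt(s1)
    a2, d2 = haar_dwt(s2)
    # Average the low-frequency content; keep the stronger detail coefficient,
    # which preserves edges from whichever source has them.
    a = [(x + y) / 2 for x, y in zip(a1, a2)]
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return haar_idwt(a, d)

fused = fuse([1.0, 1.0, 8.0, 0.0], [1.0, 1.0, 1.0, 1.0])
```

The fused signal inherits the sharp transition from the first input while averaging the smooth background, which is the mechanism behind the spectral/spatial preservation reported above.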
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Recipe for residual oil saturation determination
Energy Technology Data Exchange (ETDEWEB)
Guillory, A.J.; Kidwell, C.M.
1979-01-01
In 1978, Shell Oil Co., in conjunction with the US Department of Energy, conducted a residual oil saturation study in a deep, hot, high-pressure Gulf Coast reservoir. The work was conducted prior to initiation of a CO₂ tertiary recovery pilot. Many problems had to be resolved prior to and during the residual oil saturation determination. The problems confronted are outlined such that the procedure can be used much like a cookbook in designing future studies in similar reservoirs. Primary discussion centers around planning and results of a log-inject-log operation used as the prime method to determine the residual oil saturation. Several independent methods were used to calculate the residual oil saturation in the subject well in an interval between 12,910 ft (3935 m) and 12,920 ft (3938 m). In general, these numbers were in good agreement and indicated a residual oil saturation between 22% and 24%. 10 references.
Harvesting and handling agricultural residues for energy
Energy Technology Data Exchange (ETDEWEB)
Jenkins, B.M.; Summer, H.R.
1986-05-01
Significant progress in understanding the needs for design of agricultural residue collection and handling systems has been made but additional research is required. Recommendations are made for research to (a) integrate residue collection and handling systems into general agricultural practices through the development of multi-use equipment and total harvest systems; (b) improve methods for routine evaluation of agricultural residue resources, possibly through remote sensing and image processing; (c) analyze biomass properties to obtain detailed data relevant to engineering design and analysis; (d) evaluate long-term environmental, social, and agronomic impacts of residue collection; (e) develop improved equipment with higher capacities to reduce residue collection and handling costs, with emphasis on optimal design of complete systems including collection, transportation, processing, storage, and utilization; and (f) produce standard forms of biomass fuels or products to enhance material handling and expand biomass markets through improved reliability and automatic control of biomass conversion and other utilization systems. 118 references.
Computational prediction of protein hot spot residues.
Morrow, John Kenneth; Zhang, Shuxing
2012-01-01
Most biological processes involve multiple proteins interacting with each other. It has been recently discovered that certain residues in these protein-protein interactions, which are called hot spots, contribute more significantly to binding affinity than others. Hot spot residues have unique and diverse energetic properties that make them challenging yet important targets in the modulation of protein-protein complexes. Design of therapeutic agents that interact with hot spot residues has proven to be a valid methodology in disrupting unwanted protein-protein interactions. Using biological methods to determine which residues are hot spots can be costly and time consuming. Recent advances in computational approaches to predict hot spots have incorporated a myriad of features, and have shown increasing predictive successes. Here we review the state of knowledge around protein-protein interactions, hot spots, and give an overview of multiple in silico prediction techniques of hot spot residues.
Computational Prediction of Hot Spot Residues
Morrow, John Kenneth; Zhang, Shuxing
2013-01-01
Most biological processes involve multiple proteins interacting with each other. It has been recently discovered that certain residues in these protein-protein interactions, which are called hot spots, contribute more significantly to binding affinity than others. Hot spot residues have unique and diverse energetic properties that make them challenging yet important targets in the modulation of protein-protein complexes. Design of therapeutic agents that interact with hot spot residues has proven to be a valid methodology in disrupting unwanted protein-protein interactions. Using biological methods to determine which residues are hot spots can be costly and time consuming. Recent advances in computational approaches to predict hot spots have incorporated a myriad of features, and have shown increasing predictive successes. Here we review the state of knowledge around protein-protein interactions, hot spots, and give an overview of multiple in silico prediction techniques of hot spot residues. PMID:22316154
Mao-Gilles Stabilization Algorithm
Jérôme Gilles
2013-01-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
One improved LSB steganography algorithm
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in a digital image with the plain LSB algorithm is easily detected with high accuracy by χ² (chi-square) and RS steganalysis. Starting from the selection of the embedding locations and a modification of the embedding method, and combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
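For reference, the baseline that χ² and RS steganalysis target is plain sequential LSB replacement, which is trivial to state. This sketch shows only that baseline; the paper's improvements (sub-affine location selection and matrix coding) are not reproduced here:

```python
def embed_lsb(pixels, bits):
    # Replace the least significant bit of each leading pixel with a message bit.
    # Each pixel value changes by at most 1, which is visually imperceptible
    # but leaves the statistical traces that steganalysis exploits.
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n):
    # Recover the first n message bits from the pixel LSBs.
    return [p & 1 for p in pixels[:n]]

cover = [128, 64, 255, 0, 37, 200, 91, 14]   # toy 8-pixel grayscale cover
msg = [1, 0, 1, 1, 0, 1]
stego = embed_lsb(cover, msg)
```

Because embedding is sequential and value-flipping, pairs of values (2k, 2k+1) equalize in frequency, which is precisely the regularity the χ² attack measures; scattering the embedding locations, as the improved algorithm does, removes that regularity.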
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Graph Algorithm Animation with Grrr
Rodgers, Peter; Vidal, Natalia
2000-01-01
We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
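An enumeration algorithm in the spirit of the one described can be sketched as a depth-first walk over the Hasse diagram, extending chains from minimal elements until a maximal element is reached. The poset representation and example below are illustrative (the paper itself gives Algol realizations):

```python
def maximal_chains(covers):
    """Enumerate all maximal chains of a finite poset.

    covers[x] lists the elements that cover x, i.e. its immediate
    successors in the Hasse diagram.
    """
    elements = set(covers) | {y for ys in covers.values() for y in ys}
    covered = {y for ys in covers.values() for y in ys}
    minimal = elements - covered        # every maximal chain starts at a minimal element
    chains = []

    def extend(chain):
        succ = covers.get(chain[-1], [])
        if not succ:                    # reached a maximal element: chain is maximal
            chains.append(list(chain))
            return
        for y in succ:
            extend(chain + [y])

    for m in sorted(minimal):
        extend([m])
    return chains

# Hasse diagram of the divisors {1, 2, 3, 4, 6, 12} of 12 ordered by divisibility:
covers = {1: [2, 3], 2: [4, 6], 3: [6], 4: [12], 6: [12]}
chains = maximal_chains(covers)
```

For this poset the walk yields the three maximal chains 1-2-4-12, 1-2-6-12, and 1-3-6-12; counting them without enumeration is the combinatorial question addressed in the paper.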
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper surveys typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two classes of routing algorithms are examined, namely clustering routing algorithms and conventional (non-clustering) routing algorithms, and the advantages, disadvantages, and applicability of each class are discussed.
Using In Silico Fragmentation to Improve Routine Residue Screening in Complex Matrices
Kaufmann, Anton; Butcher, Patrick; Maden, Kathryn; Walker, Stephan; Widmer, Mirjam
2017-12-01
Targeted residue screening requires the use of reference substances in order to identify potential residues. This becomes a difficult issue when using multi-residue methods capable of analyzing several hundreds of analytes. Therefore, the capability of in silico fragmentation based on a structure database ("suspect screening"), instead of physical reference substances, for routine targeted residue screening was investigated. The detection of fragment ions that can be predicted or explained by in silico software was utilized to reduce the number of false positives. These "proof of principle" experiments were done with a tool that is integrated into a commercial MS vendor instrument operating software (UNIFI) as well as with a platform-independent MS tool (Mass Frontier). A total of 97 analytes belonging to different chemical families were separated by reversed-phase liquid chromatography and detected in a data-independent acquisition (DIA) mode using ion mobility hyphenated with quadrupole time-of-flight mass spectrometry. The instrument was operated in the MSE mode with alternating low- and high-energy traces. The fragments observed in product ion spectra were investigated using a "chopping" bond-disconnection algorithm and a rule-based algorithm. The bond-disconnection algorithm clearly explained more analyte product ions and a greater percentage of the spectral abundance than the rule-based software (92 of the 97 compounds produced ≥1 explainable fragment ion). On the other hand, tests with a complex blank matrix (bovine liver extract) indicated that the chopping algorithm reports significantly more false positive fragments than the rule-based software.
Residual stresses in zircaloy welds
International Nuclear Information System (INIS)
Santisteban, J. R.; Fernandez, L; Vizcaino, P.; Banchik, A.D.; Samper, R; Martinez, R. L; Almer, J; Motta, A.T.; Colas, K.B; Kerr, M.; Daymond, M.R
2009-01-01
Welds in zirconium-based alloys are susceptible to hydrogen embrittlement, as H enters the material due to dissociation of water. The yield strain for hydride cracking has a complex dependence on H concentration, stress state, and texture. The large thermal gradients produced by the applied heat drastically change the texture of the material in the heat-affected zone, enhancing the susceptibility to delayed hydride cracking. Normally hydrides tend to form as platelets that are parallel to the normal direction, but when welding plates, hydride platelets may form on cooling with their planes parallel to the weld and through the thickness of the plates. If, in addition to this, there are significant tensile stresses, the susceptibility of the heat-affected zone to delayed hydride cracking will be increased. Here we have measured the macroscopic and microscopic residual stresses that appear after plasma welding of two 6 mm thick Zircaloy-4 plates. The measurements were based on neutron and synchrotron diffraction experiments performed at the ISIS Facility, UK, and at the Advanced Photon Source, USA, respectively. The experiments allowed assessing the effect of a post-weld heat treatment consisting of a steady increase in temperature from room temperature to 450°C over a period of 4.5 hours, followed by cooling at an equivalent rate. Peak tensile stresses of (175 ± 10) MPa along the longitudinal direction were found in the as-welded specimen, which were moderately reduced to (150 ± 10) MPa after the heat treatment. The parent material showed intergranular stresses of (56 ± 4) MPa, which disappeared on entering the heat-affected zone. In-situ experiments during thermal cycling of the material showed that these intergranular stresses result from the anisotropy of the thermal expansion coefficient of the hexagonal crystal lattice.
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio of less than 1.72 bits/base.
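The core packing idea, two bits per base instead of eight ASCII bits, can be sketched as follows. This shows only fixed 2-bit packing; DNABIT Compress additionally assigns special bit codes to exact and reverse repeats, which is not reproduced here:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def compress(seq):
    # Pack 4 bases into each byte, 2 bits per base.
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for b in chunk:
            byte = (byte << 2) | CODE[b]
        byte <<= 2 * (4 - len(chunk))   # left-pad the final partial byte
        out.append(byte)
    return bytes(out), len(seq)        # keep the length to drop padding later

def decompress(data, n):
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n])

seq = "ACGTACGGTTAC"
data, n = compress(seq)
```

Twelve bases fit in three bytes, i.e. exactly 2 bits/base; the repeat-coding layer is what pushes the published ratio below that, to 1.58 bits/base.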
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
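The distance-to-average-point measure at the heart of DGEA is simple to compute for a real-valued population. The normalization by the search-range length and the square root of the dimensionality follows the usual formulation of the measure; the populations below are illustrative:

```python
import math

def diversity(population, search_range):
    """Distance-to-average-point diversity of a real-valued population,
    normalized by the search-range length and sqrt(dimension)."""
    n, dim = len(population), len(population[0])
    avg = [sum(ind[j] for ind in population) / n for j in range(dim)]
    spread = sum(math.dist(ind, avg) for ind in population) / n
    return spread / (search_range * math.sqrt(dim))

# A population collapsed near one point vs. one spread across the space:
pop_converged = [[1.00, 1.00], [1.01, 0.99], [0.99, 1.00]]
pop_spread = [[-5.0, 4.0], [5.0, -4.0], [0.0, 0.1]]
```

In DGEA, the measure falling below a low threshold switches the algorithm into its exploration (mutation) phase, and rising above a high threshold switches it back to exploitation (recombination and selection).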
FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS
Directory of Open Access Journals (Sweden)
G. Sithole
2015-05-01
The notion of a 'best' segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are 'goodness methods', 'discrepancy methods' and 'benchmarks'. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters, and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
A polynomial time algorithm for solving the maximum flow problem in directed networks
International Nuclear Information System (INIS)
Tlas, M.
2015-01-01
An efficient polynomial-time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log₂ B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)
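A capacity-scaling maximum-flow routine matching this description, a sequence of shortest-path (BFS) augmentations on residual networks with a halving scale Δ, might look like the sketch below. It uses an adjacency-matrix residual network for brevity, and the example network is a classic textbook instance, not the paper's numerical example:

```python
from collections import deque

def max_flow_scaling(n, cap, s, t):
    """Capacity-scaling max flow on nodes 0..n-1; cap maps (u, v) -> capacity."""
    residual = [[0] * n for _ in range(n)]
    for (u, v), c in cap.items():
        residual[u][v] += c
    B = max(cap.values())
    delta = 1 << (B - 1).bit_length()   # smallest power of two >= B

    def bfs_path(delta):
        # Shortest s-t path using only residual arcs of capacity >= delta.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] >= delta:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    flow = 0
    while delta >= 1:                   # r ~ log2(B) scaling phases
        while (parent := bfs_path(delta)) is not None:
            path, v = [], t             # recover the augmenting path
            while v != s:
                path.append((parent[v], v))
                v = parent[v]
            push = min(residual[u][v] for u, v in path)   # bottleneck
            for u, v in path:
                residual[u][v] -= push
                residual[v][u] += push  # add reverse residual capacity
            flow += push
        delta //= 2
    return flow

cap = {(0, 1): 16, (0, 2): 13, (1, 3): 12, (2, 1): 4, (3, 2): 9,
       (2, 4): 14, (4, 3): 7, (3, 5): 20, (4, 5): 4}
flow_value = max_flow_scaling(6, cap, 0, 5)
```

Once Δ reaches 1 the routine reduces to ordinary BFS augmentation, so it always terminates with the true maximum flow; the scaling phases merely bound the number of augmentations per phase.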
Residues from waste incineration. Final report
Energy Technology Data Exchange (ETDEWEB)
Astrup, T.; Juul Pedersen, A.; Hyks, J.; Frandsen, F.J.
2009-08-15
The overall objective of the project was to improve the understanding of the formation and characteristics of residues from waste incineration. This was done focusing on the importance of the waste input and the operational conditions of the furnace. Data and results obtained from the project have been discussed in this report according to the following three overall parts: i) mass flows and element distribution, ii) flue gas/particle partitioning and corrosion/deposition aspects, and iii) residue leaching. This has been done with the intent of structuring the discussion while tacitly acknowledging that these aspects are interrelated and cannot be separated. Overall, it was found that the waste input composition had a significant impact on the characteristics of the generated residues. A similar correlation between operational conditions and residue characteristics could not be observed. Consequently, the project recommends that optimization of residue quality should focus on controlling the waste input composition. The project results showed that including specific waste materials (and thereby also excluding the same materials) may have significant effects on the residue composition, residue leaching, and aerosol and deposit formation. It is specifically recommended to minimize Cl in the input waste. Based on the project results, it was found that a significant potential for optimization of waste incineration exists. (author)
Use of ultrasound in petroleum residue upgradation
Energy Technology Data Exchange (ETDEWEB)
Sawarkar, A.N.; Pandit, A.B.; Samant, S.D.; Joshi, J.B. [Mumbai Univ., Mumbai (India). Inst. of Chemical Technology
2009-06-15
The importance of bottom-of-the-barrel upgrading has increased in the current petroleum refining scenario because of the progressively heavier nature of crude oil. Heavy residues contain large concentrations of metals such as vanadium and nickel, which foul catalysts and reduce the potential effect of residue fluidized catalytic cracking. This study showed that the cavitational energy induced by ultrasound can be successfully used to upgrade hydrocarbon mixtures. Conventional processes for the upgrading of residual feedstocks, such as thermal cracking and catalytic cracking, were carried out in the temperature range of 400-520 degrees C. Experiments were performed on 2 vacuum residues, Arabian mix vacuum residue (AMVR) and Bombay high vacuum residue (BHVR), and 1 Haldia asphalt (HA). These were subjected to acoustic cavitation for different reaction times from 15 to 120 minutes at ambient temperature and pressure. Two acoustic cavitation devices were compared, namely the ultrasonic bath and the ultrasonic horn. In particular, this study compared the ability of these 2 devices to upgrade the petroleum residues to lighter, more value-added products. Different surfactants were used to examine the effect of ultrasound on upgrading the residue when emulsified in water. In order to better understand the reaction mechanism, a kinetic model was developed based on the constituents of the residue. The ultrasonic horn was found to be more effective in bringing about the upgrading than the ultrasonic bath. The study also showed that the acoustic cavitation of the aqueous emulsified hydrocarbon mixture could reduce the asphaltenes content to a greater extent than the acoustic cavitation of the non-emulsified hydrocarbon mixture. 20 refs., 11 tabs., 17 figs.
International Nuclear Information System (INIS)
Grady, M.
1986-01-01
I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs
Quantum algorithms and learning theory
Arunachalam, S.
2018-01-01
This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. (1) Consider a search space of N elements, one of which is "marked"; our goal is to find it. We describe a quantum algorithm to solve this problem
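The search problem in (1) is the setting of Grover's algorithm, which finds the marked element in roughly (π/4)·√N oracle calls instead of ~N classical probes. As a hedged illustration (the thesis's own algorithm and analysis may differ), here is a small classical state-vector simulation of Grover search:

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter=None):
    """Classical simulation of Grover search for one marked element among
    N = 2**n_qubits. Illustrative sketch only; function name and the
    floor(pi/4 * sqrt(N)) iteration count are the standard textbook choice."""
    N = 2 ** n_qubits
    if n_iter is None:
        n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))
    psi = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition over N states
    for _ in range(n_iter):
        psi[marked] *= -1.0              # oracle: flip the marked amplitude
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return int(np.argmax(psi ** 2))      # most probable measurement outcome
```

With N = 16 and 3 iterations, the marked element is measured with probability about 0.96.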
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
A fast fractional difference algorithm
DEFF Research Database (Denmark)
Jensen, Andreas Noack; Nielsen, Morten Ørregaard
2014-01-01
We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...
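The speed-up from O(T²) to O(T log T) comes from the fact that fractional differencing is a convolution of the series with the binomial weights of (1 - L)^d, which can be carried out with FFTs. A minimal sketch of that idea follows (the authors' exact implementation may differ; the function name is illustrative):

```python
import numpy as np

def frac_diff_fft(x, d):
    """Fractional difference (1 - L)^d of series x via FFT convolution.
    The direct O(T^2) sum uses the binomial weights
    b_0 = 1, b_k = b_{k-1} * (k - 1 - d) / k; here the same weights are
    applied in O(T log T) with zero-padded FFTs."""
    T = len(x)
    b = np.empty(T)
    b[0] = 1.0
    for k in range(1, T):
        b[k] = b[k - 1] * (k - 1 - d) / k
    # Linear convolution via zero-padded FFTs, truncated back to length T.
    n = 2 * T  # padding avoids circular wrap-around
    return np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)[:T]
```

Setting d = 1 reproduces the ordinary first difference (with the pre-sample value taken as zero), and d = 0 returns the series unchanged.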
A Distributed Spanning Tree Algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Tau reconstruction and identification algorithm
Indian Academy of Sciences (India)
CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector and has been used to measure the performance of tau identification algorithms by ...
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Algorithms for Decision Tree Construction
Chikalov, Igor
2011-01-01
The study of algorithms for decision tree construction was initiated in 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries dividing the set of objects as evenly as possible. Later Garey and Graham [28
A distributed spanning tree algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge
1988-01-01
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...
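The tree such a protocol computes can be previewed with a sequential simulation: processors with distinct identities agree on a root (here, the minimum identity) and the tree grows outward over the two-way channels. This is only a sketch of the outcome, not the message-passing protocol itself; the function name and the min-identity root rule are illustrative assumptions.

```python
from collections import deque

def spanning_tree(edges, n):
    """Sequential BFS simulation of distributed spanning-tree construction
    on a connected undirected graph with processors 0..n-1 (distinct
    identities) and edges as two-way channels. Returns parent pointers;
    the real protocol would compute the same tree via message exchange."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = min(adj)          # minimum identity wins the root election
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u  # channel (u, v) becomes a tree edge
                q.append(v)
    return parent
```

On a connected graph with n nodes the result has exactly n - 1 tree edges, as any spanning tree must.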
Global alignment algorithms implementations | Fatumo ...
African Journals Online (AJOL)
In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1 versions).
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
Phase-unwrapping algorithm by a rounding-least-squares approach
Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin
2014-02-01
A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
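The rounding idea can be seen most simply in one dimension: each wrapped phase difference hides an integer number of 2π jumps, which rounding recovers exactly when the true gradient is below π. This 1-D sketch only illustrates that step; the paper's method is 2-D and couples rounding with a global least-squares solve.

```python
import numpy as np

def unwrap_1d(phi_wrapped):
    """1-D phase unwrapping by rounding the jumps in the phase gradient.
    Illustrative sketch of the rounding step only (function name assumed)."""
    d = np.diff(phi_wrapped)
    # Integer number of 2*pi jumps hidden in each wrapped difference.
    jumps = np.round(d / (2 * np.pi))
    corrected = d - 2 * np.pi * jumps   # de-jumped phase gradient
    # Integrate the corrected gradient back up from the first sample.
    return np.concatenate(([phi_wrapped[0]],
                           phi_wrapped[0] + np.cumsum(corrected)))
```

Wrapping a smooth ramp and unwrapping it recovers the original phase to machine precision, provided successive true samples differ by less than π.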
Residual stress analysis in thick uranium films
International Nuclear Information System (INIS)
Hodge, A.M.; Foreman, R.J.; Gallegos, G.F.
2005-01-01
Residual stress analysis was performed on thick, 1-25 μm, depleted uranium (DU) films deposited on an Al substrate by magnetron sputtering. Two distinct characterization techniques were used to measure substrate curvature before and after deposition. Stress evaluation was performed using the Benabdi/Roche equation, which is based on beam theory of a bi-layer material. The residual stress evolution was studied as a function of coating thickness and applied negative bias voltage (0, -200, -300 V). The stresses developed were always compressive; however, increasing the coating thickness and applying a bias voltage presented a trend towards more tensile stresses and thus an overall reduction of residual stresses
Residues in food derived from animals
International Nuclear Information System (INIS)
Grossklaus, D.
1989-01-01
The first chapter presents a survey of fundamentals and methods of the detection and analysis of residues in food derived from animals, also referring to the resulting health hazards to man, and to the relevant legal provisions. The subsequent chapters have been written by experts of the Federal Health Office, each dealing with particular types of residues such as those of veterinary drugs, additives to animal feeds, pesticide residues, and with environmental pollutants and the contamination of animal products with radionuclides. (MG) With 35 figs., 61 tabs [de
Novel medical image enhancement algorithms
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
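The alpha-trimmed mean filter named as the backbone of the first algorithm sorts each local window, discards the extremes, and averages the rest, which suppresses impulse noise while smoothing less aggressively than a median. A generic sketch follows (window size, trim count, and function name are illustrative; the paper builds its enhancement pipeline on top of this):

```python
import numpy as np

def alpha_trimmed_mean_filter(img, size=3, alpha=2):
    """Alpha-trimmed mean filter: at each pixel, sort the size*size window,
    drop the alpha smallest and alpha largest values, and average the rest.
    Generic sketch of the filter, not the paper's full enhancement method."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size].ravel())
            # Trim alpha values from each end, then average the middle.
            out[i, j] = window[alpha:window.size - alpha].mean()
    return out
```

With alpha = 2 in a 3x3 window, an isolated impulse (a single outlier pixel) is removed entirely, since both extreme values are trimmed before averaging.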
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
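The polynomial-approximation approach of Part I can be sketched for the exponential: reduce the argument so a low-degree polynomial suffices, evaluate it by Horner's rule, then reconstruct. This toy uses plain Taylor coefficients for clarity; real implementations use carefully generated minimax coefficients and more careful range reduction, as the book describes.

```python
import math

def exp_approx(x):
    """Toy polynomial-based exp: range reduction plus Horner evaluation.
    Illustrative sketch only (Taylor coefficients, not minimax ones)."""
    # Range reduction: x = k*ln(2) + r with |r| <= ln(2)/2.
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    # Degree-10 Taylor polynomial of exp at 0, evaluated by Horner's rule.
    p = 0.0
    for n in range(10, -1, -1):
        p = p * r + 1.0 / math.factorial(n)
    # Reconstruction: exp(x) = 2**k * exp(r).
    return math.ldexp(p, k)
```

Because |r| ≤ ln(2)/2 ≈ 0.347 after reduction, the degree-10 polynomial already gives roughly 13 correct digits, illustrating why range reduction is the first step in virtually every elementary-function algorithm.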
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.
Guidelines for selection and presentation of residue values of pesticides
Velde-Koerts T van der; Hoeven-Arentzen PH van; Ossendorp BC; RIVM-SIR
2004-01-01
Pesticide residue assessments are executed to establish legal limits, called Maximum Residue Limits (MRLs). MRLs are derived from the results of pesticide residue trials, which are performed according to critical Good Agricultural Practice. Only one residue value per residue trial may be
Model-based fault detection algorithm for photovoltaic system monitoring
Harrou, Fouzi
2018-02-12
Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performances of the inspected PV system is designed. Residuals, which are the difference between the measured and estimated output data, are used as a fault indicator. Indeed, residuals are used as the input for the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method by using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
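The residual-then-CUSUM monitoring scheme can be sketched in one dimension: accumulate small, persistent deviations of the model residuals and raise an alarm once the accumulation crosses a threshold. The paper uses a multivariate CUSUM (MCUSUM); this univariate version, with the usual allowance k and threshold h, only illustrates the idea (parameter values and function name are assumptions).

```python
def cusum_detect(residuals, k=0.5, h=5.0):
    """Two-sided univariate CUSUM on model residuals (measured minus
    estimated output). Residuals are assumed roughly standardized under
    fault-free operation; k is the allowance, h the decision threshold.
    Sketch of the residual-monitoring idea, not the paper's MCUSUM."""
    s_hi = s_lo = 0.0
    alarms = []
    for t, r in enumerate(residuals):
        s_hi = max(0.0, s_hi + r - k)   # accumulates upward drifts
        s_lo = max(0.0, s_lo - r - k)   # accumulates downward drifts
        if s_hi > h or s_lo > h:
            alarms.append(t)            # potential fault flagged at time t
    return alarms
```

A persistent shift of two (standardized) units in the residuals is flagged within a few samples of its onset, while zero-mean residuals never accumulate past the allowance.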
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
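The Hough step at the core of the pipeline can be shown in miniature: each foreground pixel votes for every (rho, theta) line it could lie on, and peaks in the accumulator identify lines. This is the textbook voting scheme only, not the paper's full pipeline (which first masks stars and galaxies and refines candidates with rectangle fits); the function name and discretization are illustrative.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, top=1):
    """Minimal Hough transform for lines using the normal form
    rho = x*cos(theta) + y*sin(theta). points: iterable of (y, x) pixels;
    shape: (height, width). Returns the `top` strongest (rho, theta) cells."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))          # bound on |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for (y, x) in points:
        # One vote per theta bin at the corresponding quantized rho.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    idx = np.argsort(acc.ravel())[::-1][:top]    # strongest accumulator cells
    rho_i, th_i = np.unravel_index(idx, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_i, th_i)]
```

Points lying on the diagonal y = x all vote for rho = 0 at theta = 3π/4, so that cell dominates the accumulator.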
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
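The Bernoulli gating variables the analysis is built on are simple to write down. This "inverted dropout" sketch keeps each unit with probability p and rescales by 1/p, so the test-time forward pass (the ensemble average in expectation) needs no rescaling; the function name and the inverted-scaling convention are illustrative choices, not taken from the paper.

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True, rng=None):
    """Inverted dropout with Bernoulli gating variables: each unit is kept
    with probability p during training and scaled by 1/p, so E[output] = x.
    At test time the layer is the identity (the ensemble-average network)."""
    if not train:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < p   # Bernoulli(p) gating variables
    return x * mask / p
```

Averaging over many units, the training-time output mean matches the deterministic test-time output, which is the linear-network ensemble-averaging property the abstract analyzes.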
Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection.
Xu, Cheng; Liu, Jiamei; Yang, Weifeng; Shu, Yayun; Wei, Zhipeng; Zheng, Weiwei; Feng, Xin; Zhou, Fengfeng
2018-04-01
Transcriptomic and methylomic patterns represent two major OMIC data sources impacted by both inheritable genetic information and environmental factors, and have been widely used as disease diagnosis and prognosis biomarkers. Modern transcriptomic and methylomic profiling technologies detect the status of tens of thousands or even millions of probing residues in the human genome, and introduce a major computational challenge for the existing feature selection algorithms. This study proposes a three-step feature selection algorithm, TriVote, to detect a subset of transcriptomic or methylomic residues with highly accurate binary classification performance. TriVote outperforms both filter and wrapper feature selection algorithms with both higher classification accuracy and smaller feature number on 17 transcriptomes and two methylomes. Biological functions of the methylome biomarkers detected by TriVote were discussed for their disease associations. An easy-to-use Python package is also released to facilitate the further applications.
Radiation doses from residual radioactivity
International Nuclear Information System (INIS)
Okajima, Shunzo; Fujita, Shoichiro; Harley, John H.
1987-01-01
requires knowing the location of the person to within about 200 m from the time of the explosion to a few weeks afterwards. This is an effort that might be comparable to the present shielding study for survivors. The sizes of the four exposed groups are relatively small; however, the number has been estimated only for those exposed to fallout in the Nishiyama district of Nagasaki. Okajima listed the population of Nishiyama as about 600 at the time of the bomb. No figures are available for the other three groups. The individual exposures from residual radiation may not be significant compared with the direct radiation at the time of the bomb. On the other hand, individuals with potential exposure from these sources are dubious candidates for inclusion in a cohort that was presumably not exposed. For comparison with organ doses estimated in other parts of this program, the exposure estimates are converted to absorbed dose in tissue. The first conversion of exposure to absorbed dose in air uses the factor: rad in air = 0.87 × exposure in R. UNSCEAR uses an average combined factor of 0.7 to convert absorbed dose in air to absorbed dose in tissue for the whole body. This factor accounts for the change in material (air to tissue) and for backscatter and the shielding afforded by other tissues of the body. No allowance for shielding by buildings has been included here. The cumulative fallout exposures given above become absorbed doses in tissue of 12 to 24 rad for Nagasaki and 0.6 to 2 rad for Hiroshima. The cumulative exposures from induced radioactivity become absorbed doses in tissue of 18 to 24 rad for Nagasaki and about 50 rad for Hiroshima. (author)
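The two conversion factors quoted in the text compose into a single multiplier, which this small helper makes explicit (the function name is illustrative; the factors 0.87 rad/R and 0.7 are the ones stated above):

```python
def exposure_to_tissue_dose(exposure_R):
    """Convert free-in-air exposure (roentgen) to whole-body absorbed dose
    in tissue (rad), using the two factors quoted in the text:
    0.87 rad in air per R of exposure, then UNSCEAR's combined factor of
    0.7 from absorbed dose in air to absorbed dose in tissue."""
    dose_air_rad = 0.87 * exposure_R      # exposure -> absorbed dose in air
    return 0.7 * dose_air_rad             # air dose -> whole-body tissue dose
```

For example, a cumulative exposure of 20 R corresponds to about 12 rad in tissue, consistent with the lower end of the 12 to 24 rad range quoted for Nagasaki fallout.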
Cyolane residues in milk of lactating goats
International Nuclear Information System (INIS)
Zayed, S.M.A.D.; Osman, A.; Fakhr, I.M.I.
1981-01-01
Consecutive feeding of lactating goats with 14C-alkyl-labelled cyolane for 5 days at dietary levels of 8 and 16 ppm resulted in the appearance of measurable insecticide residues in milk (0.02-0.04 mg/kg). The residue levels were markedly reduced after a withdrawal period of 7 days. Analysis of urine and milk residues showed the presence of similar metabolites in addition to the parent compound. The major part of the residue consisted of mono- and diethyl phosphate and 2 hydrophilic unknown metabolites. The erythrocyte cholinesterase activity was reduced to about 50% after 24 hours, whereas the plasma enzyme was only slightly affected. The animals remained symptom-free during the experimental period. (author)
Surgical treatment for residual or recurrent strabismus
Directory of Open Access Journals (Sweden)
Tao Wang
2014-12-01
Although surgical treatment, using techniques such as posterior fixation sutures, medial rectus marginal myotomy, unilateral or bilateral rectus re-recession and resection, unilateral lateral rectus recession and adjustable sutures, is a relatively effective and predictable method for correcting residual or recurrent strabismus, no standard protocol is established for the surgical style. Different surgical approaches have been recommended for correcting residual or recurrent strabismus. The choice of the surgical procedure depends on the former operation pattern and the surgical dosages applied on the patients, the residual or recurrent angle of deviation, and the operator's preference and experience. This review attempts to outline recent publications and current opinion in the management of residual or recurrent esotropia and exotropia.
Recovery of transuranics from process residues
International Nuclear Information System (INIS)
Gray, J.H.; Gray, L.W.
1987-01-01
Process residues are generated at both the Rocky Flats Plant (RFP) and the Savannah River Plant (SRP) during aqueous chemical and pyrochemical operations. Frequently, process operations will result in either impure products or produce residues sufficiently contaminated with transuranics to be nondiscardable as waste. Purification and recovery flowsheets for process residues have been developed to generate solutions compatible with subsequent Purex operations and either solid or liquid waste suitable for disposal. The "scrub alloy" and the "anode heel alloy" are examples of materials generated at RFP which have been processed at SRP using the developed recovery flowsheets. Examples of process residues being generated at SRP for which flowsheets are under development include LECO crucibles and alpha-contaminated hydraulic oil
U.S. Isostatic Residual Gravity Grid
National Oceanic and Atmospheric Administration, Department of Commerce — isores.bin - standard grid containing isostatic residual gravity map for U.S. Grid interval = 4 km. Projection is Albers (central meridian = 96 degrees West; base...
Management of stormwater facility maintenance residuals
1998-06-01
Current research on stormwater maintenance residuals has revealed that the source and nature of these materials is extremely variable, that regulation can be ambiguous, and handling can be costly and difficult. From a regulatory perspective, data ind...
Residual stresses in Inconel 718 engine disks
Directory of Open Access Journals (Sweden)
Dahan Yoann
2014-01-01
Aubert&Duval has developed a methodology to establish a residual stress model for Inconel 718 engine discs. To validate the thermal, mechanical and metallurgical parts of the model, trials on lab specimens with specific geometry were carried out. These trials allow a better understanding of the residual stress distribution and evolution during different processes (quenching, ageing, machining). A comparison between experimental and numerical results reveals the residual stress model's accuracy. Aubert&Duval has also developed a mechanical properties prediction model. Coupled with the residual stress prediction model, Aubert&Duval can now propose improvements to the process of manufacturing Inconel 718 engine disks. This model enables Aubert&Duval customers and subcontractors to anticipate distortion issues during machining. It could also be used to optimise the engine disk life.
[Development of residual voltage testing equipment].
Zeng, Xiaohui; Wu, Mingjun; Cao, Li; He, Jinyi; Deng, Zhensheng
2014-07-01
For the existing measurement methods of residual voltage, which cannot switch the power off exactly at the voltage peak or simultaneously display waveforms, a new residual voltage detection method is put forward in this paper. First, the zero point of the power supply is detected with a zero-cross detection circuit and is input to a single-chip microcomputer in the form of a pulse signal. Second, after a delay from the zero point to the voltage peak, the single-chip microcomputer sends a control signal to switch off the relay. Finally, the waveform of the residual voltage is displayed on a host computer or oscilloscope. The experimental results show that the device designed in this paper can switch the power off at the voltage peak, that it accurately displays the voltage waveform immediately after power-off, and that the standard deviation of the residual voltage is less than 0.2 V at exactly one second and later.
Residual extrapolation operators for efficient wavefield construction
Alkhalifah, Tariq Ali
2013-01-01
and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most of the other implementations based on the spectral approach are focussed on reducing cost by reducing the number of inverse Fourier transforms
Efficient particle filtering through residual nudging
Luo, Xiaodong
2013-05-15
We introduce an auxiliary technique, called residual nudging, to the particle filter to enhance its performance in cases where it performs poorly. The main idea of residual nudging is to monitor and, if necessary, adjust the residual norm of a state estimate in the observation space so that it does not exceed a pre-specified threshold. We suggest a rule to choose the pre-specified threshold, and construct a state estimate accordingly to achieve this objective. Numerical experiments suggest that introducing residual nudging to a particle filter may (substantially) improve its performance, in terms of filter accuracy and/or stability against divergence, especially when the particle filter is implemented with a relatively small number of particles. © 2013 Royal Meteorological Society.
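The core of residual nudging, monitoring the residual norm of a state estimate in observation space and pulling the estimate back when it exceeds a threshold, can be sketched in a few lines. This linear blending rule is only an illustration of the idea under the assumption of a linear observation operator with full row rank; the paper's construction and its rule for choosing the threshold are more specific.

```python
import numpy as np

def nudge_estimate(x_est, y_obs, H, beta):
    """Residual nudging sketch: if ||y - H x|| exceeds the threshold beta,
    blend x_est with an observation-consistent state so the residual norm
    is brought down to exactly beta. H maps state to observation space.
    Illustrative only; function name and blending rule are assumptions."""
    residual = y_obs - H @ x_est
    norm = np.linalg.norm(residual)
    if norm <= beta:
        return x_est  # estimate already within the prescribed residual norm
    # Least-squares state that fits the observation (exactly, if H has
    # full row rank), then blend so the new residual norm equals beta.
    x_obs = np.linalg.lstsq(H, y_obs, rcond=None)[0]
    c = beta / norm  # fraction of the original residual to retain
    return c * x_est + (1.0 - c) * x_obs
```

Since the new residual is c times the old one when H x_obs = y_obs, the adjusted estimate meets the threshold exactly, while estimates already inside the threshold pass through unchanged.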
Adjustment Criterion and Algorithm in Adjustment Model with Uncertain
Directory of Open Access Journals (Sweden)
SONG Yingchun
2015-02-01
Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of the least-squares adjustment, the uncertainty adjustment and the total least-squares adjustment. Existing error theory is extended with a new observational data processing method that accounts for uncertainty.
Handling of wet residues in industry
DEFF Research Database (Denmark)
Villanueva, Alejandro
is fundamental in most disposal routes for clarifying the possibility of treating the residue. The better the characterisation from the start is, the easier the assessment of the feasible disposal alternatives becomes. The decision about the handling/disposal solution for the residue is a trade-off between ..., and can depend on factors such as the investment capacity, the relationships with the stakeholders, or the promotion of its environmental profile.
Environmental assessment of incinerator residue utilisation
Toller, Susanna
2008-01-01
In Sweden, utilisation of incinerator residues outside disposal areas is restricted by environmental concerns, as such residues commonly contain greater amounts of potentially toxic trace elements than the natural materials they replace. On the other hand, utilisation can also provide environmental benefits by decreasing the need for landfill and reducing raw material extraction. This thesis provides increased knowledge and proposes better approaches for environmental assessment of incinerat...
Fate and Transport of Colloidal Energetic Residues
2015-07-01
laser confocal microscopy was developed and evaluated. Spectral imaging has been applied widely for chromosome karyotype analysis (61), as well as ... Walsh et al. (2010) (55), who reported that the timeframe for complete disappearance of the residues (based on visual inspection) was shorter than ... enhanced disappearance of residues is that the particulates produced by precipitation-driven (or even tidal flooding) weathering lead to faster
Environmental dredging residual generation and management.
Patmont, Clay; LaRosa, Paul; Narayanan, Raghav; Forrest, Casey
2018-05-01
The presence and magnitude of sediment contamination remaining in a completed dredge area can often dictate the success of an environmental dredging project. The need to better understand and manage this remaining contamination, referred to as "postdredging residuals," has increasingly been recognized by practitioners and investigators. Based on recent dredging projects with robust characterization programs, it is now understood that the residual contamination layer in the postdredging sediment comprises a mixture of contaminated sediments that originate from throughout the dredge cut. This mixture of contaminated sediments initially exhibits fluid mud properties that can contribute to sediment transport and contamination risk outside of the dredge area. This article reviews robust dredging residual evaluations recently performed in the United States and Canada, including the Hudson River, Lower Fox River, Ashtabula River, and Esquimalt Harbour, along with other projects. These data better inform the understanding of residuals generation, leading to improved models of dredging residual formation to inform remedy evaluation, selection, design, and implementation. Data from these projects confirm that the magnitude of dredging residuals is largely determined by site conditions, primarily in situ sediment fluidity or liquidity as measured by dry bulk density. While the generation of dredging residuals cannot be avoided, residuals can be successfully and efficiently managed through careful development and implementation of site-specific management plans. Integr Environ Assess Manag 2018;14:335-343. © 2018 The Authors. Integrated Environmental Assessment and Management Published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
Protein structure based prediction of catalytic residues.
Fajardo, J Eduardo; Fiser, Andras
2013-02-22
Worldwide structural genomics projects continue to release new protein structures at an unprecedented pace, so far nearly 6000, but only about 60% of these proteins have any sort of functional annotation. We explored a range of features that can be used for the prediction of functional residues given a known three-dimensional structure. These features include various centrality measures of nodes in graphs of interacting residues: closeness, betweenness and page-rank centrality. We also analyzed the distance of functional amino acids to the general center of mass (GCM) of the structure, relative solvent accessibility (RSA), and the use of relative entropy as a measure of sequence conservation. From the selected features, neural networks were trained to identify catalytic residues. We found that using distance to the GCM together with amino acid type provides a good discriminant function, when combined independently with sequence conservation. Using an independent test set of 29 annotated protein structures, the method returned 411 of the initial 9262 residues as the most likely to be involved in function. The output 411 residues contain 70 of the annotated 111 catalytic residues. This represents an approximately 14-fold enrichment of catalytic residues on the entire input set (corresponding to a sensitivity of 63% and a precision of 17%), a performance competitive with that of other state-of-the-art methods. We found that several of the graph-based measures utilize the same underlying feature of protein structures, which can be simply and more effectively captured with the distance-to-GCM definition. This also has the added advantage of simplicity and easy implementation. Meanwhile, sequence conservation remains by far the most influential feature in identifying functional residues. We also found that, due to the rapid changes in size and composition of sequence databases, conservation calculations must be recalibrated for specific reference databases.
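The distance-to-GCM feature highlighted above is simple to compute. A minimal sketch, using toy C-alpha coordinates rather than a real PDB structure:

```python
import numpy as np

# Distance-to-center-of-mass feature: catalytic residues tend to lie near
# the structure's general center of mass (GCM). Coordinates below are toy
# C-alpha positions, not a real protein.
coords = np.array([
    [0.0, 0.0, 0.0],   # residue 0: near the core
    [1.0, 0.5, 0.2],
    [8.0, 7.5, 9.0],   # residue 2: on the surface, far from the GCM
    [0.5, -0.5, 0.3],
])
gcm = coords.mean(axis=0)
dist_to_gcm = np.linalg.norm(coords - gcm, axis=1)

# Residues closest to the GCM rank first as candidate functional residues.
ranking = np.argsort(dist_to_gcm)
print(ranking)
```

In the paper this distance is one input among several (combined with amino acid type and sequence conservation in a neural network); the ranking here only illustrates the raw feature.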
Improved autonomous star identification algorithm
International Nuclear Information System (INIS)
Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong
2015-01-01
The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
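The rotation-invariant feature idea above can be sketched in a few lines. This simplified version sorts the log-distances to achieve invariance (the paper instead uses the LPT to avoid circular shifts); star positions are made-up image-plane coordinates.

```python
import numpy as np

# Rotation-invariant star feature sketch: for a navigation star, take the
# logarithms of the planar distances to its neighbour stars. Sorting is used
# here for simplicity; the paper's algorithm relies on the LPT instead.
def star_feature(nav, neighbours):
    d = np.linalg.norm(np.asarray(neighbours) - np.asarray(nav), axis=1)
    return np.sort(np.log(d))

nav = (10.0, 10.0)
neighbours = [(13.0, 14.0), (10.0, 18.0), (4.0, 2.0)]
f1 = star_feature(nav, neighbours)

# Rotating the frame 90 degrees about the navigation star preserves all
# distances, so the feature vector is unchanged.
rot = lambda p: (10.0 - (p[1] - 10.0), 10.0 + (p[0] - 10.0))
f2 = star_feature(nav, [rot(p) for p in neighbours])
print(np.allclose(f1, f2))
```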
Portable Health Algorithms Test System
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Quantum algorithm for linear regression
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
Ensemble Kalman filtering with residual nudging
Luo, X.
2012-10-03
Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
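The residual nudging rule described above can be sketched in isolation from the rest of the EnKF. This is a minimal illustration for the linear-observation case; the observation operator, the blending rule via a pseudoinverse, and all values are assumptions, not the paper's exact construction.

```python
import numpy as np

# Residual nudging sketch: if the analysis residual norm in observation
# space exceeds a threshold beta, pull the analysis toward the observation
# until the residual norm equals beta; otherwise leave it unchanged.
def nudge(analysis, y_obs, H, beta):
    r = y_obs - H @ analysis
    norm = np.linalg.norm(r)
    if norm <= beta:
        return analysis                  # reasonable estimate: no change
    # Shrink the residual so its norm becomes exactly beta.
    # (Assumes H has full row rank so the pseudoinverse maps it back.)
    correction = np.linalg.pinv(H) @ (r * (1.0 - beta / norm))
    return analysis + correction

H = np.eye(2)                            # identity observation operator
y = np.array([1.0, 0.0])
x_a = np.array([4.0, 4.0])               # poor analysis: residual norm 5
x_new = nudge(x_a, y, H, beta=1.0)
print(np.linalg.norm(y - H @ x_new))
```

In the actual filter, beta is chosen by the rule suggested in the paper (tied to the observation error statistics), not set by hand.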
Fluorescence imaging to quantify crop residue cover
Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.
1994-01-01
Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
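The threshold-counting step described above reduces to a one-line computation on the image array. A sketch with a made-up 8-bit "image" and an arbitrary threshold (the study derived its threshold from the fluorescence gray-level distribution):

```python
import numpy as np

# Residue-cover estimate: fraction of pixels whose fluorescence value
# exceeds a threshold. Soil pixels are dark; residue pixels are bright.
# The image and threshold below are invented for illustration.
image = np.array([
    [10, 12, 200, 220],
    [11, 180, 210, 13],
    [9, 10, 12, 190],
])
threshold = 100
residue_cover = (image > threshold).mean()
print(residue_cover)
```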
Ensemble Kalman filtering with residual nudging
Directory of Open Access Journals (Sweden)
Xiaodong Luo
2012-10-01
Full Text Available Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
Detecting organic gunpowder residues from handgun use
MacCrehan, William A.; Ricketts, K. Michelle; Baltzersen, Richard A.; Rowe, Walter F.
1999-02-01
The gunpowder residues that remain after the use of handguns or improvised explosive devices pose a challenge for the forensic investigator. Can these residues be reliably linked to a specific gunpowder or ammunition? We investigated the possibility by recovering and measuring the composition of organic additives in smokeless powder and its post-firing residues. By determining gunpowder additives such as nitroglycerin, dinitrotoluene, ethyl- and methylcentralite, and diphenylamine, we hope to identify the type of gunpowder in the residues and perhaps to provide evidence of a match to a sample of unfired powder. The gunpowder additives were extracted using an automated technique, pressurized fluid extraction (PFE). The conditions for the quantitative extraction of the additives using neat and solvent-modified supercritical carbon dioxide were investigated. All of the major gunpowder additives can be determined with baseline resolution using capillary electrophoresis (CE) with a micellar agent and UV absorbance detection. A study of candidate internal standards for use in the CE method is also presented. The PFE/CE technique is used to evaluate a new residue sampling protocol--asking shooters to blow their noses. In addition, an initial investigation of the compositional differences among unfired and post-fired .22 handgun residues is presented.
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using a differential evolution algorithm. The procedure used to achieve parameter identification of the experimental system consisted of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10^-5. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
Disposal of leached residual in heap leaching by neutralization
International Nuclear Information System (INIS)
Wang Jingmin
1993-01-01
The results of disposing of leached residue with lime are described. Using a residue-to-lime ratio of 100:1, satisfactory disposal results were obtained, with the effluent of the neutralized residue close to neutral
Efficient identification of critical residues based only on protein structure by network analysis.
Directory of Open Access Journals (Sweden)
Michael P Cusack
2007-05-01
Full Text Available Despite the increasing number of published protein structures, and the fact that each protein's function relies on its three-dimensional structure, there is limited access to automatic programs used for the identification of critical residues from the protein structure, compared with those based on protein sequence. Here we present a new algorithm based on network analysis applied exclusively to protein structures to identify critical residues. Our results show that this method identifies critical residues for protein function with high reliability and improves on automatic sequence-based approaches and previous network-based approaches. The reliability of the method depends on the conformational diversity screened for the protein of interest. We have designed a web site to give access to this software at http://bis.ifc.unam.mx/jamming/. In summary, a new method is presented that relates critical residues for protein function with the most traversed residues in networks derived from protein structures. A unique feature of the method is the inclusion of the conformational diversity of proteins in the prediction, thus reproducing a basic feature of the structure/function relationship of proteins.
Array architectures for iterative algorithms
Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas
1987-01-01
Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of a schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
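The search process the report illustrates can be sketched as a minimal genetic algorithm on bit strings, with selection, crossover, and mutation. The "one-max" fitness function and all parameter choices are arbitrary illustrations, not taken from the report.

```python
import random

# Minimal genetic algorithm: maximise the number of ones in a bit string.
random.seed(1)
N, L, GENS = 20, 16, 60                  # population size, string length, generations

def fitness(ind):
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    new_pop = []
    for _ in range(N):
        # Tournament selection of two parents, then single-point crossover.
        p1 = max(random.sample(pop, 3), key=fitness)
        p2 = max(random.sample(pop, 3), key=fitness)
        cut = random.randrange(1, L)
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:        # occasional one-bit mutation
            i = random.randrange(L)
            child[i] ^= 1
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print(fitness(best))
```

Selection pressure drives the population toward the all-ones string; schemata (short, fit bit patterns) spread through the population across generations, which is the mechanism the report's schema discussion formalizes.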
Instance-specific algorithm configuration
Malitsky, Yuri
2014-01-01
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,
Subcubic Control Flow Analysis Algorithms
DEFF Research Database (Denmark)
Midtgaard, Jan; Van Horn, David
We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm...... that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in computational power and operating principle, called a quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, along with some effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for integer factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described
Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model
Directory of Open Access Journals (Sweden)
WANG Bin
2015-06-01
Full Text Available Based on the Newton-Gauss iterative algorithm of weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and the square root of the variance component estimator with robustness is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for the gross-error handling problem of WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to the robust weighted total least squares model constructed directly from residuals.
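The core robustness mechanism above, reweighting by standardized residuals with a median-based scale estimate, can be sketched for ordinary (not total) least squares. The Huber-style weight function, the MAD scale constant 1.4826, and the test data are assumptions for illustration; the paper's full WTLS treatment with cofactor propagation is not reproduced.

```python
import numpy as np

# Iteratively reweighted least squares with median-based standardized
# residuals: observations whose standardized residual exceeds c are
# down-weighted, so gross errors lose influence on the fit.
def robust_fit(A, y, c=2.0, iters=20):
    w = np.ones(len(y))
    for _ in range(iters):
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma (MAD)
        if scale == 0:
            break
        std_r = np.abs(r) / scale
        w = np.where(std_r <= c, 1.0, c / std_r)  # Huber-style weights
    return x

rng = np.random.default_rng(0)
A = np.column_stack([np.ones(30), np.arange(30.0)])
y = 2.0 + 0.5 * np.arange(30.0) + rng.normal(0, 0.01, 30)
y[5] += 50.0                                      # inject a gross error
x = robust_fit(A, y)
print(x)                                          # close to [2.0, 0.5]
```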
Assessing the Availability of Wood Residues and Residue Markets in Virginia
Alderman, Delton R. Jr.
1998-01-01
A statewide mail survey of primary and secondary wood product manufacturers was undertaken to quantify the production and consumption of wood residues in Virginia. Two hundred and sixty-six wood product manufacturers responded to the study and they provided information on the production, consumption, markets, income or disposal costs, and disposal methods of wood residues. Hardwood and pine sawmills produce approximately 66 percent of Virginia's wood residues. Virginia's wood product man...
A survey of residual analysis and a new test of residual trend.
McDowell, J J; Calvin, Olivia L; Klapes, Bryan
2016-05-01
A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test. © 2016 Society for the Experimental Analysis of Behavior.
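The effect-size computation at the heart of the proposed test can be sketched directly. This shows only the cubic fit and its R² effect size; the comparison of median effect sizes against sampling distributions from random residuals via a sign test is omitted, and the trended residuals are invented data.

```python
import numpy as np

# Fit a cubic polynomial to a set of residuals and compute its effect size
# (R^2: the proportion of residual variance the cubic accounts for).
def cubic_effect_size(residuals):
    t = np.arange(len(residuals))
    coeffs = np.polyfit(t, residuals, 3)
    fitted = np.polyval(coeffs, t)
    ss_res = np.sum((residuals - fitted) ** 2)
    ss_tot = np.sum((residuals - np.mean(residuals)) ** 2)
    return 1.0 - ss_res / ss_tot

t = np.arange(20)
trended = 0.01 * (t - 10.0) ** 3          # residuals with a systematic trend
print(cubic_effect_size(trended))         # near 1: strong residual trend
```

Random residuals would typically yield much smaller effect sizes, which is what the sampling distributions in the paper quantify.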
Planar graphs theory and algorithms
Nishizeki, T
1988-01-01
Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or 0(nlogn). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.
Optimally stopped variational quantum algorithms
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.
Fluid-structure-coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model satisfactorily fitting set of data.
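The order search described above can be sketched as follows. Plain refitting at each order is used here for clarity; the brief's recursive coefficient update, which avoids the duplicate work of refitting, is not reproduced, and the tolerance and test data are arbitrary.

```python
import numpy as np

# Search for the minimum polynomial order that fits the data satisfactorily:
# increase the order until the maximum fit residual drops below a tolerance.
def minimum_order(x, y, tol=1e-6, max_order=10):
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, order)
        residual = np.max(np.abs(y - np.polyval(coeffs, x)))
        if residual < tol:
            return order
    return max_order

x = np.linspace(0.0, 1.0, 50)
y = 1.0 - 2.0 * x + 3.0 * x ** 2          # quadratic data
print(minimum_order(x, y))                # finds order 2
```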
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Aggregation Algorithms in Heterogeneous Tables
Directory of Open Access Journals (Sweden)
Titus Felix FURTUNA
2006-01-01
Full Text Available Heterogeneous tables arise most often in aggregation problems. A solution to this problem is to standardize these tables of figures. In this paper, we propose some methods of aggregation based on hierarchical algorithms.
Designing algorithms using CAD technologies
Directory of Open Access Journals (Sweden)
Alin IORDACHE
2008-01-01
Full Text Available A representative example of a modular eLearning-platform application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, using different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, which are connected to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
Sustainable System for Residual Hazards Management
International Nuclear Information System (INIS)
Kevin M. Kostelnik; James H. Clarke; Jerry L. Harbour
2004-01-01
Hazardous, radioactive and other toxic substances have routinely been generated and subsequently disposed of in the shallow subsurface throughout the world. Many of today's waste management techniques do not eliminate the problem, but rather only concentrate or contain the hazardous contaminants. Residual hazards result from the presence of hazardous and/or contaminated material that remains on-site following active operations or the completion of remedial actions. Residual hazards pose continued risk to humans and the environment and represent a significant and chronic problem that requires continuous long-term management (i.e., >1000 years). To protect human health and safeguard the natural environment, a sustainable system is required for the proper management of residual hazards. A sustainable system for the management of residual hazards will require the integration of engineered, institutional and land-use controls to isolate residual contaminants and thus minimize the associated hazards. Engineered controls are physical modifications to the natural setting and ecosystem, including the site, facility, and/or the residual materials themselves, in order to reduce or eliminate the potential for exposure to contaminants of concern (COCs). Institutional controls are processes, instruments, and mechanisms designed to influence human behavior and activity. System failure can involve hazardous material escaping from the confinement because of system degradation (i.e., chronic or acute degradation) or by external intrusion of the biosphere into the contaminated material because of the loss of institutional control. An ongoing analysis of contemporary and historic sites suggests that the significance of the loss of institutional controls is a critical pathway because decisions made during the operations/remedial action phase, as well as decisions made throughout the residual hazards management period, are key to the long-term success of the prescribed system. In fact
Directory of Open Access Journals (Sweden)
Hee Han
2018-03-01
Full Text Available An important task in forest residue recovery operations is to select the most cost-efficient feedstock logistics system for a given distribution of residue piles, road access, and available machinery. Notable considerations include inaccessibility of treatment units to large chip vans and frequent, long-distance mobilization of forestry equipment required to process dispersed residues. In this study, we present optimized biomass feedstock logistics on a tree-shaped road network that take into account the following options: (1) grinding residues at the site of treatment and forwarding ground residues either directly to the bioenergy facility or to a concentration yard where they are transshipped to large chip vans, (2) forwarding residues to a concentration yard where they are stored and ground directly into chip vans, and (3) forwarding residues to a nearby grinder location and forwarding the ground materials. A mixed-integer programming model coupled with a network algorithm was developed to solve the problem. The model was applied to recovery operations on a study site in Colorado, USA, and the optimal solution reduced the cost of logistics up to 11% compared to the conventional system. This is an important result because this cost reduction propagates downstream through the biomass supply chain, reducing production costs for bioenergy and bioproducts.
A filtered backprojection algorithm with characteristics of the iterative landweber algorithm
L. Zeng, Gengsheng
2012-01-01
Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
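The Landweber scheme that the window function is designed to emulate can be sketched as follows (a minimal illustration with a made-up 2x2 system, not the note's tomographic projector):

```python
import numpy as np

# Minimal Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k),
# i.e. gradient descent on ||Ax - b||^2 / 2. The system matrix here is
# a toy stand-in, not a tomographic projection operator.

def landweber(A, b, tau, iters):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + tau * A.T @ (b - A @ x)   # one relaxed gradient step
    return x

# Convergence requires 0 < tau < 2 / sigma_max(A)^2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = landweber(A, b, tau=0.4, iters=200)   # exact solution is [2, 3]
```

Early stopping of this iteration acts as regularization, which is the noise behavior the FBP window function is constructed to reproduce analytically.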
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
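For reference, the usual predictive SSA that the retrodictive algorithm complements looks like this (a minimal single-species decay sketch; the paper's retrodictive variant, which infers initial states backward from the final state, is not reproduced here):

```python
import random

# Standard (predictive) Gillespie stochastic simulation algorithm for a
# single decay reaction X -> 0 with rate constant c. The retrodictive
# algorithm of the paper runs the corresponding inference in the reverse
# time direction; this sketch shows only the familiar forward version.

def gillespie_decay(x0, c, t_end, rng=random.Random(0)):
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while x > 0:
        a = c * x                      # total propensity of the decay channel
        t += rng.expovariate(a)        # exponentially distributed waiting time
        if t > t_end:
            break
        x -= 1                         # fire the decay reaction
        trajectory.append((t, x))
    return trajectory

traj = gillespie_decay(x0=50, c=0.1, t_end=100.0)
```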
Autonomous algorithms for image restoration
Griniasty , Meir
1994-01-01
We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean-field approach known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time-consuming than its simulated-annealing alternative. We apply the theory to several architectures and compare their performances.
Algorithms and Public Service Media
Sørensen, Jannick Kirk; Hutchinson, Jonathon
2018-01-01
When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond the ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a pra...
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. To avoid cost-intensive derivatives, we use the Landweber-Kaczmarz iteration and, to improve the overall results, additional sparsity constraints.
Algorithm for programming function generators
International Nuclear Information System (INIS)
Bozoki, E.
1981-01-01
The present paper deals with a mathematical problem encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional (hardware-imposed) restrictions are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described.
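A greedy segment-growing scheme of this kind can be sketched as follows (illustrative only: the hardware-imposed restrictions of the actual algorithm are omitted, and the chord-within-tolerance acceptance rule is an assumption, not the paper's criterion):

```python
# Approximate a sampled function (xs, ys) by straight segments so that
# every intermediate sample lies within `tol` of its segment's chord.
# Greedy strategy: grow each segment as far as the tolerance allows.

def piecewise_linear(xs, ys, tol):
    """Return the indices of the segment breakpoints."""
    breaks = [0]
    i = 0
    while i < len(xs) - 1:
        j = i + 1
        # Extend the segment while the chord from i to j+1 stays within tol.
        while j + 1 < len(xs):
            k = j + 1
            slope = (ys[k] - ys[i]) / (xs[k] - xs[i])
            ok = all(abs(ys[m] - (ys[i] + slope * (xs[m] - xs[i]))) <= tol
                     for m in range(i + 1, k))
            if not ok:
                break
            j = k
        breaks.append(j)
        i = j
    return breaks

xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0, 1, 2, 3, 2, 1, 0]          # a triangle wave: exactly two segments
breaks = piecewise_linear(xs, ys, tol=1e-9)
```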
Neutronic rebalance algorithms for SIMMER
International Nuclear Information System (INIS)
Soran, P.D.
1976-05-01
Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques.
Euclidean shortest paths exact or approximate algorithms
Li, Fajie
2014-01-01
This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.
A Global algorithm for linear radiosity
Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier
1993-01-01
A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.
Cascade Error Projection: A New Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles limited precision and circuit noise better than existing algorithms.
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Exploiting the complementary properties of Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between these two distributions, which again offers a potential balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, as supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
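The alternating Gaussian/Cauchy sampling step can be caricatured in a single-niche setting (a sketch under strong simplifications: no clustering, no dynamic cluster sizing, no local search; the elitist survival rule is an assumption for illustration):

```python
import math
import random

# Single-niche caricature of the EDA step: fit a mean/std to the better
# half of the population, then sample offspring from a Gaussian or a
# heavier-tailed Cauchy distribution on alternating generations.

def eda_step(pop, fitness, rng, use_cauchy):
    pop = sorted(pop, key=fitness)               # minimization
    elite = pop[:len(pop) // 2]
    mu = sum(elite) / len(elite)
    sigma = max(1e-12, (sum((x - mu) ** 2 for x in elite) / len(elite)) ** 0.5)
    if use_cauchy:
        # Inverse-CDF sampling of a Cauchy: heavier tails -> more exploration.
        offspring = [mu + sigma * math.tan(math.pi * (rng.random() - 0.5))
                     for _ in pop]
    else:
        offspring = [rng.gauss(mu, sigma) for _ in pop]
    # Elitist survival: keep the best individuals among parents + offspring.
    merged = sorted(pop + offspring, key=fitness)
    return merged[:len(pop)]

rng = random.Random(1)
f = lambda x: (x * x - 1.0) ** 2          # bimodal: global minima at x = +/-1
pop = [rng.uniform(-4, 4) for _ in range(30)]
for gen in range(60):
    pop = eda_step(pop, f, rng, use_cauchy=(gen % 2 == 1))
best = min(pop, key=f)
```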
Efficient RNA structure comparison algorithms.
Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason
2017-12-01
The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan). The website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
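The suffix-array idea behind the fast substructure search can be sketched on a plain string (the relative-addressing format is abstracted away; materializing the suffix list costs O(n^2) space and is for clarity only, unlike a real suffix-array binary search):

```python
from bisect import bisect_left

# All suffixes sharing a common prefix form one contiguous interval in the
# sorted suffix array, so substructure occurrences can be counted by two
# binary searches. Dot-bracket-like strings stand in for the paper's format.

def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

def count_occurrences(s, sa, pattern):
    suffixes = [s[i:] for i in sa]              # materialized for clarity
    lo = bisect_left(suffixes, pattern)
    hi = bisect_left(suffixes, pattern + "\uffff")  # end of the interval
    return hi - lo

s = "((..))((.))"
sa = suffix_array(s)
hits = count_occurrences(s, sa, "((")           # "((" occurs at positions 0 and 6
```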
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2016-01-01
A multilevel optimization strategy is adopted using a customized genetic algorithm developed for optimizing cellular scanning strategy for selective laser melting, with an objective of reducing residual stresses and deformations. The resulting thermo-mechanically optimized cellular scanning strategies … a calibrated, fast, multiscale thermal model coupled with a 3D finite element mechanical model is used to simulate residual stress formation and deformations during selective laser melting. The resulting reduction in thermal model computation time allows evolutionary algorithm-based optimization of the process …
Pesticide residues in birds and mammals
Stickel, L.F.; Edwards, C.A.
1973-01-01
SUMMARY: Residues of organochlorine pesticides and their breakdown products are present in the tissues of essentially all wild birds throughout the world. These chemicals accumulate in fat from a relatively small environmental exposure. DDE and dieldrin are most prevalent. Others, such as heptachlor epoxide, chlordane, endrin, and benzene hexachloride also occur, the quantities and kinds generally reflecting local or regional use. Accumulation may be sufficient to kill animals following applications for pest control. This has occurred in several large-scale programmes in the United States. Mortality has also resulted from unintentional leakage of chemical from commercial establishments. Residues may persist in the environment for many years, exposing successive generations of animals. In general, birds that eat other birds, or fish, have higher residues than those that eat seeds and vegetation. The kinetic processes of absorption, metabolism, storage, and output differ according to both kind of chemical and species of animal. When exposure is low and continuous, a balance between intake and excretion may be achieved. Residues reach a balance at an approximate animal body equilibrium or plateau; the storage is generally proportional to dose. Experiments with chickens show that dieldrin and heptachlor epoxide have the greatest propensity for storage, endrin next, then DDT, then lindane. The storage of DDT was complicated by its metabolism to DDE and DDD, but other studies show that DDE has a much greater propensity for storage than either DDD or DDT. Methoxychlor has little cumulative capacity in birds. Residues in eggs reflect and parallel those in the parent bird during accumulation, equilibrium, and decline when dosage is discontinued. Residues with the greatest propensity for storage are also lost most slowly. Rate of loss of residues can be modified by dietary components and is speeded by weight loss of the animal. Under sublethal conditions of continuous
Efficient predictive algorithms for image compression
Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla
2017-01-01
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...
Disposal of Rocky Flats residues as waste
International Nuclear Information System (INIS)
Dustin, D.F.; Sendelweck, V.S.
1993-01-01
Work is underway at the Rocky Flats Plant to evaluate alternatives for the removal of a large inventory of plutonium-contaminated residues from the plant. One alternative under consideration is to package the residues as transuranic wastes for ultimate shipment to the Waste Isolation Pilot Plant. Current waste acceptance criteria and transportation regulations require that approximately 1000 cubic yards of residues be repackaged to produce over 20,000 cubic yards of WIPP certified waste. The major regulatory drivers leading to this increase in waste volume are the fissile gram equivalent, surface radiation dose rate, and thermal power limits. In the interest of waste minimization, analyses have been conducted to determine, for each residue type, the controlling criterion leading to the volume increase, the impact of relaxing that criterion on subsequent waste volume, and the means by which rules changes may be implemented. The results of this study have identified the most appropriate changes to be proposed in regulatory requirements in order to minimize the costs of disposing of Rocky Flats residues as transuranic wastes
Reclamation of plutonium from pyrochemical processing residues
International Nuclear Information System (INIS)
Gray, L.W.; Gray, J.H.; Holcomb, H.P.; Chostner, D.F.
1987-04-01
Savannah River Laboratory (SRL), Savannah River Plant (SRP), and Rocky Flats Plant (RFP) have jointly developed a process to recover plutonium from molten salt extraction residues. These NaCl, KCl, and MgCl2 residues, which are generated in the pyrochemical extraction of 241Am from aged plutonium metal, contain up to 25 wt % dissolved plutonium and up to 2 wt % americium. The overall objective was to develop a process to convert these residues to a pure plutonium metal product and discardable waste. To meet this objective, a combination of pyrochemical and aqueous unit operations was used. The first step was to scrub the salt residue with a molten metal (aluminum and magnesium) to form a heterogeneous ''scrub alloy'' containing nominally 25 wt % plutonium. This unit operation, performed at RFP, effectively separated the actinides from the bulk of the chloride salts. After packaging in aluminum cans, the ''scrub alloy'' was dissolved in a nitric acid - hydrofluoric acid - mercuric nitrate solution at SRP. Residual chloride was separated from the dissolver solution by precipitation with Hg2(NO3)2 followed by centrifuging. Plutonium was then separated from the aluminum, americium, and magnesium using the Purex solvent extraction system. The 241Am was diverted to the waste tank farm, but could be recovered if desired.
Rare Earth Element Phases in Bauxite Residue
Directory of Open Access Journals (Sweden)
Johannes Vind
2018-02-01
Full Text Available The purpose of the present work was to provide mineralogical insight into the rare earth element (REE) phases in bauxite residue to improve REE recovery technologies. Experimental work was performed by electron probe microanalysis with energy-dispersive as well as wavelength-dispersive spectroscopy and by transmission electron microscopy. REEs are found as discrete mineral particles in bauxite residue. Their sizes range from <1 μm to about 40 μm. In bauxite residue, the most abundant REE-bearing phases are light REE (LREE) ferrotitanates that form a solid solution between the phases with major compositions (REE,Ca,Na)(Ti,Fe)O3 and (Ca,Na)(Ti,Fe)O3. These are secondary phases formed during the Bayer process by an in-situ transformation of the precursor bauxite LREE phases. Compared to natural systems, the indicated solid solution resembles the loparite-perovskite series. LREE particles often have a calcium ferrotitanate shell surrounding them that probably hinders their solubility. Minor amounts of LREE carbonate and phosphate minerals as well as manganese-associated LREE phases are also present in bauxite residue. Heavy REEs occur in the same form as in bauxites, namely as yttrium phosphates. These results show that the Bayer process has an impact on the initial REE mineralogy contained in bauxite. Bauxite residue as well as selected bauxites are potentially good sources of REEs.
Mobility of organic carbon from incineration residues
International Nuclear Information System (INIS)
Ecke, Holger; Svensson, Malin
2008-01-01
Dissolved organic carbon (DOC) may affect the transport of pollutants from incineration residues when landfilled or used in geotechnical construction. The leaching of DOC from municipal solid waste incineration (MSWI) bottom ash and air pollution control (APC) residue from the incineration of waste wood was investigated. Factors affecting the mobility of DOC were studied in a reduced 2^(6-1) factorial design. Controlled factors were treatment with ultrasonic radiation, full carbonation (addition of CO2 until the pH was stable for 2.5 h), liquid-to-solid (L/S) ratio, pH, leaching temperature and time. Full carbonation, pH and the L/S ratio were the main factors controlling the mobility of DOC in the bottom ash. Approximately 60 weight-% of the total organic carbon (TOC) in the bottom ash was available for leaching in aqueous solutions. The L/S ratio and pH mainly controlled the mobilization of DOC from the APC residue. About 93 weight-% of the TOC in the APC residue was, however, not mobilized at all, which might be due to a high content of elemental carbon. Using the European standard EN 13137 for determination of TOC in MSWI residues is therefore inappropriate, as the results might be biased by elemental carbon. It is recommended to develop a TOC method distinguishing between organic and elemental carbon.
[Migrants from disposable gloves and residual acrylonitrile].
Wakui, C; Kawamura, Y; Maitani, T
2001-10-01
Disposable gloves made from polyvinyl chloride with and without di(2-ethylhexyl) phthalate (PVC-DEHP, PVC-NP), polyethylene (PE), natural rubber (NR) and nitrile-butadiene rubber (NBR) were investigated with respect to evaporation residue, migrated metals, migrants and residual acrylonitrile. The evaporation residue found in n-heptane was 870-1,300 ppm from PVC-DEHP and PVC-NP, which was due to the plasticizers. Most of the PE gloves had low evaporation residue levels and migrants, except for the glove designated as antibacterial, which released copper and zinc into 4% acetic acid. For the NR and NBR gloves, the evaporation residue found in 4% acetic acid was 29-180 ppm. They also released over 10 ppm of calcium and 6 ppm of zinc into 4% acetic acid, and 1.68-8.37 ppm of zinc di-ethyldithiocarbamate and zinc di-n-butyldithiocarbamate used as vulcanization accelerators into n-heptane. The acrylonitrile content was 0.40-0.94 ppm in NBR gloves.
New applications of partial residual methodology
International Nuclear Information System (INIS)
Uslu, V.R.
1999-12-01
The formulation of a problem of interest in the framework of a statistical analysis starts with collecting the data, choosing a model, and making certain assumptions, as described in the basic paradigm by Box (1980). This stage is called model building. Then, in the estimation stage, the formulation of the problem is treated as if it were true in order to obtain estimates and to make tests and inferences. In the final stage, called diagnostic checking, one checks whether there are disagreements between the data and the fitted model by using diagnostic measures and diagnostic plots. It is well known that statistical methods perform best when all assumptions related to the methods are satisfied; in practice, however, this ideal case is rarely achieved. Diagnostics, and in particular diagnostic plots, are therefore becoming important because they provide an immediate assessment. Partial residual plots, the main interest of the present study, play the major role among the diagnostic plots in multiple regression analysis. In the statistical literature it is accepted that partial residual plots are more useful than ordinary residual plots in detecting outliers and nonconstant variance, and especially in discovering curvature. In this study we consider the partial residual methodology in statistical methods beyond multiple regression. We show that, for the same purposes as in multiple regression, partial residual plots can be used in autoregressive time series models, transfer function models, linear mixed models and ridge regression. (author)
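The partial residual construction itself is standard: for predictor x_j in a fitted linear model, the partial residuals are the ordinary residuals plus the fitted contribution b_j x_j, plotted against x_j to reveal curvature. A minimal sketch (the simulated data are invented for illustration, not from the thesis):

```python
import numpy as np

# Simulate data where x2 enters the true model nonlinearly, fit a plain
# linear model, and form the partial residuals for x2. Plotted against x2,
# they would reveal the quadratic shape the linear fit misses.

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + x2 ** 2 + 0.1 * rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
partial_x2 = resid + beta[2] * x2   # partial residuals for predictor x2
```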
Residual gravimetric method to measure nebulizer output.
Vecellio None, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice
2004-01-01
The aim of this study was to assess a residual gravimetric method, based on weighing dry filters, to measure the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurement of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output by the residual gravimetric method was calculated by weighing the filters both before and after aerosol collection and filter drying, corrected by the proportion of drug contained in total solute mass. Output by the electrochemical, spectrophotometric, and HPLC methods was determined by assaying the drug extraction filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and the assay methods (y axis) in terms of drug mass output (y = 1.00x - 0.02, r^2 = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
Incorporation feasibility of leather residues in bricks
Energy Technology Data Exchange (ETDEWEB)
Aguiar, J.B. [Minho Univ. (Portugal). Civil Engineering Dept.; Valente, A.; Pires, M.J. [Inst. of Development and Innovation Technology of Minho, Braga (Portugal); Tavares, T. [Biological Engineering Dept., Univ. of Minho, Braga (Portugal)
2002-07-01
The footwear industry has strips of leather as one of its by-products. These leather residues, due to their high chromium content, can be regarded as a threat to the environment, particularly if no care is taken with their disposal. By incorporating the residues in ceramic products after trituration, it is possible to neutralise the potential toxicity of the chromium. In a laboratory study we produced prismatic bricks using clay from the region and incorporating 1, 3 and 5% (by mass) of leather residues. This corresponds to about 20, 60 and 100% (by apparent volume). The moulds were filled with paste and, in order to achieve some compactness, the ceramic paste was compressed with a spatula. The bricks were then dried and fired. They were tested in flexure, compression and leaching. The results showed that the toxicity of the chromium disappeared in the bricks. The mechanical tests showed a decrease in strength for the specimens with leather residue: the compressive strength decreases by about 22% for 1% incorporation of leather residue. However, as the bricks were lighter and more porous, we can expect them to be better for thermal insulation. (orig.)
Methods of measuring residual stresses in components
International Nuclear Information System (INIS)
Rossini, N.S.; Dassisti, M.; Benyounis, K.Y.; Olabi, A.G.
2012-01-01
Highlights: ► Defines the different methods of measuring residual stresses in manufactured components. ► Comprehensive study of hole drilling, neutron diffraction and other techniques. ► Evaluates the advantages and disadvantages of each method. ► Advises the reader on the appropriate method to use. -- Abstract: Residual stresses occur in many manufactured structures and components. A large number of investigations have been carried out to study this phenomenon and its effect on the mechanical characteristics of these components. Over the years, different methods have been developed to measure residual stress in different types of components in order to obtain reliable assessments. The various specific methods have evolved over several decades and their practical applications have greatly benefited from the development of complementary technologies, notably in material cutting, full-field deformation measurement techniques, numerical methods and computing power. These complementary technologies have stimulated advances not only in measurement accuracy and reliability, but also in range of application; much greater detail in residual stress measurement is now available. This paper classifies the different residual stress measurement methods and provides an overview of some of the recent advances in this area, to help researchers select among destructive, semi-destructive and non-destructive techniques depending on their application and the availability of those techniques. For each method, the scope, physical limitations, advantages and disadvantages are summarized. Finally, the paper indicates some promising directions for future developments.
Drug and chemical residues in domestic animals.
Mussman, H C
1975-02-01
Given the large number of chemical substances that may find their way into the food supply, a system is needed to monitor their presence. The U. S. Department of Agriculture's Meat and Poultry Inspection Program routinely tests for chemical residues in animals coming to slaughter. Pesticides, heavy metals, growth promotants (hormones and hormonelike agents), and antibiotics are included. Samples are taken statistically so that inferences as to national incidence of residues can be drawn. When a problem is identified, a more selective sampling is designed to help follow up on the initial regulatory action. In testing for pesticides, only DDT and dieldrin are found with any frequency and their levels are decreasing; violative residues of any chlorinated hydrocarbon are generally a result of an industrial accident rather than agricultural usage. Analyses for heavy metals have revealed detectable levels of mercury, lead, and others, but none at levels that are considered a health hazard. Of the hormone or hormonelike substances, only diethylstilbestrol has been a residue problem and its future is uncertain. The most extensive monitoring for veterinary drugs is on the antimicrobials, including sulfonamides, streptomycin, and the tetracycline group of antibiotics that constitute the bulk of the violations; their simultaneous use prophylactically and therapeutically has contributed to the problem in certain cases. A strong, well-designed user education program on proper application of pesticides, chemicals, and veterinary drugs appears to be one method of reducing the incidence of unwanted residues.
Itoh, Shoji; Sugihara, Masaaki
2016-01-01
We present a theorem that defines the direction of a preconditioned system for the bi-conjugate gradient (BiCG) method, and we extend it to preconditioned bi-Lanczos-type algorithms. We show that the direction of a preconditioned system is switched by construction and by the settings of the initial shadow residual vector. We analyze and compare the polynomial structures of four preconditioned BiCG algorithms.
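A textbook unpreconditioned BiCG iteration, with the common default choice rt0 = r0 for the initial shadow residual, can be sketched as follows (a generic sketch on a toy nonsymmetric system; the paper's four preconditioned variants are not reproduced):

```python
import numpy as np

# Textbook bi-conjugate gradient (BiCG) for a nonsymmetric system Ax = b.
# The initial shadow residual rt0 (here simply r0) is exactly the setting
# whose choice the paper analyzes for preconditioned variants.

def bicg(A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    rt = r.copy()                 # shadow residual, rt0 = r0
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)  # shadow recurrence uses A^T
        if np.linalg.norm(r) < tol:
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
x = bicg(A, b)                    # exact solution is [0.1, 0.6]
```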
Golden Sine Algorithm: A Novel Math-Inspired Algorithm
Directory of Open Access Journals (Sweden)
TANYILDIZI, E.
2017-05-01
Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function. In the algorithm, random individuals, one for each search agent, are created with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA achieves better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and its faster convergence increases the value of this new method.
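The golden-section narrowing that Gold-SA builds on is the classic golden-section search; the sketch below shows only that narrowing step on a unimodal interval (it is not the full Gold-SA operator, whose sine-driven population update is not reproduced here):

```python
import math

# Classic golden-section search: shrink a bracketing interval by the
# golden ratio each iteration, keeping the subinterval that must contain
# the minimum of a unimodal function.

def golden_section_min(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

xmin = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```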
Algorithms as fetish: Faith and possibility in algorithmic work
Directory of Open Access Journals (Sweden)
Suzanne L Thomas
2018-01-01
Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.
Algebraic Algorithm Design and Local Search
National Research Council Canada - National Science Library
Graham, Robert
1996-01-01
.... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...
77 FR 24671 - Compliance Guide for Residue Prevention and Agency Testing Policy for Residues
2012-04-25
... Hazard Analysis and Critical Control Points (HACCP) inspection system, another important component of the NRP is to provide verification of residue control in HACCP systems. As part of the HACCP regulation... guide, and FSIS finds violative residues, the establishment's HACCP system may be inadequate under 9 CFR...
Improved crop residue cover estimates by coupling spectral indices for residue and moisture
Remote sensing assessment of soil residue cover (fR) and tillage intensity will improve our predictions of the impact of agricultural practices and promote sustainable management. Spectral indices for estimating fR are sensitive to soil and residue water content; therefore, the uncertainty of estima...
The U.S. Food and Drug Administration sets tolerances for veterinary drug residues in muscle, but does not specify which type of muscle should be analyzed. In order to determine if antibiotic residue levels are dependent on muscle type, 7 culled dairy cows were dosed with Penicillin G (Pen G) from ...
Methyl bromide residues in fumigated cocoa beans
International Nuclear Information System (INIS)
Adomako, D.
1975-01-01
The 14C activity in unroasted [14C]-methyl bromide fumigated cocoa beans was used to study the fate and persistence of CH3Br in the stored beans. About 70% of the residues occurred in the shells. Unchanged CH3Br could not be detected, all the sorbed CH3Br having reacted with bean constituents, apparently to form 14C-methylated derivatives and inorganic bromide. No 14C activity was found in the lipid fraction. Roasting decreased the bound (non-volatile) residues, with corresponding changes in the activities and amounts of free sugars and free and protein amino acids. Roasted nibs and shells showed a two-fold increase in the volatile fraction of the 14C residue. This fraction may be related to the volatile aroma compounds formed by Maillard-type reactions. (author)
Residual-strength determination in polymeric materials
Energy Technology Data Exchange (ETDEWEB)
Christensen, R.M.
1981-10-01
Kinetic theory of crack growth is used to predict the residual strength of polymeric materials acted upon by a previous load history. Specifically, the kinetic theory is used to characterize the state of growing damage that occurs under a constant-stress (load) state. The load is removed before failure under creep-rupture conditions, and the residual instantaneous strength is determined from the theory by taking account of the damage accumulation under the preceding constant-load history. The rate of change of residual strength is found to be strongest when the duration of the preceding load history is near the ultimate lifetime under that condition. Physical explanations for this effect are given, as are numerical examples. Also, the theoretical prediction is compared with experimental data.
Management of municipal solid waste incineration residues
International Nuclear Information System (INIS)
Sabbas, T.; Polettini, A.; Pomi, R.; Astrup, T.; Hjelmar, O.; Mostbauer, P.; Cappai, G.; Magel, G.; Salhofer, S.; Speiser, C.; Heuss-Assbichler, S.; Klein, R.; Lechner, P.
2003-01-01
The management of residues from thermal waste treatment is an integral part of waste management systems. The primary goal of managing incineration residues is to prevent any impact on our health or environment caused by unacceptable particulate, gaseous and/or solute emissions. This paper provides insight into the most important measures for putting this requirement into practice. It also offers an overview of the factors and processes affecting these mitigating measures as well as the short- and long-term behavior of residues from thermal waste treatment under different scenarios. General conditions affecting the emission rate of salts and metals are shown as well as factors relevant to mitigating measures or sources of gaseous emissions
Residual strains in girth-welded linepipe
International Nuclear Information System (INIS)
MacEwen, S.R.; Holden, T.M.; Powell, B.M.; Lazor, R.B.
1987-07-01
High resolution neutron diffraction has been used to measure the axial residual strains in and adjacent to a multipass girth weld in a complete section of 914 mm (36 inches) diameter, 16 mm (5/8 inch) wall, linepipe. The experiments were carried out at the NRU reactor, Chalk River using the L3 triple-axis spectrometer. The through-wall distribution of axial residual strain was measured at 0, 4, 8, 20 and 50 mm from the weld centerline; the axial variation was determined 1, 5, 8, and 13 mm from the inside surface of the pipe wall. The results have been compared with strain gauge measurements on the weld surface and with through-wall residual stress distributions determined using the block-layering and removal technique
Residual Defect Density in Random Disks Deposits.
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C
2015-08-03
We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10⁹ particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power-law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.
Determination of Pesticide Residues in Cannabis Smoke
Directory of Open Access Journals (Sweden)
Nicholas Sullivan
2013-01-01
Full Text Available The present study was conducted in order to quantify to what extent cannabis consumers may be exposed to pesticide and other chemical residues through inhaled mainstream cannabis smoke. Three different smoking devices were evaluated in order to provide a generalized data set representative of pesticide exposures possible for medical cannabis users. Three different pesticides, bifenthrin, diazinon, and permethrin, along with the plant growth regulator paclobutrazol, which are readily available to cultivators in commercial products, were investigated in the experiment. Smoke generated from the smoking devices was condensed in tandem chilled gas traps and analyzed with gas chromatography-mass spectrometry (GC-MS). Recoveries of residues were as high as 69.5% depending on the device used and the component investigated, suggesting that the potential of pesticide and chemical residue exposures to cannabis users is substantial and may pose a significant toxicological threat in the absence of adequate regulatory frameworks.
Bioenergy from agricultural residues in Ghana
DEFF Research Database (Denmark)
Thomsen, Sune Tjalfe
and biomethane under Ghanaian conditions. Detailed characterisations of thirteen of the most common agricultural residues in Ghana are presented, enabling estimations of theoretical bioenergy potentials and identifying specific residues for future biorefinery applications. When aiming at residue-based ethanol… to pursue increased implementation of anaerobic digestion in Ghana, as the first bioenergy option, since anaerobic digestion is more flexible than ethanol production with regard to both feedstock and scale of production. If possible, the available manure and municipal liquid waste should be utilised first… A novel model for estimating BMP from compositional data of lignocellulosic biomasses is derived. The model is based on a statistical method not previously used in this area of research, and the best prediction of BMP is: BMP = 347 x_{C+H+R} − 438 x_L + 63 D_A, where x_{C+H+R} is the combined content of cellulose…
Algorithmic randomness and physical entropy
International Nuclear Information System (INIS)
Zurek, W.H.
1989-01-01
Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content, the algorithmic randomness, present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can "decide" on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.
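The abstract's point that algorithmic randomness is hard to compute exactly but easy to estimate is commonly illustrated by using a general-purpose compressor as an upper bound on algorithmic information content. The sketch below is only such an illustration; the function name and the use of zlib are our assumptions, not anything from the paper:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    # Crude upper bound on algorithmic information content:
    # byte length of a losslessly compressed description of the state.
    return len(zlib.compress(bits.encode("ascii"), 9))

random.seed(0)
ordered = "01" * 5000                                            # regular microstate
disordered = "".join(random.choice("01") for _ in range(10000))  # random microstate

assert compressed_size(ordered) < compressed_size(disordered)
```

The regular microstate compresses to a tiny description, while the random one stays close to its Shannon limit, mirroring the relation between algorithmic and ensemble entropies discussed above.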
Optimal Path Choice in Railway Passenger Travel Network Based on Residual Train Capacity
Directory of Open Access Journals (Sweden)
Fei Dou
2014-01-01
Full Text Available Passenger's optimal path choice is a prominent research topic in the field of railway passenger transport organization. As more train types become available, the number of path choices from departure to destination keeps growing, and travelers can easily be overwhelmed when trying to choose a travel plan that satisfies their travel time and cost constraints before departure. In this study, a railway passenger travel network is constructed based on the train timetable. Both the generalized cost function we developed and the residual train capacity form the foundation of the path-searching procedure. The topology of the railway passenger travel network is analyzed based on residual train capacity. Considering the total travel time, the total travel cost, and the total number of passengers, we propose an optimal path-searching algorithm based on residual train capacity in the railway passenger travel network. Finally, the rationale of the railway passenger travel network and the optimal path generation algorithm is verified by a case study.
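One plausible reading of the path-searching step described in this abstract, a shortest generalized-cost search restricted to legs with enough residual capacity, is a capacity-filtered Dijkstra search. The sketch below is that reading only, not the authors' actual algorithm; the graph encoding and all names are assumptions:

```python
import heapq

def best_path(graph, source, target, demand):
    """Dijkstra over a train network, skipping legs whose residual
    capacity cannot carry the travelling group.
    graph: {node: [(neighbor, generalized_cost, residual_capacity), ...]}"""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost, cap in graph.get(u, []):
            if cap < demand:              # leg already full: prune it
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None, float("inf")
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]
```

With a toy network `{"A": [("B", 2.0, 5), ("C", 5.0, 5)], "B": [("C", 2.0, 1)]}`, a single passenger is routed A-B-C at cost 4, while a group of two is forced onto the direct A-C leg because B-C has only one seat left.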
Directory of Open Access Journals (Sweden)
Lizeth Mariel Casarrubias-Torres
2018-01-01
Full Text Available Mid-infrared spectroscopy and chemometric analysis were tested to determine tetracycline residues in cow's milk. Cow's milk samples (n = 30) were spiked with tetracycline, chlortetracycline, and oxytetracycline in the range of 10-400 µg/l. Chemometric models to quantify each of the tetracycline residues were developed by applying Principal Components Regression and Partial Least Squares algorithms. The Soft Independent Modeling of Class Analogy model was used to differentiate between pure milk and milk samples with tetracycline residues. The best models for predicting the levels of these antibiotics were obtained using the Partial Least Squares 1 algorithm (coefficient of determination between 0.997 and 0.999, standard error of calibration from 1.81 to 2.95). The Soft Independent Modeling of Class Analogy model showed well-separated groups, allowing classification of milk samples and milk samples with antibiotics. The obtained results demonstrate the great analytical potential of chemometrics coupled with mid-infrared spectroscopy for the prediction of antibiotics in cow's milk at concentrations of micrograms per litre (µg/l). This technique can be used to verify the safety of milk rapidly and reliably.
DEFF Research Database (Denmark)
Creixell, Pau; Schoof, Erwin M.; Tan, Chris Soon Heng
2012-01-01
It is typically assumed that all amino acid residues are equally likely to mutate or to result from a mutation. Here, by reconstructing ancestral sequences and computing mutational probabilities for all the amino acid residues, we refute this assumption and show extensive inequalities between different residues in terms of their mutational activity. Moreover, we highlight the importance of the genetic code and physico-chemical properties of the amino acid residues as likely causes of these inequalities and uncover serine as a mutational hot spot. Finally, we explore the consequences that these different mutational properties have on phosphorylation site evolution, showing that a higher degree of evolvability exists for phosphorylated threonine and, to a lesser extent, serine in comparison with tyrosine residues. As exemplified by the suppression of serine's mutational activity in phosphorylation sites, our…
Natural selection and algorithmic design of mRNA.
Cohen, Barry; Skiena, Steven
2003-01-01
Messenger RNA (mRNA) sequences serve as templates for proteins according to the triplet code, in which each of the 4³ = 64 different codons (sequences of three consecutive nucleotide bases) in RNA either terminate transcription or map to one of the 20 different amino acids (or residues) which build up proteins. Because there are more codons than residues, there is inherent redundancy in the coding. Certain residues (e.g., tryptophan) have only a single corresponding codon, while other residues (e.g., arginine) have as many as six corresponding codons. This freedom implies that the number of possible RNA sequences coding for a given protein grows exponentially in the length of the protein. Thus nature has wide latitude to select among mRNA sequences which are informationally equivalent, but structurally and energetically divergent. In this paper, we explore how nature takes advantage of this freedom and how to algorithmically design structures more energetically favorable than have been built through natural selection. In particular: (1) Natural Selection--we perform the first large-scale computational experiment comparing the stability of mRNA sequences from a variety of organisms to random synonymous sequences which respect the codon preferences of the organism. This experiment was conducted on over 27,000 sequences from 34 microbial species with 36 genomic structures. We provide evidence that in all genomic structures highly stable sequences are disproportionately abundant, and in 19 of 36 cases highly unstable sequences are disproportionately abundant. This suggests that the stability of mRNA sequences is subject to natural selection. (2) Artificial Selection--motivated by these biological results, we examine the algorithmic problem of designing the most stable and unstable mRNA sequences which code for a target protein. We give a polynomial-time dynamic programming solution to the most stable sequence problem (MSSP), which is asymptotically no more complex
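The claim that the number of synonymous mRNA sequences grows exponentially in protein length follows directly from codon degeneracy: the count is the product of each residue's codon multiplicity. A small illustrative helper (ours, not from the paper; standard genetic code, one-letter residue symbols):

```python
# Codon degeneracy of the standard genetic code, keyed by one-letter residue code.
DEGENERACY = {"W": 1, "M": 1, "C": 2, "D": 2, "E": 2, "F": 2, "H": 2,
              "K": 2, "N": 2, "Q": 2, "Y": 2, "I": 3, "A": 4, "G": 4,
              "P": 4, "T": 4, "V": 4, "L": 6, "R": 6, "S": 6}

def synonymous_count(protein: str) -> int:
    """Number of distinct mRNA sequences coding for the given protein."""
    n = 1
    for residue in protein:
        n *= DEGENERACY[residue]
    return n

assert synonymous_count("W") == 1    # tryptophan: single codon
assert synonymous_count("RR") == 36  # arginine: six codons each
```

A 100-residue protein of average degeneracy around three already admits on the order of 3¹⁰⁰ informationally equivalent sequences, which is the latitude nature exploits.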
Contact-impact algorithms on parallel computers
International Nuclear Information System (INIS)
Zhong Zhihua; Nilsson, Larsgunnar
1994-01-01
Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))
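The three-dimensional bucket sort mentioned in this abstract can be illustrated with a serial spatial-hash sketch; the actual HITA-based, CM-200 parallel implementation is far more involved, and all names below are our own illustrative choices:

```python
from collections import defaultdict

def bucket_sort_nodes(nodes, cell_size):
    """Hash each contact node into a 3-D grid cell ("bucket")."""
    buckets = defaultdict(list)
    for idx, (x, y, z) in enumerate(nodes):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        buckets[key].append(idx)
    return buckets

def candidate_neighbors(buckets, point, cell_size):
    """Candidate contact partners: nodes in the same or adjacent buckets,
    so exhaustive pairwise search is limited to a local territory."""
    x, y, z = point
    cx, cy, cz = int(x // cell_size), int(y // cell_size), int(z // cell_size)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(buckets.get((cx + dx, cy + dy, cz + dz), []))
    return out
```

The bucket pass is linear in the number of contact nodes, which is why sorting nodes into territories before the detailed contact check pays off in explicit finite element codes.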
A review on quantum search algorithms
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has significant advantage in terms of speed over the classical computation. It is evident from the early invented quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Quantum database search of Grover achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
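The quadratic speedup reviewed in this abstract follows from the standard two-dimensional rotation picture of Grover's iteration: with M marked items among N, the success amplitude after k iterations is sin((2k+1)θ) with sin²θ = M/N, so roughly (π/4)√(N/M) iterations suffice. A small numeric sketch of that textbook formula (function names are ours):

```python
import math

def grover_success_prob(N: int, M: int, iterations: int) -> float:
    """Success probability after k Grover iterations on an unsorted
    database of N items containing M marked targets."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * iterations + 1) * theta) ** 2

def optimal_iterations(N: int, M: int) -> int:
    """Iteration count closest to rotating the state onto the targets."""
    theta = math.asin(math.sqrt(M / N))
    return int(math.floor(math.pi / (4 * theta)))

k = optimal_iterations(1024, 1)
assert grover_success_prob(1024, 1, k) > 0.99
```

For N = 1024 and a single target this gives k = 25 queries, against an expected several hundred for classical exhaustive search, which is the quadratic speedup in action.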
Residual generator for cardiovascular anomalies detection
Belkhatir, Zehor
2014-06-01
This paper discusses the possibility of using observer-based approaches for cardiovascular anomalies detection and isolation. We consider a lumped parameter model of the cardiovascular system that can be written in a form of nonlinear state-space representation. We show that residuals that are sensitive to variations in some cardiovascular parameters and to abnormal opening and closure of the valves can be generated. Since the whole state is not easily available for measurement, we propose to associate the residual generator with a robust extended Kalman filter. Numerical results performed on synthetic data are provided.
Lindane residues in fish inhabiting Nigerian rivers
International Nuclear Information System (INIS)
Okereke, G.U.; Dje, Y.
1997-01-01
Analysis for residues of lindane in fish collected from various rivers close to rice agroecosystems showed that the concentrations of lindane ranged from none detectable to 3.4 mg kg⁻¹. Fish from rivers where strict regulation prohibits its use had no detectable lindane residues, while appreciable amounts of lindane were found in fish where such restriction was not enforced, with the variation attributed to the extent of use of lindane in the area of contamination. The investigation confirms that the use of lindane in rice production in Nigeria can cause the contamination of fish in nearby rivers. (author). 16 refs, 2 tab
Fluidised-bed combustion of gasification residue
Energy Technology Data Exchange (ETDEWEB)
Korpela, T.; Kudjoi, A.; Hippinen, I.; Heinolainen, A.; Suominen, M.; Lu Yong [Helsinki Univ. of Technology (Finland). Lab of Energy Economics and Power Plant Engineering
1996-12-01
Partial gasification processes have been presented as possibilities for future power production. In the processes, the solid materials removed from a gasifier (i.e. fly ash and bed material) contain unburnt fuel and the fuel conversion is increased by burning this gasification residue either in an atmospheric or a pressurised fluidised-bed. In this project, which is a part of European JOULE 2 EXTENSION research programme, the main research objectives are the behaviour of calcium and sulphur compounds in solids and the emissions of sulphur dioxide and nitrogen oxides (NOₓ and N₂O) in pressurised fluidised-bed combustion of gasification residues. (author)
Fate of leptophos residues in milk products
International Nuclear Information System (INIS)
Zayed, S.M.A.D.; Mohammed, S.I.
1981-01-01
The fate of leptophos residues in various milk products was studied using ¹⁴C-phenyl-labelled leptophos. Milk products were prepared from milk fortified with the radioactive insecticide by methods simulating those used in industry. The highest leptophos level was found in butter and the lowest in skim milk and whey. Analysis of the radioactive residues in all products showed the presence of leptophos alone. A trace of the oxon could be detected in whey. The results obtained in this investigation indicated that processing of milk did not affect the nature of leptophos to any appreciable extent. (author)
Residual stress in Ni-W electrodeposits
DEFF Research Database (Denmark)
Mizushima, Io; Tang, Peter Torben; Hansen, Hans Nørgaard
2006-01-01
In the present work, the residual stress in Ni–W layers electrodeposited from electrolytes based on NiSO₄ and Na₂WO₄ is investigated. Citrate, glycine and triethanolamine were used as complexing agents, enabling complex formation between the nickel ion and tungstate. The results show that the type of complexing agent and the current efficiency have an influence on the residual stress. In all cases, an increase in tensile stress in the deposit with time after deposition was observed. Pulse plating could improve the stress level for the electrolyte containing equal amounts of citrate…
Residual radioactivity of treated green diamonds.
Cassette, Philippe; Notari, Franck; Lépy, Marie-Christine; Caplan, Candice; Pierre, Sylvie; Hainschwang, Thomas; Fritsch, Emmanuel
2017-08-01
Treated green diamonds can show residual radioactivity, generally due to immersion in radium salts. We report various activity measurements on two radioactive diamonds. The activity was characterized by alpha and gamma ray spectrometry, and the radon emanation was measured by alpha counting of a frozen source. Even when no residual radium contamination can be identified, measurable alpha and high-energy beta emissions could be detected. The potential health impact of radioactive diamonds and their status with regard to the regulatory policy for radioactive products are discussed. Copyright © 2017. Published by Elsevier Ltd.
Residual water treatment for gamma radiation
International Nuclear Information System (INIS)
Mendez, L.
1990-01-01
The treatment of residual water by means of gamma radiation for its use in agricultural irrigation is evaluated. Measurements of physical, chemical, biological and microbiological contamination indicators were performed. To this end, samples from the residual-water treatment center of San Juan de Miraflores were irradiated up to a dose of 52.5 kGy. The study concludes that gamma radiation is effective in removing parasites and bacteria, but not in removing organic and inorganic matter. (author). 15 refs., 3 tabs., 4 figs
Some problems of residual activity measurements
International Nuclear Information System (INIS)
Katrik, P.; Mustafin, E.; Strasik, I.; Pavlovic, M.
2013-01-01
As a preparatory work for constructing the Facility for Antiproton and Ion Research (FAIR) at GSI Darmstadt, samples of copper were irradiated by a 500 MeV/u ²³⁸U ion beam and investigated by gamma-ray spectroscopy. The nuclides that contribute dominantly to the residual activity have been identified and their contributions have been quantified by two different methods: from the whole-target gamma spectra and by integration of depth-profiles of residual activity of individual nuclides. Results obtained by these two methods are compared and discussed in this paper. (authors)
Color center formation in plutonium electrorefining residues
International Nuclear Information System (INIS)
Morris, D.E.; Eller, P.G.; Hobart, D.E.; Eastman, M.P.; McCurry, L.E.
1989-01-01
Plutonium electrorefining residues containing Pu(III) in KCl exhibit dramatic reversible, light-induced color changes. Similar color changes were observed in Ln-doped (Ln = La, Nd, Gd, and Lu) and undoped KCl samples which were subjected to intense gamma irradiation. Diffuse reflectance electronic and electron paramagnetic resonance spectroscopies were used to show conclusively that Pu(III) is present in both the bleached and unbleached plutonium-bearing residues and the spectacular color changes are the result of color center formation and alteration by visible light. (orig.)
Residual and Destroyed Accessible Information after Measurements
Han, Rui; Leuchs, Gerd; Grassl, Markus
2018-04-01
When quantum states are used to send classical information, the receiver performs a measurement on the signal states. The amount of information extracted is often not optimal due to the receiver's measurement scheme and experimental apparatus. For quantum nondemolition measurements, there is potentially some residual information in the postmeasurement state, while part of the information has been extracted and the rest is destroyed. Here, we propose a framework to characterize a quantum measurement by how much information it extracts and destroys, and how much information it leaves in the residual postmeasurement state. The concept is illustrated for several receivers discriminating coherent states.
Residual dust charges in discharge afterglow
International Nuclear Information System (INIS)
Couëdel, L.; Mikikian, M.; Boufendi, L.; Samarian, A. A.
2006-01-01
An on-ground measurement of dust-particle residual charges in the afterglow of a dusty plasma was performed in a rf discharge. An upward thermophoretic force was used to balance the gravitational force. It was found that positively charged, negatively charged, and neutral dust particles coexisted for more than 1 min after the discharge was switched off. The mean residual charge for 200-nm-radius particles was measured. The dust particle mean charge is about -5e at a pressure of 1.2 mbar and about -3e at a pressure of 0.4 mbar
Rigid Residue Scan Simulations Systematically Reveal Residue Entropic Roles in Protein Allostery.
Directory of Open Access Journals (Sweden)
Robert Kalescky
2016-04-01
Full Text Available Intra-protein information is transmitted over distances via allosteric processes. This ubiquitous protein process allows for protein function changes due to ligand binding events. Understanding protein allostery is essential to understanding protein functions. In this study, allostery in the second PDZ domain (PDZ2 in the human PTP1E protein is examined as a model system to advance a recently developed rigid residue scan method, combined with configurational entropy calculation and principal component analysis. The contributions from individual residues to whole-protein dynamics and allostery were systematically assessed via rigid body simulations of both unbound and ligand-bound states of the protein. The entropic contributions of individual residues to whole-protein dynamics were evaluated based on covariance-based correlation analysis of all simulations. The changes in overall protein entropy when individual residues are held rigid support that the rigidity/flexibility equilibrium in protein structure is governed by Le Châtelier's principle of chemical equilibrium. Key residues of PDZ2 allostery were identified, in good agreement with NMR studies of the same protein bound to the same peptide. On the other hand, the change of entropic contribution from each residue upon perturbation revealed intrinsic differences among all the residues. The quasi-harmonic and principal component analyses of simulations without rigid residue perturbation showed a coherent allosteric mode from unbound and bound states, respectively. The projection of simulations with rigid residue perturbation onto coherent allosteric modes demonstrated the intrinsic shifting of ensemble distributions, supporting the population-shift theory of protein allostery. Overall, the study presented here provides a robust and systematic approach to estimate the contribution of individual residue internal motion to overall protein dynamics and allostery.
40 CFR 180.564 - Indoxacarb; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Indoxacarb; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.564 Indoxacarb; tolerances for residues. (a) General. Tolerances are established for residues of...
Computational geometry algorithms and applications
de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried
1997-01-01
Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...
The Chandra Source Catalog: Algorithms
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Defining an essence of structure determining residue contacts in proteins.
Sathyapriya, R; Duarte, Jose M; Stehr, Henning; Filippis, Ioannis; Lappe, Michael
2009-12-01
The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this "structural essence" has remained elusive so far: no algorithmic strategy has been devised to date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Cα RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts-such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed "cone-peeling", that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Cα RMSD with as little as 8% of the native contacts (Cα-Cα and Cβ-Cβ). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This "structural essence" opens new avenues in the
Chen, Peng
2013-07-23
Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from the set of 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built as an ensemble of these classifiers working in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
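A minimal sketch of the voting ensemble of per-feature IBk-style (nearest-neighbour) classifiers; the two toy feature sets and their values are invented for illustration and are not the 132 AAindex1-derived features:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """IBk-style classifier: majority label among the k nearest training
    points (Euclidean distance on the feature vector)."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def ensemble_vote(feature_sets, query_sets, k=3):
    """Each per-feature classifier votes and the majority label wins,
    mirroring the paper's voting ensemble (features here are invented)."""
    votes = [knn_predict(train, q, k) for train, q in zip(feature_sets, query_sets)]
    return Counter(votes).most_common(1)[0][0]

# Two toy physicochemical features (values are made up).
f1 = [((0.1,), "hotspot"), ((0.2,), "hotspot"), ((0.9,), "other"), ((0.8,), "other")]
f2 = [((1.0,), "hotspot"), ((0.9,), "hotspot"), ((0.1,), "other"), ((0.2,), "other")]
print(ensemble_vote([f1, f2], [(0.15,), (0.95,)], k=3))  # → hotspot
```

Because each classifier sees only one feature, the ensemble can weight features differently from query to query simply through which classifiers happen to agree.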
Chen, Peng; Li, Jinyan; Limsoon, Wong; Kuwahara, Hiroyuki; Huang, Jianhua Z.; Gao, Xin
2013-01-01
Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from the set of 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built as an ensemble of these classifiers working in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
Minimal residual cone-beam reconstruction with attenuation correction in SPECT
International Nuclear Information System (INIS)
La, Valerie; Grangeat, Pierre
1998-01-01
This paper presents an iterative method based on the minimal residual algorithm for tomographic attenuation compensated reconstruction from attenuated cone-beam projections given the attenuation distribution. Unlike conjugate-gradient based reconstruction techniques, the proposed minimal residual based algorithm solves directly a quasisymmetric linear system, which is a preconditioned system. Thus it avoids the use of normal equations, which improves the convergence rate. Two main contributions are introduced. First, a regularization method is derived for quasisymmetric problems, based on a Tikhonov-Phillips regularization applied to the factorization of the symmetric part of the system matrix. This regularization is made spatially adaptive to avoid smoothing the region of interest. Second, our existing reconstruction algorithm for attenuation correction in parallel-beam geometry is extended to cone-beam geometry. A circular orbit is considered. Two preconditioning operators are proposed: the first one is Grangeat's inversion formula and the second one is Feldkamp's inversion formula. Experimental results obtained on simulated data are presented and the shadow zone effect on attenuated data is illustrated. (author)
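The minimal residual idea, working on Ax = b directly instead of the normal equations, can be illustrated on a tiny quasisymmetric system. This is a generic minimal residual (MR) iteration sketch, not the authors' preconditioned cone-beam solver:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def minimal_residual(A, b, iters=200):
    """Minimal residual iteration for Ax = b: step along the residual r with
    the length alpha = <r, Ar> / <Ar, Ar> that minimises ||b - A x_new||.
    Converges when the symmetric part of A is positive definite."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        Ar = matvec(A, r)
        denom = sum(v * v for v in Ar)
        if denom == 0.0:          # residual is zero: converged exactly
            break
        alpha = sum(ri * vi for ri, vi in zip(r, Ar)) / denom
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

# Small quasisymmetric test system (symmetric part is positive definite).
A = [[4.0, 1.0], [-1.0, 3.0]]
b = [1.0, 2.0]
x = minimal_residual(A, b)
```

Avoiding the normal equations A^T A x = A^T b keeps the condition number of the solved system low, which is the convergence-rate advantage the abstract refers to.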
Chen, Peng; Li, Jinyan; Wong, Limsoon; Kuwahara, Hiroyuki; Huang, Jianhua Z; Gao, Xin
2013-08-01
Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from the set of 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built as an ensemble of these classifiers working in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. Copyright © 2013 Wiley Periodicals, Inc.
Quantum walks and search algorithms
Portugal, Renato
2013-01-01
This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs such as lines, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; and quantum walks on generic graphs, describing methods to calculate the limiting d...
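Grover's algorithm, the book's starting point, can be simulated classically by tracking the amplitude vector through the oracle (phase flip on the marked item) and the diffusion step (inversion about the mean). The database size and marked index below are arbitrary:

```python
import math

def grover(n_items, marked, iterations):
    """Classical simulation of Grover's algorithm on an amplitude vector."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: flip marked amplitude
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]      # diffusion: invert about mean
    return [a * a for a in amp]                  # measurement probabilities

N = 16
best = round(math.pi / 4 * math.sqrt(N))         # near-optimal iteration count
probs = grover(N, marked=3, iterations=best)
print(max(range(N), key=probs.__getitem__))      # → 3
```

After about (π/4)√N iterations the marked item's probability is close to 1, the quadratic speed-up over the ~N/2 classical lookups.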
Gossip algorithms in quantum networks
International Nuclear Information System (INIS)
Siomau, Michael
2017-01-01
"Gossip algorithms" is a common term for protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the dissemination of quantum information to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.
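For contrast with the quantum setting, a classical push-style gossip round can be sketched on a deliberately suboptimal topology; the ring network below is an invented example:

```python
import random

def push_gossip(adjacency, source, rng):
    """Classical push gossip: each round, every informed node tells one
    uniformly random neighbour. Returns rounds until all are informed."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adjacency):
        for node in list(informed):              # snapshot: only nodes informed
            informed.add(rng.choice(adjacency[node]))  # at round start push
        rounds += 1
    return rounds

# Ring of 8 nodes -- a poor topology for dissemination (diameter N/2).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(push_gossip(ring, source=0, rng=random.Random(1)))
```

On a ring, information can spread by at most one node per direction per round, which is the kind of structural bottleneck the abstract's network update is meant to remove.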
Universal algorithm of time sharing
International Nuclear Information System (INIS)
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
A timesharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is given on the basis of statistical data gathered during the timesharing process. The algorithm includes an optimal swap-out procedure for replacing jobs in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with the algorithm's implementation on the BESM-6 computer at JINR is discussed.
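A toy scheduler capturing two of the stated ideas, a dynamic priority derived from CPU time already used and background jobs that run only when no interactive job is ready; the quantum, job mix, and priority rule are invented for illustration and are not the BESM-6 algorithm:

```python
def timeshare(jobs, quantum=2):
    """Round-robin with a simple dynamic priority: jobs that have used
    less CPU so far run first; background jobs run only when no
    interactive job is ready. Returns (name, completion_time) pairs."""
    clock, finished = 0, []
    while jobs:
        # prefer interactive jobs; fall back to background ones
        ready = [j for j in jobs if not j["background"]] or jobs
        job = min(ready, key=lambda j: j["used"])     # dynamic priority
        run = min(quantum, job["remaining"])
        clock += run
        job["used"] += run
        job["remaining"] -= run
        if job["remaining"] == 0:
            finished.append((job["name"], clock))
            jobs.remove(job)
    return finished

jobs = [{"name": "edit",  "remaining": 2, "used": 0, "background": False},
        {"name": "batch", "remaining": 4, "used": 0, "background": True},
        {"name": "calc",  "remaining": 3, "used": 0, "background": False}]
print(timeshare(jobs))  # → [('edit', 2), ('calc', 5), ('batch', 9)]
```

The short interactive job finishes first and the background job fills the otherwise idle processor time, matching the fast-response guarantee described above.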
Algorithms for Decision Tree Construction
Chikalov, Igor
2011-01-01
The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such algorithms may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved NP-hardness of the DT problem, that is, constructing a tree with minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is NP-hard. © Springer-Verlag Berlin Heidelberg 2011.
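The separation heuristic mentioned above, choosing the attribute that splits the object set most evenly, can be sketched as follows; the objects and binary attributes are invented:

```python
def most_even_split(objects, attributes):
    """Separation heuristic from the early decision-tree literature:
    pick the binary attribute dividing the objects most evenly."""
    def imbalance(attr):
        yes = sum(1 for obj in objects if obj[attr])
        return abs(2 * yes - len(objects))   # |#true - #false|
    return min(attributes, key=imbalance)

# Toy objects over three binary attributes.
objs = [{"a": 1, "b": 1, "c": 0},
        {"a": 1, "b": 0, "c": 0},
        {"a": 1, "b": 1, "c": 1},
        {"a": 0, "b": 0, "c": 0}]
print(most_even_split(objs, ["a", "b", "c"]))  # → b
```

Attribute "b" splits the four objects 2/2 while "a" and "c" split them 3/1, so the heuristic selects "b" for the root; Garey and Graham's result shows this greedy choice can still produce trees far from optimal.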
Scalable algorithms for contact problems
Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít
2016-01-01
This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...
Fault Tolerant External Memory Algorithms
DEFF Research Database (Denmark)
Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas
2009-01-01
Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where…
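The faulty-memory RAM idea, that a bounded number of adversarial corruptions must not change the answers an algorithm returns, can be illustrated with the classic replicate-and-vote trick. This toy resilient variable is for intuition only and is not the paper's dictionary:

```python
from collections import Counter

class ResilientInt:
    """Faulty-RAM-style defence: store 2f+1 copies so that up to f
    adversarial memory corruptions cannot change the value read back
    by majority vote."""
    def __init__(self, value, f=1):
        self.copies = [value] * (2 * f + 1)

    def read(self):
        # majority vote over the replicas
        return Counter(self.copies).most_common(1)[0][0]

x = ResilientInt(42, f=1)
x.copies[0] = 13      # simulate one memory fault
print(x.read())       # → 42
```

Resilient data structures aim for the same guarantee while paying far less than this (2f+1)-fold space and time blow-up; the paper's contribution is achieving it together with I/O-efficiency.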
Gossip algorithms in quantum networks
Energy Technology Data Exchange (ETDEWEB)
Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)
2017-01-23
"Gossip algorithms" is a common term for protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the dissemination of quantum information to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.
Next Generation Suspension Dynamics Algorithms
Energy Technology Data Exchange (ETDEWEB)
Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-12-01
This research project has the objective of extending the range of application of, improving the efficiency of, and conducting simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field, and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high-throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.
Algorithms for Protein Structure Prediction
DEFF Research Database (Denmark)
Paluszewski, Martin
-trace. Here we present three different approaches for reconstruction of C-traces from predictable measures. In our first approach [63, 62], the C-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE…) is more robust than standard Monte Carlo search. In the second approach for reconstruction of C-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm…
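A generic tabu search of the kind used in the first approach can be sketched as follows; the one-dimensional energy landscape and neighbour function below are invented stand-ins for the lattice C-trace model:

```python
def tabu_search(start, neighbours, energy, tenure=3, steps=50):
    """Generic tabu search: always move to the best non-tabu neighbour,
    remembering recently visited states so the walk can climb out of
    local minima instead of cycling back into them."""
    current, best = start, start
    tabu = [start]
    for _ in range(steps):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=energy)   # best move, even if uphill
        tabu.append(current)
        tabu = tabu[-tenure:]                   # bounded tabu list
        if energy(current) < energy(best):
            best = current
    return best

# Toy landscape: local minimum at state 2, global minimum at state 7.
E = {0: 5, 1: 3, 2: 1, 3: 4, 4: 3, 5: 2, 6: 1, 7: 0, 8: 2}
nbrs = lambda i: [j for j in (i - 1, i + 1) if j in E]
print(tabu_search(0, nbrs, E.__getitem__))  # → 7
```

A plain greedy descent would stop at the local minimum (state 2); the tabu list forces uphill moves past it, which is the robustness advantage over standard Monte Carlo search claimed above.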
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
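With non-overlapping subspaces, the additive Schwarz method for a linear problem reduces to block Jacobi, which is easy to sketch; the 1D Laplacian and the two 2-unknown subdomains below are illustrative assumptions, not the paper's test problems:

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def solve2(M, r):
    """Direct 2x2 solve by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def additive_schwarz(A, b, blocks, iters=100):
    """Additive Schwarz with non-overlapping 2x2 subspaces (block Jacobi):
    each iteration solves every subdomain problem on the current residual
    and adds all corrections at once -- the subdomain solves are mutually
    independent, which is the method's natural parallelism."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        for idx in blocks:
            M = [[A[i][j] for j in idx] for i in idx]
            c = solve2(M, [r[i] for i in idx])
            for i, ci in zip(idx, c):
                x[i] += ci
    return x

# 1D Laplacian with two subdomains of two unknowns each.
A = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
b = [1.0, 0.0, 0.0, 1.0]
x = additive_schwarz(A, b, blocks=[(0, 1), (2, 3)])
```

A multiplicative variant would apply the corrections sequentially, updating the residual between subdomains: faster per sweep but inherently serial, which is exactly the trade-off the "hybrid" algorithms above aim to balance.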
Management of industrial solid residues; Gerenciamento de residuos solidos industriais
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-07-01
This chapter gives an overview of the management of industrial solid wastes, approaching the following subjects: classification of industrial solid residues; directives and methodologies for the management of industrial solid residues; instruments for the management of industrial solid residues; handling, packing, storage and transportation; treatment of industrial solid residues; final disposal - landfill for industrial residues; the problem of treatment and final disposal of domestic garbage in Brazil; recycling of lubricant oils used in Brazil; legislation.
Corn residue removal and CO2 emissions
Carbon dioxide (CO2), nitrous oxide (N2O), and methane (CH4) are the primary greenhouse gases (GHG) emitted from the soil due to agricultural activities. In the short-term, increases in CO2 emissions indicate increased soil microbial activity. Soil micro-organisms decompose crop residues and release...
Preliminary characterization of residual biomass from Hibiscus ...
African Journals Online (AJOL)
Hibiscus sabdariffa calyces are mainly used for different agro-food and beverages applications. The residual biomass generated contains various useful substances that were extracted and characterized. It contained 23% (w/w) soluble pectic material, a food additive, extracted with hot acidified water (80°C, pH = 1.5) and ...
Residual strength evaluation of concrete structural components ...
Indian Academy of Sciences (India)
This paper presents methodologies for residual strength evaluation of concrete structural components using linear elastic and nonlinear fracture mechanics principles. The effect of cohesive forces due to aggregate bridging has been represented mathematically by employing tension softening models. Various tension ...
Residual stresses in plastic random systems
Alava, M.J.; Karttunen, M.E.J.; Niskanen, K.J.
1995-01-01
We show that yielding in elastic plastic materials creates residual stresses when local disorder is present. The intensity of these stresses grows with the external stress and degree of initial disorder. The one-dimensional model we employ also yields a discontinuous transition to perfect plasticity
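A one-dimensional toy with local disorder illustrates the mechanism: parallel elastic-perfectly-plastic fibers with scattered yield stresses, loaded to a common strain and then unloaded to zero net force, retain self-equilibrated residual stresses. The model and parameter values below are assumptions for illustration, not the paper's model:

```python
def residual_stresses(yield_stresses, applied_strain, E=1.0):
    """Parallel elastic-perfectly-plastic fibers loaded to a common strain
    and unloaded elastically to zero net force. Disorder in the yield
    stresses leaves self-equilibrated residual stresses behind."""
    # plastic strain accumulated by each fiber at peak load
    plastic = [max(0.0, applied_strain - sy / E) for sy in yield_stresses]
    # unload strain chosen so the fiber stresses sum to zero
    e0 = sum(plastic) / len(plastic)
    return [E * (e0 - ep) for ep in plastic]

# Disordered yield stresses: residual stresses appear and balance to zero.
res = residual_stresses([0.5, 1.0, 1.5], applied_strain=1.2)
print([round(s, 3) for s in res])
```

With uniform yield stresses (no disorder) every fiber accumulates the same plastic strain and the residual stresses vanish; increasing the spread of yield stresses or the applied strain grows the residual-stress intensity, as the abstract states.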
Recent advances in residual stress measurement
International Nuclear Information System (INIS)
Withers, P.J.; Turski, M.; Edwards, L.; Bouchard, P.J.; Buttle, D.J.
2008-01-01
Until recently, residual stresses have been included in structural integrity assessments of nuclear pressure vessels and piping in a very primitive manner due to the lack of reliable residual stress measurement or prediction tools. This situation is changing: the capabilities of newly emerging destructive (i.e. the contour method) and non-destructive (i.e. magnetic and high-energy synchrotron X-ray strain mapping) residual stress measurement techniques for evaluating ferritic and austenitic pressure vessel components are contrasted against more well-established methods. These new approaches offer the potential for obtaining area maps of residual stress or strain in welded plants, mock-up components or generic test-pieces. The mapped field may be used directly in structural integrity calculations, or indirectly to validate finite element process/structural models on which safety cases for pressurised nuclear systems are founded. These measurement methods are complementary in terms of application to actual plant, cost effectiveness and measurements in thick sections. In each case an exemplar case study is used to illustrate the method and to highlight its particular capabilities
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
Vitrification for stability of scrap and residue
Energy Technology Data Exchange (ETDEWEB)
Forsberg, C.W. [Oak Ridge National Lab., TN (United States)
1996-05-01
A conference breakout discussion was held on the subject of vitrification for stabilization of plutonium scrap and residue. This was one of four such sessions held within the vitrification workshop for participants to discuss specific subjects in further detail. The questions and issues were defined by the participants.
Thermal Adsorption Processing Of Hydrocarbon Residues
Directory of Open Access Journals (Sweden)
Sudad H. Al.
2017-04-01
Full Text Available The raw materials of secondary catalytic processes must be pre-refined. Among these refining processes are deasphalting and demetallization, including their thermo-adsorption or thermo-contact adsorption varieties. In oil processing, four main processes of thermo-adsorption refining of hydrocarbon residues are used: ART (Asphalt Residual Treating - residue deasphalting) and 3D (Discriminatory Destructive Distillation), developed in the US, and ACT (Adsorption-Contact Treatment) and ETCC (Express Thermo-Contact Cracking), developed in Russia. ART and ACT are processes with adsorbers of the lift-reactor type, while 3D and ETCC are processes with an adsorbing reactor having ultra-short contact time between the raw material and the adsorbent. In all these processes, refining of hydrocarbon residues is achieved by partial thermo-destructive transformation of hydrocarbons and hetero-atomic compounds, with simultaneous adsorption of the resins, asphaltenes and carboids formed on the surface of the adsorbents, as well as of metal-, sulphur- and nitro-organic compounds. Demetallized and deasphalted light and heavy gas oils, or their mixtures, are a quality raw material for secondary deepening refining processes (catalytic and hydrogenation cracking, etc.), since they are characterized by low coking ability and low content of organometallic compounds that lead to irreversible deactivation of the catalysts of these deepening processes.
EFFECTS OF MUCUNA ( MUCUNA UTILIS L.) RESIDUE ...
African Journals Online (AJOL)
The field experiment was conducted at two locations: University of Agriculture, Abeokuta (UNAAB) and Olowo-Papa (OP) in Ogun state both in Forest-savannah transition zone of Nigeria to investigate the response of three upland rice cultivars (O.sativa) to mucuna residue incorporation and Nitrogen (N) fertilizer and the ...
The measurement of residual stresses in claddings
International Nuclear Information System (INIS)
Hofer, G.; Bender, N.
1978-01-01
The ring core method, a variation of the hole drilling method for the measurement of biaxial residual stresses, has been extended to measure stresses at depths of about 5 to 25 mm. It is now possible to measure the stress profiles of clad material. Examples of measured stress profiles are shown and compared with those obtained with a sectioning technique. (author)