A new adaptive GMRES algorithm for achieving high accuracy
Energy Technology Data Exchange (ETDEWEB)
Sosonkina, M.; Watson, L.T.; Kapania, R.K. [Virginia Polytechnic Inst., Blacksburg, VA (United States)]; Walker, H.F. [Utah State Univ., Logan, UT (United States)]
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
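The adaptive-restart idea can be sketched as follows (in Python rather than the paper's FORTRAN 90; the doubling rule and the stagnation threshold below are illustrative assumptions, not the authors' actual criteria):

```python
import numpy as np

def gmres_cycle(A, b, x0, k):
    """One GMRES(k) cycle: Arnoldi with modified Gram-Schmidt, then a
    least-squares solve with the (k+1) x k Hessenberg matrix."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((b.size, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / beta
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                 # lucky breakdown: solution in span
            k = j + 1
            Q, H = Q[:, :k + 1], H[:k + 1, :k]
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(H.shape[0]); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    x = x0 + Q[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k=5, k_max=50, tol=1e-8, slow=0.9, max_cycles=500):
    """Restarted GMRES whose restart value k grows when a cycle reduces the
    residual norm by less than the factor `slow` (an assumed stagnation test)."""
    x = np.zeros_like(b)
    res = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, new_res = gmres_cycle(A, b, x, k)
        if new_res <= tol * np.linalg.norm(b):
            break
        if new_res > slow * res:                # cycle stagnated: increase k
            k = min(2 * k, k_max)
        res = new_res
    return x, k
```

Unlike this sketch, the paper's strategy can also decrease k; the variable-size workspace is exactly what the FORTRAN 90 pointers and dynamic allocation mentioned above make convenient.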
The Role Eigenvalues Play in Forming GMRES Residual Norms with Non-Normal Matrices
Czech Academy of Sciences Publication Activity Database
Meurant, G.; Duintjer Tebbens, Jurjen
2015-01-01
Vol. 68, No. 1 (2015), pp. 143-165 ISSN 1017-1398 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords: GMRES convergence * non-normal matrix * eigenvalues * residual norms Subject RIV: BA - General Mathematics Impact factor: 1.366, year: 2015
A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition
De Sterck, Hans
2011-01-01
A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear g...
Some observations on weighted GMRES
Güttel, Stefan
2014-01-10
We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present a new alternative implementation of the weighted Arnoldi algorithm which under known circumstances will be favourable in terms of computational complexity. These implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used. © 2014 Springer Science+Business Media New York.
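For illustration, minimizing the residual in a D-norm over a Krylov subspace can be written as an explicit weighted least-squares problem (a sketch only: the paper's weighted Arnoldi implementation is different, and the crude power basis below is usable only for small subspace dimensions and well-conditioned A):

```python
import numpy as np

def krylov_basis(A, r, k):
    """Crudely normalized power basis for K_k(A, r) -- adequate for a small
    demo, but not how a production (weighted) Arnoldi process would build it."""
    V = np.zeros((r.size, k))
    v = r.copy()
    for j in range(k):
        V[:, j] = v / np.linalg.norm(v)
        v = A @ V[:, j]
    return V

def weighted_gmres_lsq(A, b, d, k):
    """Minimize the weighted residual ||b - A x||_D over x in K_k(A, b),
    where D = diag(d) with positive weights d."""
    V = krylov_basis(A, b, k)
    w = np.sqrt(d)                          # D^(1/2) as a vector
    c, *_ = np.linalg.lstsq(w[:, None] * (A @ V), w * b, rcond=None)
    return V @ c
```

With d = 1 this reduces to (unrestarted) GMRES over the same subspace, so by optimality the weighted variant never yields a larger D-norm residual on that subspace.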
Energy Technology Data Exchange (ETDEWEB)
Kelley, C.T.; Xue, Z.Q. [North Carolina State Univ., Raleigh, NC (United States)]
1994-12-31
Many discretizations of integral equations and compact fixed point problems are collectively compact and strongly convergent in spaces of continuous functions. These properties not only lead to stable and convergent approximations but also can be used in the construction of fast multilevel algorithms. Recently the GMRES algorithm has become a standard coarse mesh solver. The purpose of this paper is to show how the special properties of integral operators and their approximations are reflected in the performance of the GMRES iteration and how these properties can be used to strengthen the norm in which convergence takes place. The authors illustrate these ideas with composite Gauss rules for integral equations on the unit interval.
Smoothing-norm preconditioning for GMRES
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Toke Koldborg
2004-01-01
When GMRES is applied to a discrete ill-posed problem with a square matrix, the iterates can be considered as regularized solutions. We show how to precondition GMRES in such a way that the iterations take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not apply directly to GMRES. We develop a similar technique that works for GMRES, without the need for modifications of the smoothing norm, and which preserves symmetry if the coefficient matrix is symmetric. We also discuss the efficient implementation of our algorithm, and we...
Application of preconditioned GMRES to the numerical solution of the neutron transport equation
International Nuclear Information System (INIS)
Patton, B.W.; Holloway, J.P.
2002-01-01
The generalized minimal residual (GMRES) method with right preconditioning is examined as an alternative to both standard and accelerated transport sweeps for the iterative solution of the diamond-differenced discrete ordinates neutron transport equation. Incomplete factorization (ILU) type preconditioners are used to determine their effectiveness in accelerating GMRES for this application. ILU(τ), which requires the specification of a dropping criterion τ, proves to be a good choice for the types of problems examined in this paper. The combination of ILU(τ) and GMRES is compared with both DSA and unaccelerated transport sweeps for several model problems. It is found that the computational workload of the ILU(τ)-GMRES combination scales nonlinearly with the number of energy groups and the quadrature order, making this technique most effective for problems with a small number of groups and discrete ordinates. However, the cost of preconditioner construction can be amortized over several calculations with different source and/or boundary values. Preconditioners built upon standard transport sweep algorithms are also evaluated for their effectiveness in accelerating the convergence of GMRES. These preconditioners show better scaling with problem parameters such as the scattering ratio, the number of discrete ordinates, and the number of spatial meshes. These sweep-based preconditioners can also be cast in a matrix-free form that greatly reduces storage requirements.
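SciPy's `spilu` exposes exactly this kind of drop-tolerance ILU, so the ILU(τ)-GMRES combination can be sketched on a simple five-point stencil standing in for the transport matrix (the matrix, its size, and the value of τ below are illustrative assumptions, not the paper's discretization):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A 2-D five-point Laplacian-like system as a simple stand-in for the
# diamond-differenced transport matrix (size is illustrative).
m = 40
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-m, -1, 0, 1, m],
             shape=(m * m, m * m), format="csc")
b = np.ones(m * m)

# ILU with a drop tolerance -- the tau in ILU(tau); a smaller tau keeps more
# fill-in, giving a stronger but costlier preconditioner.
ilu = spla.spilu(A, drop_tol=1e-3)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=A.dtype)

# SciPy applies M internally as a preconditioner for A; info == 0 on success.
x, info = spla.gmres(A, b, M=M)
```

The amortization point above corresponds to reusing `ilu` (built once) across repeated `gmres` calls with different right-hand sides `b`.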
On Investigating GMRES Convergence using Unitary Matrices
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.; Sadok, H.; Strakoš, Z.
2014-01-01
Vol. 450, 1 June (2014), pp. 83-107 ISSN 0024-3795 Grant - others: GA AV ČR (CZ) M100301201; GA MŠk (CZ) LL1202 Institutional support: RVO:67985807 Keywords: GMRES convergence * unitary matrices * unitary spectra * normal matrices * Krylov residual subspace * Schur parameters Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014
The Upper Bound for GMRES on Normal Tridiagonal Toeplitz Linear System
Directory of Open Access Journals (Sweden)
R. Doostaki
2015-09-01
The Generalized Minimal Residual method (GMRES) is often used to solve a large, sparse system Ax = b. This paper establishes error bounds for the residuals of GMRES applied to an N × N normal tridiagonal Toeplitz linear system. This problem was studied previously by Li [R.-C. Li, Convergence of CG and GMRES on a tridiagonal Toeplitz linear system, BIT 47(3) (2007), 577-599] for two special right-hand sides b = e_1, e_N. Also, Li and Zhang [R.-C. Li, W. Zhang, The rate of convergence of GMRES on a tridiagonal Toeplitz linear system, Numer. Math. 112 (2009), 267-293] presented upper bounds for the GMRES residuals for non-symmetric matrices A. In this paper we establish the upper bound for normal tridiagonal Toeplitz linear systems with special right-hand sides b = b_l e_l, for 1 ≤ l ≤ N.
Iterative methods for solving Ax=b, GMRES/FOM versus QMR/BiCG
Energy Technology Data Exchange (ETDEWEB)
Cullum, J. [IBM Research Division, Yorktown Heights, NY (United States)
1996-12-31
We study the convergence of GMRES/FOM and QMR/BiCG methods for solving nonsymmetric systems Ax=b. We prove that, given the results of a BiCG computation on Ax=b, we can obtain a matrix B with the same eigenvalues as A and a vector c such that the residual norms generated by a FOM computation on Bx=c are identical to those generated by the BiCG computation. Using a unitary equivalence for each of these methods, we obtain test problems where we can easily vary certain spectral properties of the matrices. We use these test problems to study the effects of nonnormality on the convergence of GMRES and QMR, to study the effects of eigenvalue outliers on the convergence of QMR, and to compare the convergence of restarted GMRES, QMR, and BiCGSTAB across a family of normal and nonnormal problems. Our GMRES tests on nonnormal test matrices indicate that nonnormality can have unexpected effects on residual norm convergence, giving misleading indications of superior convergence over QMR when the error norms for GMRES are not significantly different from those for QMR. Our QMR tests indicate that the convergence of the QMR residual and error norms is influenced predominantly by small and large eigenvalue outliers and by the character (real, complex, or nearly real) of the outliers and the other eigenvalues. In our comparison tests QMR outperformed GMRES(10) and GMRES(20) on both the normal and nonnormal test matrices.
Right-Hand Side Dependent Bounds for GMRES Applied to Ill-Posed Problems
Pestana, Jennifer
2014-01-01
© IFIP International Federation for Information Processing 2014. In this paper we apply simple GMRES bounds to the nearly singular systems that arise in ill-posed problems. Our bounds depend on the eigenvalues of the coefficient matrix, the right-hand side vector and the nonnormality of the system. The bounds show that GMRES residuals initially decrease, as residual components associated with large eigenvalues are reduced, after which semi-convergence can be expected because of the effects of small eigenvalues.
A block variant of the GMRES method on massively parallel processors
Energy Technology Data Exchange (ETDEWEB)
Li, Guangye [Cray Research, Inc., Eagan, MN (United States)
1996-12-31
This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. The algorithm generates a transformed Hessenberg matrix using only block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache-based single-CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs, due to both efficient block operations and efficient block data communications. Our numerical results also show that, compared with the standard GMRES method, the BVGMRES(s,m) algorithm becomes more efficient as more PEs are used on an MPP.
Any Admissible Harmonic Ritz Value Set is Possible for GMRES
Czech Academy of Sciences Publication Activity Database
Du, K.; Duintjer Tebbens, Jurjen; Meurant, G.
2017-01-01
Vol. 47, September 18 (2017), pp. 37-56 ISSN 1068-9613 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords: Ritz values * harmonic Ritz values * GMRES convergence * prescribed residual norms * FOM convergence Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.925, year: 2016 http://etna.mcs.kent.edu/volumes/2011-2020/vol47/abstract.php?vol=47&pages=37-56
Properties of Worst-Case GMRES
Czech Academy of Sciences Publication Activity Database
Faber, V.; Liesen, J.; Tichý, Petr
2013-01-01
Vol. 34, No. 4 (2013), pp. 1500-1519 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Grant - others: GA AV ČR (CZ) M10041090 Institutional support: RVO:67985807 Keywords: GMRES method * worst-case convergence * ideal GMRES * matrix approximation problems * minmax Subject RIV: BA - General Mathematics Impact factor: 1.806, year: 2013
New convergence results on the global GMRES method for diagonalizable matrices
Bellalij, M.; Jbilou, K.; Sadok, H.
2008-10-01
In the present paper, we give some new convergence results of the global GMRES method for multiple linear systems. In the case where the coefficient matrix A is diagonalizable, we derive new upper bounds for the Frobenius norm of the residual. We also consider the case of normal matrices and we propose new expressions for the norm of the residual.
Minimal residual method stronger than polynomial preconditioning
Energy Technology Data Exchange (ETDEWEB)
Faber, V.; Joubert, W.; Knill, E. [Los Alamos National Lab., NM (United States)] [and others]
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
Prescribing the behavior of early terminating GMRES and Arnoldi iterations
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.
2014-01-01
Vol. 65, No. 1 (2014), pp. 69-90 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Grant - others: GA AV ČR (CZ) M100301201 Institutional research plan: CEZ:AV0Z10300504 Keywords: Arnoldi process * early termination * GMRES method * prescribed GMRES convergence * Arnoldi method * prescribed Ritz values Subject RIV: BA - General Mathematics Impact factor: 1.417, year: 2014
Deflation of Eigenvalues for GMRES in Lattice QCD
International Nuclear Information System (INIS)
Morgan, Ronald B.; Wilcox, Walter
2002-01-01
Versions of GMRES with deflation of eigenvalues are applied to lattice QCD problems. Approximate eigenvectors corresponding to the smallest eigenvalues are generated at the same time that linear equations are solved. The eigenvectors improve convergence for the linear equations, and they help solve other right-hand sides
R3GMRES: including prior information in GMRES-type methods for discrete inverse problems
DEFF Research Database (Denmark)
Dong, Yiqiu; Garde, Henrik; Hansen, Per Christian
2014-01-01
Lothar Reichel and his collaborators proposed several iterative algorithms that augment the underlying Krylov subspace with an additional low-dimensional subspace in order to produce improved regularized solutions. We take a closer look at this approach and investigate a particular Regularized Ra...
The performances of R GPU implementations of the GMRES method
Directory of Open Access Journals (Sweden)
Bogdan Oancea
2018-03-01
Although the performance of commodity computers has improved drastically with the introduction of multicore processors and GPU computing, the standard R distribution is still based on a single-threaded model of computation, using only a small fraction of the computational power now available on most desktops and laptops. Modern statistical software packages rely on high-performance implementations of the linear algebra routines that are at the core of several important leading-edge statistical methods. In this paper we present a GPU implementation of the GMRES iterative method for solving linear systems. We compare the performance of this implementation with a pure single-threaded CPU version. We also investigate the performance of our implementation using different GPU packages now available for R, such as gmatrix, gputools, or gpuR, which are based on the CUDA or OpenCL frameworks.
Quantum Algorithms for Weighing Matrices and Quadratic Residues
van Dam, Wim
2000-01-01
In this article we investigate how we can employ the structure of combinatorial objects like Hadamard matrices and weighing matrices to devise new quantum algorithms. We show how the properties of a weighing matrix can be used to construct a problem for which the quantum query complexity is significantly lower than the classical one. It is pointed out that this scheme captures both Bernstein & Vazirani's inner-product protocol and Grover's search algorithm. In the second part of the ar...
Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems
Czech Academy of Sciences Publication Activity Database
Morikuni, Keiichi; Hayami, K.
2015-01-01
Vol. 36, No. 1 (2015), pp. 225-250 ISSN 0895-4798 Institutional support: RVO:67985807 Keywords: least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.883, year: 2015
A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images
Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong
2010-10-01
Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors for an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming. In addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed to address these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.
Super-resolution reconstruction of MR image with a novel residual learning network algorithm.
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-03-27
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. The convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance over all of the other compared CNN-based SR algorithms in this work. © 2018 Institute of Physics and Engineering in Medicine.
IRAM combined with multi-group GMRES for solving Matrix MOC
International Nuclear Information System (INIS)
Wu Wenbin; Li Qing; Wang Kan
2014-01-01
In the Matrix MOC, a linear algebraic equation system can be constructed by sweeping only once, and solving this linear system then takes the place of repeated characteristics sweeping. In neutron transport criticality problems, k_eff is traditionally computed by power iteration (PI), whose convergence rate depends strongly on the dominance ratio. Large problems of practical interest often have dominance ratios close to 1, leading to slow convergence of PI. In this study, k_eff is computed by the Implicitly Restarted Arnoldi Method (IRAM) combined with multi-group GMRES, in which multi-group problems coupled by upscatter are solved directly, avoiding upscatter iteration. Numerical results for several benchmarks such as 2D C5G7 demonstrate that IRAM combined with multi-group GMRES can obtain good accuracy and higher efficiency compared with PI. (authors)
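The PI-versus-IRAM contrast can be sketched with SciPy, whose `eigs` wraps ARPACK's implicitly restarted Arnoldi method; the sparse operators below are hypothetical stand-ins for the loss and fission operators, not a transport discretization:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical stand-in for the transport eigenproblem: k_eff as the dominant
# eigenvalue of M^{-1} F, with M a loss-like and F a fission-like operator.
n = 300
M = (2.0 * sp.eye(n) + 0.1 * sp.random(n, n, density=0.01, random_state=1)).tocsc()
F = sp.random(n, n, density=0.02, random_state=2).tocsc()  # nonnegative entries

Msolve = spla.splu(M)
apply_op = lambda v: Msolve.solve(F @ v)

# Power iteration: the error contracts by the dominance ratio |lambda_2/lambda_1|
x = np.ones(n)
for _ in range(300):
    y = apply_op(x)
    k_pi = np.linalg.norm(y)
    x = y / k_pi

# IRAM (ARPACK, via scipy.sparse.linalg.eigs) on the same matrix-free operator
op = spla.LinearOperator((n, n), matvec=apply_op, dtype=float)
k_iram = abs(spla.eigs(op, k=1, which="LM", return_eigenvectors=False)[0])
```

For a dominance ratio near 1 the power loop above would need far more iterations, which is exactly the regime where the Arnoldi-based approach pays off.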
Any Ritz Value Behavior Is Possible for Arnoldi and for GMRES
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Meurant, G.
2012-01-01
Vol. 33, No. 3 (2012), pp. 958-978 ISSN 0895-4798 R&D Projects: GA AV ČR IAA100300802 Grant - others: GA AV ČR (CZ) M100300901 Institutional research plan: CEZ:AV0Z10300504 Keywords: Ritz values * Arnoldi process * Arnoldi method * GMRES method * prescribed convergence * interlacing properties Subject RIV: BA - General Mathematics Impact factor: 1.342, year: 2012
An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues
Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis
2011-01-01
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
Directory of Open Access Journals (Sweden)
M. Susmikanti
2015-12-01
In the nuclear industry, high-temperature treatment of materials is a factor which requires special attention. Assessment needs to be conducted on the properties of the materials used, including their strength. The measurement of material properties under thermal processes may reflect residual stresses. The use of a genetic algorithm (GA) to determine the optimal residual stress is one way to determine the strength of a material. In residual stress modeling with several parameters, it is sometimes difficult to find the optimal value through analytical or numerical calculations. Here, GA is an efficient algorithm which can generate the optimal values, both minima and maxima. The purposes of this research are to optimize the variables in residual stress models using GA and to predict the center of the residual stress distribution using a fuzzy neural network (FNN), while an artificial neural network (ANN) is used for the modeling. In this work a single-material 316/316L stainless steel bar is modeled. The minimal residual stresses of the material at high temperatures were obtained with GA and analytical calculations. At a temperature of 650 °C, the GA optimal residual stress estimate converged to -711.3689 MPa at a distance of 0.002934 mm from the center point, whereas the analytical result at that temperature and position is -975.556 MPa. At a temperature of 850 °C, the GA result was -969.868 MPa at 0.002757 mm from the center point, while the analytical result was -1061.13 MPa. The difference in residual stress between the GA and analytical results at a temperature of 650 °C is about 27%, while at 850 °C it is 8.67%. The distribution of residual stress showed a grouping concentrated around a coordinate of (-76; 76) MPa. The residual stress model is a degree-two polynomial with coefficients of 50.33, -76.54, and -55.2, respectively, with a standard deviation of 7.874.
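A minimal real-coded GA of the kind described can be sketched as follows; the degree-two polynomial reuses the coefficients quoted above purely as a test function (their ordering and sign conventions are assumptions), and the GA operators and rates are illustrative:

```python
import numpy as np

def ga_minimize(f, lo, hi, pop=60, gens=150, mut_sigma=0.05, seed=0):
    """Minimal real-coded GA: binary tournament selection, blend crossover,
    Gaussian mutation, and elitism. f must accept a vector of candidates."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = f(x)
        i = int(np.argmin(fit))
        if fit[i] < best_f:                 # track the best-ever individual
            best_x, best_f = x[i], fit[i]
        a, b = rng.integers(0, pop, (2, pop))
        parents = np.where(fit[a] < fit[b], x[a], x[b])   # tournament selection
        mates = rng.permutation(parents)
        alpha = rng.uniform(0, 1, pop)
        x = alpha * parents + (1 - alpha) * mates         # blend crossover
        mask = rng.random(pop) < 0.2                      # mutate 20% of offspring
        x = np.clip(x + mask * mut_sigma * (hi - lo) * rng.standard_normal(pop),
                    lo, hi)
        x[0] = best_x                                     # elitism
    return best_x, best_f

# Degree-two residual-stress-style model, used here only as a test function.
poly = lambda x: 50.33 * x**2 - 76.54 * x - 55.2
x_best, f_best = ga_minimize(poly, -5.0, 5.0)
```

For a smooth one-dimensional quadratic like this, analytical minimization is of course trivial; the GA's value, as the abstract notes, is for multi-parameter models where analytical or numerical optimization is awkward.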
Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei
2012-11-01
Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm-radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.
Directory of Open Access Journals (Sweden)
Xiaoxia Yang
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequences are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm, SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder), that merges a feature predictor, SNBRFinderF, and a template predictor, SNBRFinderT. SNBRFinderF was established using a support vector machine whose inputs include the sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with a sequence alignment algorithm based on profile hidden Markov models to capture weakly homologous templates of the query sequence. Experimental results show that SNBRFinderF is clearly superior to the commonly used sequence profile-based predictor and that SNBRFinderT can achieve performance comparable to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder appreciably improves the performance of both DNA- and RNA-binding residue prediction. More importantly, the sequence-based hybrid prediction reaches performance competitive with our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has clear advantages over existing sequence-based prediction algorithms. The value of our algorithm is highlighted by an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
Tensor-GMRES method for large sparse systems of nonlinear equations
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
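The Newton-GMRES building block that the tensor method extends can be sketched matrix-free (a simplified illustration: finite-difference Jacobian-vector products with an explicit least-squares Krylov projection standing in for GMRES, and no tensor term):

```python
import numpy as np

def fd_jacvec(F, x, v, Fx, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(x) v ~ (F(x + h v) - F(x)) / h."""
    h = eps / np.linalg.norm(v)
    return (F(x + h * v) - Fx) / h

def newton_krylov(F, x0, newton_tol=1e-10, max_newton=50):
    """Inexact Newton: each step solves J dx = -F(x) by least squares over a
    Krylov space built from finite-difference matvecs (a GMRES stand-in).
    For this small demo the basis is full-sized, so the inner solve is
    essentially exact; a real Newton-GMRES truncates it."""
    x = x0.astype(float).copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < newton_tol:
            break
        m = x.size
        V = [-Fx / np.linalg.norm(Fx)]
        for _ in range(m - 1):
            w = fd_jacvec(F, x, V[-1], Fx)
            V.append(w / np.linalg.norm(w))
        V = np.column_stack(V)
        JV = np.column_stack([fd_jacvec(F, x, V[:, j], Fx) for j in range(m)])
        y, *_ = np.linalg.lstsq(JV, -Fx, rcond=None)
        x = x + V @ y
    return x
```

As in the paper's scheme, the Jacobian never appears explicitly; only products J v are needed, which is what makes the approach attractive when the Jacobian is large with an unfavorable sparsity structure for factorization.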
Multiple solutions to dense systems in radar scattering using a preconditioned block GMRES solver
Energy Technology Data Exchange (ETDEWEB)
Boyse, W.E. [Advanced Software Resources, Inc., Santa Clara, CA (United States)
1996-12-31
Multiple right-hand sides occur in radar scattering calculations in the computation of the simulated radar return from a body at a large number of angles. Each desired angle requires a right-hand side vector to be computed and the solution generated. These right-hand sides are naturally smooth functions of the angle parameters, and this property is exploited in a novel way to compute solutions an order of magnitude faster than LINPACK. The modeling technique addressed is the Method of Moments (MOM), i.e., a boundary element method for the time-harmonic Maxwell's equations. Discretization by this method produces general complex dense systems of rank in the 100s to 100,000s. The usual way to produce the required multiple solutions is via LU factorization and solution routines such as those found in LINPACK. Our method uses the block GMRES iterative method to directly iterate a subset of the desired solutions to convergence.
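How smoothness in the angle parameter can be exploited is illustrated below with a toy seed-and-project scheme: a few angles are solved exactly (a direct solve standing in for the block GMRES iteration), and every remaining right-hand side is then fitted in the resulting solution subspace. All sizes and the right-hand-side family here are assumptions for illustration:

```python
import numpy as np

# Toy dense system with right-hand sides that depend smoothly on an angle
# parameter, loosely mimicking a monostatic radar sweep.
rng = np.random.default_rng(0)
n, n_angles = 200, 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
angles = np.linspace(0.0, np.pi / 4, n_angles)
B = np.cos(np.outer(np.arange(n), angles) / n)   # smooth family of RHS columns

# Solve a handful of "seed" angles, then fit every RHS in the span of the
# seed solutions by least squares (a minimal-residual projection).
seeds = [0, 10, 20, 30, 39]
X_seed = np.linalg.solve(A, B[:, seeds])
C, *_ = np.linalg.lstsq(A @ X_seed, B, rcond=None)
X = X_seed @ C
rel_res = np.linalg.norm(B - A @ X) / np.linalg.norm(B)
```

Because the right-hand-side family is smooth in the angle, it is numerically low-rank, so five solves serve all forty angles to small residual; the paper's block GMRES exploits the same structure while iterating the seed subset to convergence.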
Energy Technology Data Exchange (ETDEWEB)
McHugh, P.R.
1995-10-01
Fully coupled Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in computational fluid dynamics. Primitive-variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over the discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton's method. Preconditioned Krylov subspace iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both incomplete lower-upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity and mixed/forced convection past a backward-facing step.
Zhou, Xin; Jun, Sun; Zhang, Bing; Jun, Wu
2017-07-01
To improve the reliability of spectral features extracted by the wavelet transform, a method combining the wavelet transform (WT) with a bacterial colony chemotaxis-support vector machine (BCC-SVM) algorithm, denoted WT-BCC-SVM, is proposed in this paper. We also aimed to identify different kinds of pesticide residues on lettuce leaves in a novel, rapid, and non-destructive way using fluorescence spectroscopy. Fluorescence spectra of 150 lettuce leaf samples carrying five different kinds of pesticide residues were obtained using a Cary Eclipse fluorescence spectrometer. Standard normal variate detrending (SNV detrending) and Savitzky-Golay smoothing coupled with standard normal variate detrending (SG-SNV detrending) were used to preprocess the raw spectra. Bacterial colony chemotaxis combined with support vector machine (BCC-SVM) and plain support vector machine (SVM) classification models were established based on the full spectra (FS) and on wavelet transform characteristics (WTC) selected by WT, respectively. The results showed that the accuracies of the training, calibration, and prediction sets for the best classification model (SG-SNV detrending-WT-BCC-SVM) were 100%, 98%, and 93.33%, respectively. These results indicate that it is feasible to use WT-BCC-SVM to establish a diagnostic model for different kinds of pesticide residues on lettuce leaves.
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to parameter estimation for residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and ƞ). Error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated successfully via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained demonstrate the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both the synthetic and field data examples, the algorithm provides reliable parameter estimations within the sampling limits of
Indian Academy of Sciences (India)
have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
Indian Academy of Sciences (India)
algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used ... [Figure 2: symbols used in the flowchart language to represent Assignment (e.g. x := sin(theta)), Read (e.g. Read A, B, C), and Print (e.g. Print x, y, z).]
Indian Academy of Sciences (India)
In the previous articles, we discussed various common data structures such as arrays, lists, queues and trees, and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...
Wang, Lincong; Donald, Bruce Randall
2004-01-01
We have developed an ab initio algorithm for determining a protein backbone structure using global orientational restraints on internuclear vectors derived from residual dipolar couplings (RDCs) measured in one or two different aligning media by solution nuclear magnetic resonance (NMR) spectroscopy [14, 15]. Specifically, the conformation and global orientations of individual secondary structure elements are computed, independently, by an exact solution, systematic search-based minimization algorithm using only 2 RDCs per residue. The systematic search is built upon a quartic equation for computing, exactly and in constant time, the directions of an internuclear vector from RDCs, and linear or quadratic equations for computing the sines and cosines of backbone dihedral (phi, psi) angles from two vectors in consecutive peptide planes. In contrast to heuristic search such as simulated annealing (SA) or Monte-Carlo (MC) used by other NMR structure determination algorithms, our minimization algorithm can be analyzed rigorously in terms of expected algorithmic complexity and the coordinate precision of the protein structure as a function of error in the input data. The algorithm has been successfully applied to compute the backbone structures of three proteins using real NMR data.
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, incur computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. The new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and the boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite
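The residual-minimization idea can be sketched with an Anderson-type least-squares update over the stored iterate/residual history. This is a hedged stand-in for the general approach, not the authors' Boostconv implementation, applied to a synthetic linear fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Linear fixed-point iteration x <- G x + f with spectral radius < 1,
# standing in for one relaxation step of a flow solver.
M = rng.standard_normal((n, n))
G = 0.9 * M / np.abs(np.linalg.eigvals(M)).max()
f = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - G, f)

def residual(x):
    return G @ x + f - x

x = np.zeros(n)
X_hist, R_hist = [x.copy()], [residual(x)]
for _ in range(60):
    r = R_hist[-1]
    if np.linalg.norm(r) < 1e-12:
        break
    if len(R_hist) > 1:
        # Minimize the residual over the stored history by least squares
        # (an Anderson-type update, used here as a stand-in for Boostconv).
        dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
        dR = np.column_stack([R_hist[i + 1] - R_hist[i] for i in range(len(R_hist) - 1)])
        gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
        x = x + r - (dX + dR) @ gamma
    else:
        x = x + r
    X_hist.append(x.copy())
    R_hist.append(residual(x))

err = np.linalg.norm(x - x_star) / np.linalg.norm(x_star)
print(err)
```

The inner loop only calls `residual`, i.e. one application of the original relaxation step, which mirrors the black-box-subroutine property claimed above.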
Indian Academy of Sciences (India)
In the program shown in Figure 1, we have repeated the algorithm. M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the. "code" block. Thus, we can now avoid the repetition of the ...
Indian Academy of Sciences (India)
algorithms built into the computer corresponding to the logic- circuit rules that are used to .... For the purpose of carrying ou t ari thmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...
Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.
2017-12-01
A leading source of outdoor emissions in northwestern India is crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified due to the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fire emissions dataset. Correction factors based on MODIS active fire detections have previously been used to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) together with higher-spatial-resolution Landsat (30 m x 30 m) observations. Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated as ModL2T NBR, and yields an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affects approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlations between ModL2T NBR-derived burned area and the results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error arise from small median landholding sizes (1-3 ha), the heterogeneous spatial distribution of two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability. The burned area estimates of this study can be used to build
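The NBR computation itself is straightforward; the sketch below applies a two-tailed test to illustrative pre-/post-harvest reflectances. The threshold values are placeholders, not the calibrated ModL2T values of the study.

```python
import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)
    return (nir - swir) / (nir + swir)

# Illustrative pre-/post-harvest reflectances for three pixels
# (made-up numbers, not real Landsat or MODIS data).
nir_pre,  swir_pre  = np.array([0.40, 0.38, 0.35]), np.array([0.20, 0.21, 0.19])
nir_post, swir_post = np.array([0.22, 0.37, 0.20]), np.array([0.30, 0.22, 0.28])

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Two-tailed test: require both a high pre-fire NBR (vegetated field)
# and a large NBR drop (burn scar). Thresholds here are placeholders.
burned = (nbr(nir_pre, swir_pre) > 0.2) & (dnbr > 0.25)
print(burned)
```

Pixels one and three show both a vegetated pre-fire signal and a large drop, so they classify as burned; pixel two stays green and does not.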
Sadok, H.
1999-08-01
The Generalized Minimal Residual (GMRES) method and the Quasi-Minimal Residual (QMR) method are two Krylov methods for solving linear systems. The main difference between these methods is the generation of the basis vectors for the Krylov subspace. The GMRES method uses the Arnoldi process while QMR uses the Lanczos algorithm for constructing a basis of the Krylov subspace. In this paper we give a new method similar to QMR but based on the Hessenberg process instead of the Lanczos process. We call the new method the CMRH method. The CMRH method is less expensive and requires slightly less storage than GMRES. Numerical experiments suggest that it has behaviour similar to GMRES.
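As the abstract notes, the methods differ in how the Krylov basis is generated. A minimal sketch of GMRES's choice, the Arnoldi process with modified Gram-Schmidt, together with a check of the Arnoldi relation; the test matrix is random and purely illustrative.

```python
import numpy as np

def arnoldi(A, b, m):
    """Build an orthonormal basis V of the Krylov subspace K_m(A, b)
    and the (m+1) x m Hessenberg matrix H, so that
    A @ V[:, :m] = V @ H (the Arnoldi relation)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(2)
n, m = 40, 10
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
V, H = arnoldi(A, b, m)

ortho_err = np.linalg.norm(V.T @ V - np.eye(m + 1))
arnoldi_err = np.linalg.norm(A @ V[:, :m] - V @ H)
print(ortho_err, arnoldi_err)
```

The Hessenberg and Lanczos processes of CMRH and QMR build different (non-orthonormal or biorthogonal) bases for the same subspace, trading orthogonality for lower cost and storage.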
Development of iterative techniques for the solution of unsteady compressible viscous flows
Sankar, Lakshmi; Hixon, Duane
1993-01-01
The work done under this project was documented in detail as the Ph.D. dissertation of Dr. Duane Hixon. The objectives of the research project were to evaluate the generalized minimum residual (GMRES) method as a tool for accelerating 2-D and 3-D unsteady flow computations, and to evaluate the suitability of the GMRES algorithm for unsteady flows computed on parallel computer architectures.
Liu, Ya-Juan; Wu, Hai-Long; Kang, Chao; Gu, Hui-Wen; Nie, Jin-Fang; Li, Shan-Shan; Su, Zhi-Yi; Yu, Ru-Qin
2012-01-01
A novel algorithm, four-way self-weighted alternating normalized residue fitting (SWANRF), an extension of its three-way form for the decomposition of quadrilinear data with new weight factors, was proposed and applied to the quantitative analysis of serotonin in plasma samples. It was observed that third-order calibration could not only retain the "second-order advantage" but also obtain further advantages. The introduction of a fourth mode can relieve the serious problem of collinearity, which appears to be one of the "third-order advantages". The proposed algorithm shows great potential as a promising alternative for the third-order calibration of four-way data arrays when contrasted with four-way parallel factor analysis (four-way PARAFAC). Furthermore, both algorithms were used to analyze the 5-hydroxytryptamine (serotonin) content of plasma samples via four-way (excitation-emission-pH-sample) data arrays, and produced satisfactory results. The serotonin contents in plasma samples obtained using four-way SWANRF and four-way PARAFAC were 0.324 ± 0.005 and 0.348 ± 0.006 nmol mL(-1), respectively.
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with third-order absorption coefficients (FD-SP3) is used as the forward model in the inverse problem. The FD-SP3 model is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. The inverse results show that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. This work therefore shows that FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in computational time and in accuracy, as it requires significantly less CPU time than the FD-ERT (S12) while being more accurate than FD-SP1.
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
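The Galerkin projection preprocessing step can be illustrated as follows: given a basis V retained from an earlier solve, the projected problem supplies an initial guess for a nearby right-hand side. The matrix, the stored subspace, and the perturbation below are all synthetic stand-ins, not the paper's enriched-GMRES machinery.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 12
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix

# Orthonormal Krylov basis retained from a previous solve with RHS b_old.
b_old = rng.standard_normal(n)
K = np.empty((n, k))
v = b_old.copy()
for j in range(k):
    K[:, j] = v / np.linalg.norm(v)
    v = A @ v
V, _ = np.linalg.qr(K)

# A nearby right-hand side, e.g. the next step of an unsteady simulation.
b_new = b_old + 0.01 * rng.standard_normal(n)

# Galerkin projection onto the stored subspace yields the initial guess.
y = np.linalg.solve(V.T @ A @ V, V.T @ b_new)
x0 = V @ y

ratio = np.linalg.norm(b_new - A @ x0) / np.linalg.norm(b_new)
print(ratio)
```

Because the new right-hand side lies mostly inside the stored subspace, the initial residual is already a small fraction of what a zero initial guess would give, so the subsequent Krylov solve starts far ahead.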
Directory of Open Access Journals (Sweden)
Byung Duk Song
2017-09-01
In a green manufacturing system that pursues the reuse of used products, the residual value of collected used products (CUP) strongly affects a variety of managerial decisions needed to construct profitable and environmentally sound remanufacturing plans. This paper deals with a closed-loop green manufacturing system for companies that perform both manufacturing with raw materials and remanufacturing with collected used products (CUP). The amount of CUP is assumed to be a function of the buy-back cost, while the quality level of CUP, which determines the residual value, follows a known distribution. In addition, the remanufacturing cost can differ according to the quality of the CUP. Moreover, companies are nowadays subject to environment-related laws such as Extended Producer Responsibility (EPR); a company must therefore collect more used products than its obligatory take-back quota or face fines from the government for not meeting the quota. Through the development of mathematical models, two kinds of inspection policies are examined to validate the efficiency of two different operation processes. To find a managerial solution, a genetic algorithm is proposed and tested with numerical examples.
International Nuclear Information System (INIS)
Wang Lincong; Donald, Bruce Randall
2004-01-01
We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics
Residual deposits (residual soil)
International Nuclear Information System (INIS)
Khasanov, A.Kh.
1988-01-01
Residual soil deposits are accumulations of newly formed ore minerals at the earth's surface that arise as a result of the chemical decomposition of rocks. As is well known, in the hypergene zone chemical weathering of rocks proceeds under the influence of various factors (water, carbonic acid, organic acids, oxygen, microorganism activity). The formation of residual soil deposits depends on a complex of geological and climatic factors, and also on the composition and the physical and chemical properties of the initial rocks.
Directory of Open Access Journals (Sweden)
Istadi Istadi
2012-04-01
The utilization of plastic waste can be directed toward different valuable products; a promising technology is converting it to fuels. Simultaneous modeling and optimization representing the effects of reactor temperature, catalyst calcination temperature, and plastic/catalyst weight ratio on the performance of liquid fuel production was studied over a modified waste catalyst. The optimization was performed to find the optimal operating conditions (reactor temperature, catalyst calcination temperature, and plastic/catalyst weight ratio) that maximize the liquid fuel product. A hybrid Artificial Neural Network-Genetic Algorithm (ANN-GA) method was used for the modeling and optimization, respectively. The interactions between reactor temperature, catalyst calcination temperature, and plastic/catalyst ratio are presented in surface plots. From the GC-MS characterization, the liquid fuel product was mainly composed of C4 to C13 hydrocarbons.
International Nuclear Information System (INIS)
Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe
2011-01-01
This paper proposes a new set of Krylov solvers, CG and GMRES, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (Finite Element Method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented to show that they can be a good supplement for solving the PN-HFEM system. (author)
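A preconditioned Krylov iteration of the kind described can be sketched with conjugate gradients and a simple Jacobi (diagonal) preconditioner standing in for the low-order RB preconditioner; the SPD test matrix below is illustrative, not a PN-HFEM system.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the
    preconditioner's inverse (here a Jacobi/diagonal scaling)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# SPD test matrix: 1-D Laplacian plus a rough positive diagonal.
n = 100
rng = np.random.default_rng(4)
d = 1.0 + 10.0 * rng.random(n)
A = np.diag(d + 2.0) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = rng.standard_normal(n)

diag = np.diag(A)
x, iters = pcg(A, b, lambda r: r / diag)
print(iters)
```

A GMRES variant would replace the three-term CG recurrence with an Arnoldi basis and apply to the nonsymmetric case; the preconditioner plugs in at exactly the same point.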
Giordano, Pablo C; Beccaria, Alejandro J; Goicoechea, Héctor C
2011-11-01
A comparison between the classic Plackett-Burman (PB) design ANOVA analysis and a genetic algorithm (GA) approach to identifying significant factors has been carried out. The comparison was made by applying both analyses to data obtained when optimizing both chemical and enzymatic hydrolysis of three lignocellulosic feedstocks (corn and wheat bran, and pine sawdust) via a PB experimental design. Depending on the kind of biomass and the hydrolysis considered, different results were obtained. Interestingly, some interactions were found to be significant by the GA approach, which allowed identifying significant factors that otherwise, based only on the classic PB analysis, would not have been taken into account in a further optimization step. Improvements in the fit of ca. 80% were obtained when comparing the coefficients of determination (R2) computed for both methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Cioffi, F.; Hidalgo, J.I.; Fernández, R.; Pirling, T.; Fernández, B.; Gesto, D.; Puente Orench, I.; Rey, P.; González-Doncel, G.
2014-01-01
Procedures based on equilibrium conditions (stress and bending moment) have been used to obtain an unstressed lattice spacing, d0, a crucial requirement for calculating the residual stress (RS) profile across a friction stir welded (FSW) joint in a 10 mm thick plate of the age-hardenable AA2024 alloy. Two procedures that take advantage of neutron diffraction measurements have been used. First, equilibrium conditions were imposed on sections parallel to the weld so that a constant d0 value corresponding to the base material region could be calculated analytically. Second, balance conditions were imposed on a section transverse to the weld; then, using the data and a genetic algorithm, suitable d0 values for the different regions of the weld were found. For several reasons, the comb method has proved inappropriate for RS determination in age-hardenable alloys. The equilibrium conditions together with the genetic algorithm, however, have been shown to be very suitable for determining RS profiles in FSW joints of these alloys, where inherent microstructural variations of d0 across the weld are expected
Blyth, T S; Sneddon, I N; Stark, M
1972-01-01
Residuation Theory aims to contribute to literature in the field of ordered algebraic structures, especially on the subject of residual mappings. The book is divided into three chapters. Chapter 1 focuses on ordered sets; directed sets; semilattices; lattices; and complete lattices. Chapter 2 tackles Baer rings; Baer semigroups; Foulis semigroups; residual mappings; the notion of involution; and Boolean algebras. Chapter 3 covers residuated groupoids and semigroups; group homomorphic and isotone homomorphic Boolean images of ordered semigroups; Dubreil-Jacotin and Brouwer semigroups; and loli
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
Energy Technology Data Exchange (ETDEWEB)
Gieg, W.; Rank, V.
1942-10-15
In the first stage of coal hydrogenation, the liquid phase, light and heavy oils were produced, the latter containing the nonliquefied parts of the coal, the coal ash, and the catalyst substances. The problem of residue processing was to extract from these so-called let-down oils whatever could be used as pasting oil for the coal. The object was to obtain maximum oil extraction and complete removal of the solids, because if the latter were returned to the process they would needlessly burden the reaction space. Separation of solids in residue processing could be accomplished by filtration, centrifugation, extraction, distillation, or low-temperature carbonization (L.T.C.). Filtration or centrifugation was most suitable, since a maximum oil yield could be expected from it: only a small portion of the let-down oil contained in the filtration or centrifugation residue had to be thermally treated. The most satisfactory centrifuge at the time was the Laval, which delivered liquid centrifuge residue and centrifuge oil continuously. By comparison, the semi-continuous centrifuges delivered plastic residues which were difficult to handle. Various apparatus, such as the spiral screw kiln and the ball kiln, were used for low-temperature carbonization of centrifuge residues. Both were based on the idea of carbonization in thin layers. Efforts were also being made to produce electrode carbon and briquette binder as by-products of the liquid coal phase.
African Journals Online (AJOL)
ing the residual risk of transmission of HIV by blood transfusion. An epidemiological approach assumed that all HIV infections detected serologically in first-time donors were pre-existing or prevalent infections, and that all infections detected in repeat blood donors were new or incident infections. During 1986 - 1987, 0.012% ...
International Nuclear Information System (INIS)
D'Elboux, C.V.; Paiva, I.B.
1980-01-01
Exploration for uranium carried out over a major portion of the Rio Grande do Sul Shield has revealed a number of small residual basins developed along glacially eroded channels of pre-Permian age. Uranium mineralization occurs in two distinct sedimentary units. The lower unit consists of rhythmites overlain by a sequence of black shales, siltstones and coal seams, while the upper one is dominated by sandstones of probable fluvial origin. (Author)
Fast autodidactic adaptive equalization algorithms
Hilal, Katia
Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm, and the other having improved convergence properties thanks to a block criterion minimization. Two existing algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulation of these algorithms, carried out in a mobile radio communication context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in the residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
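The Godard algorithm referred to above is the constant-modulus (CMA) stochastic-gradient update. A minimal sketch on a synthetic QPSK/FIR-channel setup follows; the channel, filter length, and step size are illustrative choices, not the thesis's normalized variants.

```python
import numpy as np

rng = np.random.default_rng(5)

# QPSK symbols through a short FIR channel (toy stand-in for mobile radio).
bits_i = rng.integers(0, 2, 5000) * 2 - 1
bits_q = rng.integers(0, 2, 5000) * 2 - 1
symbols = (bits_i + 1j * bits_q) / np.sqrt(2)
channel = np.array([1.0, 0.35 + 0.2j, -0.15j])
received = np.convolve(symbols, channel)[: symbols.size]

# Godard (CMA, p = 2) blind equalizer: minimize E[(|y|^2 - R2)^2].
L, mu = 11, 1e-3
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                       # center-spike initialization
R2 = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)

dispersion = []
for k in range(L, received.size):
    x = received[k - L: k][::-1]      # regressor, most recent sample first
    y = w @ x                         # equalizer output
    e = y * (R2 - np.abs(y) ** 2)     # CMA error term
    w += mu * e * np.conj(x)          # stochastic-gradient update
    dispersion.append(np.abs(np.abs(y) ** 2 - R2))

# The dispersion should shrink as the equalizer opens the eye.
early, late = np.mean(dispersion[:500]), np.mean(dispersion[-500:])
print(early, late)
```

No training sequence is used anywhere: the update relies only on the known constant modulus of the constellation, which is what makes the scheme autodidactic.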
RESIDUAL RISK ASSESSMENTS - RESIDUAL RISK ...
This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Coke Ovens. These assessments utilize existing models and databases to examine the multi-media and multi-pollutant impacts of air toxics emissions on human health and the environment. Details on the assessment process and methodologies can be found in EPA's Residual Risk Report to Congress issued in March 1999 (see web site). The objective is to assess the health risks posed by air toxics emissions from Coke Ovens, to determine if the control technology standards previously established are adequately protecting public health.
Research on wind field algorithm of wind lidar based on BP neural network and grey prediction
Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei
2018-01-01
This paper uses a BP neural network and a grey algorithm to forecast and study lidar-measured wind fields. To reduce the residual error of the wind field prediction, the minimum of the residual error function is computed and the BP neural network is trained on the residuals of the grey algorithm: the trained network forecasts the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. Test data show that the grey algorithm modified by the BP neural network effectively reduces the residual value and improves the prediction precision.
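The residual-correction scheme above can be sketched as follows. The GM(1,1) grey model is standard; the toy wind series is invented, and a moving-average stand-in plays the role of the trained BP network that forecasts the residual sequence in the paper.

```python
import numpy as np

def gm11(x0, steps=1):
    """GM(1,1) grey model: fit dx1/dt + a*x1 = b on the accumulated series
    x1 = cumsum(x0), then recover the fitted original series and a forecast."""
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[:-1] + x1[1:])                      # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[:len(x0)], x0_hat[len(x0):]

wind = np.array([5.2, 5.9, 6.5, 7.4, 8.1, 9.2, 10.1])  # toy wind-speed series
fit, fc = gm11(wind)
residuals = wind - fit                 # residual error sequence of the grey model
corrector = residuals[-3:].mean()      # stand-in for the BP network's prediction
corrected_forecast = fc[0] + corrector # grey forecast corrected by the residual
```

In the paper the corrector is a trained network's one-step prediction of the residual sequence; the structure of the correction (grey forecast plus predicted residual) is the same.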
Residual nilpotence and residual solubility of groups
International Nuclear Information System (INIS)
Mikhailov, R V
2005-01-01
The properties of the residual nilpotence and the residual solubility of groups are studied. The main objects under investigation are the class of residually nilpotent groups such that each central extension of these groups is also residually nilpotent and the class of residually soluble groups such that each Abelian extension of these groups is residually soluble. Various examples of groups not belonging to these classes are constructed by homological methods and methods of the theory of modules over group rings. Several applications of the theory under consideration are presented and problems concerning the residual nilpotence of one-relator groups are considered.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or as a C program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.
International Nuclear Information System (INIS)
Moore, Peter K.
2003-01-01
Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied
A new implementation of the CMRH method for solving dense linear systems
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
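A minimal sketch of the Hessenberg process behind CMRH may clarify how the basis vectors end up orthogonal to the standard unit vectors: each new vector is normalized to have a 1 in its leading nonzero position and zeros above it. The published method uses pivoting for stability; this sketch omits it, and the test matrix is an assumption.

```python
import numpy as np

def cmrh(A, b, m):
    """CMRH sketch without pivoting. The Hessenberg process builds Krylov
    vectors u_j whose first j-1 entries are zero (i.e. orthogonal to
    e_1..e_{j-1}); the iterate solves a small Hessenberg least-squares
    problem, in analogy with GMRES."""
    n = len(b)
    U = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = b[0]                               # assumes b[0] != 0 (no pivoting)
    U[:, 0] = b / beta
    for k in range(m):
        v = A @ U[:, k]
        for j in range(k + 1):                # eliminate the leading entries
            H[j, k] = v[j]
            v = v - H[j, k] * U[:, j]
        if k + 1 < n:
            H[k + 1, k] = v[k + 1]
            U[:, k + 1] = v / H[k + 1, k]
    e1 = np.zeros(m + 1); e1[0] = beta
    y = np.linalg.lstsq(H, e1, rcond=None)[0]  # small Hessenberg least squares
    return U[:, :m] @ y

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
b = np.ones(n) + 0.1 * rng.standard_normal(n)     # keeps b[0] away from zero
x = cmrh(A, b, n)                                 # full m = n steps: exact solve
```

Unlike Arnoldi, each elimination step touches only stored basis vectors and single components of v, which is the source of the halved arithmetic cost noted above.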
Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.
Energy Technology Data Exchange (ETDEWEB)
Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-09-29
The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of the workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-ups ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near-optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illustrations, 23 tables. New to this edition: Chapter 9.
DEFF Research Database (Denmark)
Markham, Annette
layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. Application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Energy Technology Data Exchange (ETDEWEB)
Grefenstette, J.J.
1994-12-31
Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
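The principles described above (a population of candidate solutions evolving through competition and controlled variation) can be sketched minimally. The OneMax objective, population size and operator settings are illustrative assumptions.

```python
import random

# Minimal genetic-algorithm sketch: tournament selection (competition),
# one-point crossover and bit-flip mutation (controlled variation).
random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)                            # OneMax: count the 1-bits

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, LENGTH)       # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < 1 / LENGTH) for b in child]  # mutation
        nxt.append(child)
    pop = nxt
best = max(pop, key=fitness)
```

Replacing `fitness` with any problem-specific objective is what lets the same loop serve the wide range of optimization and learning problems mentioned above.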
Active noise cancellation algorithms for impulsive noise.
Li, Peng; Yu, Xun
2013-04-01
Impulsive noise is an important challenge for the practical implementation of active noise control (ANC) systems. The advantages and disadvantages of the popular filtered-X least mean square (FXLMS) ANC algorithm and the nonlinear filtered-X least mean M-estimate (FXLMM) algorithm are discussed in this paper. A new modified FXLMM algorithm is also proposed to achieve better performance in controlling impulsive noise. Computer simulations and experiments are carried out for all three algorithms and the results are presented and analyzed. The results show that the FXLMM and modified FXLMM algorithms are more robust in suppressing the adverse effect of sudden large-amplitude impulses than the FXLMS algorithm; in particular, the proposed modified FXLMM algorithm achieves better stability without sacrificing residual-noise performance when encountering impulses.
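The baseline FXLMS algorithm discussed above can be sketched as follows. The primary and secondary paths, the tonal reference and the step size are synthetic assumptions, and the secondary path is assumed perfectly modeled; the FXLMM variants differ by passing the error through a robust M-estimate influence function before the weight update.

```python
import numpy as np

# FXLMS sketch: the reference is filtered through the secondary-path model
# before driving the LMS weight update (hence "filtered-X").
N = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(N))   # tonal reference noise
p = np.array([0.8, 0.4, 0.2])                 # assumed primary path
s = np.array([0.9, 0.3])                      # assumed secondary path (perfectly modeled)
d = np.convolve(x, p)[:N]                     # disturbance at the error microphone

L, mu = 8, 0.01
w = np.zeros(L)                               # adaptive control filter
xf = np.convolve(x, s)[:N]                    # filtered reference x' = s * x
e = np.zeros(N)
xbuf = np.zeros(L); xfbuf = np.zeros(L); ybuf = np.zeros(len(s))
for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[n]
    y = w @ xbuf                              # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[n] = d[n] - s @ ybuf                    # residual error after cancellation
    w += mu * e[n] * xfbuf                    # FXLMS weight update
```

An FXLMM-style variant would replace `e[n]` in the update with a clipped or M-estimate-weighted version, which is what limits the damage done by large impulses.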
International Nuclear Information System (INIS)
Berecz, I.
1982-01-01
Determination of the residual gas composition in vacuum systems by a special mass spectrometric method was presented. The quadrupole mass spectrometer (QMS) and its application in thin film technology were discussed. Results, namely partial pressure versus time curves and line spectra of the residual gases during vaporization of a Ti-Pd-Au alloy, were demonstrated, together with possible construction schemes for QMS residual gas analysers. (Sz.J.)
García, M D Gil; Culzoni, M J; De Zan, M M; Valverde, R Santiago; Galera, M Martínez; Goicoechea, H C
2008-02-01
A new powerful algorithm (unfolded partial least squares followed by residual bilinearization, U-PLS/RBL) was applied for the first time to second-order liquid chromatography with diode array detection (LC-DAD) data and compared with a well-established method (multivariate curve resolution-alternating least squares, MCR-ALS) for the simultaneous determination of eight tetracyclines (tetracycline, oxytetracycline, meclocycline, minocycline, metacycline, chlortetracycline, demeclocycline and doxycycline) in wastewaters. Tetracyclines were pre-concentrated using Oasis Max C18 cartridges and then separated on a Thermo Aquasil C18 (150 mm x 4.6 mm, 5 microm) column. The whole method was validated using Milli-Q water samples, and both univariate and multivariate analytical figures of merit were obtained. Additionally, two data pre-treatments were applied (baseline correction and piecewise direct standardization), which made it possible to correct the effect of breakthrough and to reduce the total interferences retained after pre-concentration of wastewaters. The results showed that the eight tetracycline antibiotics can be successfully determined in wastewaters, the drawbacks due to matrix interferences being adequately handled and overcome by using U-PLS/RBL.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Agricultural pesticide residues
International Nuclear Information System (INIS)
Fuehr, F.
1984-01-01
The utilization of tracer techniques in the study of agricultural pesticide residues is reviewed under the following headings: lysimeter experiments, micro-ecosystems, translocation in soil, degradation of pesticides in soil, biological availability of soil-applied substances, bound residues in the soil, use of macro- and microautography, double and triple labelling, use of tracer labelling in animal experiments. (U.K.)
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and from spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in the case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
International Nuclear Information System (INIS)
Medina Bermudez, Clara Ines
1999-01-01
The topic of solid residues is of particular interest and concern to the authorities, institutions and communities that identify in them a real threat to human health and the environment, in connection with the aesthetic deterioration of urban centers and the natural landscape, the proliferation of disease-transmitting vectors, and effects on biodiversity. Within the wide spectrum of topics related to environmental protection, the inadequate handling of solid and hazardous residues occupies an important place in the definition of environmentally sustainable policies and practices. Industrial development and population growth have produced a continuous increase in the production of solid residues, whose composition likewise grows more heterogeneous day by day. Sound handling rests on appropriate intervention at the different stages of integrated residue management, which include separation at the source, collection, handling, reuse, treatment, final disposal and the institutional organization of the administration. Hazardous residues raise even greater concern. They range from the pathogenic residues generated in health care and hospital establishments to the combustible, inflammable, explosive, radioactive, volatile, corrosive, reactive or toxic residues associated with numerous industrial processes common in our developing countries.
[Residual neuromuscular blockade].
Fuchs-Buder, T; Schmartz, D
2017-06-01
Even small degrees of residual neuromuscular blockade, i.e. a train-of-four (TOF) ratio >0.6, may lead to clinically relevant consequences for the patient. Especially upper airway integrity and the ability to swallow may still be markedly impaired. Moreover, increasing evidence suggests that residual neuromuscular blockade may affect the postoperative outcome of patients. The incidence of these small degrees of residual blockade is relatively high, and they may persist for more than 90 min after a single intubating dose of an intermediately acting neuromuscular blocking agent such as rocuronium or atracurium. Both neuromuscular monitoring and pharmacological reversal are key elements for the prevention of postoperative residual blockade.
TENORM: Wastewater Treatment Residuals
Water and wastes which have been discharged into municipal sewers are treated at wastewater treatment plants. These may contain trace amounts of both man-made and naturally occurring radionuclides which can accumulate in the treatment plant and residuals.
Residuation in orthomodular lattices
Directory of Open Access Journals (Sweden)
Chajda Ivan
2017-04-01
We show that every idempotent weakly divisible residuated lattice satisfying the double negation law can be transformed into an orthomodular lattice. The converse holds if adjointness is replaced by conditional adjointness. Moreover, we show that every positive right residuated lattice satisfying the double negation law and two further simple identities can be converted into an orthomodular lattice. In this case the converse statement is also true, and the correspondence is nearly one-to-one.
Characterization of Hospital Residuals
International Nuclear Information System (INIS)
Blanco Meza, A.; Bonilla Jimenez, S.
1997-01-01
The main objective of this investigation is the characterization of solid residuals. A description of the handling of the liquid and gaseous waste generated in hospitals is also given, identifying the sources where they originate. To achieve the proposed objective the work was divided into three stages. The first was planning and coordination with each hospital center, so that a schedule for waste collection could be determined. In the second stage fieldwork was carried out, consisting of gathering quantitative and qualitative information on the general state of residual handling. In the third and last stage, the information previously obtained was organized to express the results as the production rate per bed per day, the generation of solid residuals for the sampled services, the types of solid residuals and their density. With the results obtained, criteria are established to determine design parameters for final disposal, whether by incineration, trituration, sanitary landfill or recycling of some materials, and storage policies for solid residuals that allow the collection frequency to be determined. The study concludes that it is necessary to improve the conditions of residual handling in some respects: to provide the cleaning personnel with the collection and safety equipment minimally required to carry out this work efficiently, and to maintain control of all dangerous waste, such as sharp or contaminated materials. In this way an appreciable reduction of the environmental impact is guaranteed. (Author)
International Nuclear Information System (INIS)
Hwang, F-N; Wei, Z-H; Huang, T-M; Wang Weichung
2010-01-01
We develop a parallel Jacobi-Davidson approach for finding a partial set of eigenpairs of large sparse polynomial eigenvalue problems with application in quantum dot simulation. A Jacobi-Davidson eigenvalue solver is implemented based on the Portable, Extensible Toolkit for Scientific Computation (PETSc). The eigensolver thus inherits PETSc's efficient and varied parallel operations, linear solvers, preconditioning schemes, and ease of use. The parallel eigenvalue solver is then used to solve higher-degree polynomial eigenvalue problems arising in numerical simulations of three-dimensional quantum dots governed by Schroedinger's equation. We find that the parallel restricted additive Schwarz preconditioner in conjunction with a parallel Krylov subspace method (e.g. GMRES) can solve the correction equations, the most costly step in the Jacobi-Davidson algorithm, very efficiently in parallel. Moreover, the overall performance is quite satisfactory: we have observed nearly perfect superlinear speedup using up to 320 processors. The parallel eigensolver can find all target interior eigenpairs of a quintic polynomial eigenvalue problem with more than 32 million variables within 12 minutes using 272 Intel 3.0 GHz processors.
Algorithms for unweighted least-squares factor analysis
Krijnen, WP
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values.
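The monotone decrease can be illustrated with a MinRes-style coordinate update: each row of the loading matrix is re-solved by least squares against the off-diagonal covariances, holding the other rows fixed, so the ULS function value cannot increase. The synthetic two-factor data are an illustrative assumption.

```python
import numpy as np

def uls(S, L):
    """ULS objective: sum of squared off-diagonal residuals of S - L L^T."""
    R = S - L @ L.T
    return np.sum(np.tril(R, -1) ** 2)

def minres_sweep(S, L):
    """One sweep of row-wise least-squares updates (MinRes-style)."""
    p = len(S)
    for i in range(p):
        mask = np.arange(p) != i
        Li = L[mask]                          # loadings of the other variables
        L[i] = np.linalg.lstsq(Li, S[mask, i], rcond=None)[0]
    return L

rng = np.random.default_rng(0)
p, r = 6, 2
true_L = rng.uniform(0.3, 0.9, size=(p, r))
S = true_L @ true_L.T + np.diag(rng.uniform(0.2, 0.5, size=p))  # synthetic covariances

L = rng.standard_normal((p, r)) * 0.1         # small random start
vals = [uls(S, L)]
for _ in range(25):
    L = minres_sweep(S, L)
    vals.append(uls(S, L))
```

Because each row update exactly minimizes the objective over that row, the recorded `vals` form the monotonically decreasing sequence the abstract refers to; only the off-diagonal entries enter, so the unique variances on the diagonal never need to be estimated explicitly.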
International Nuclear Information System (INIS)
2013-06-01
The IAEA attaches great importance to the dissemination of information that can assist Member States in the development, implementation, maintenance and continuous improvement of systems, programmes and activities that support the nuclear fuel cycle and nuclear applications, and that address the legacy of past practices and accidents. However, radioactive residues are found not only in nuclear fuel cycle activities, but also in a range of other industrial activities, including: - Mining and milling of metalliferous and non-metallic ores; - Production of non-nuclear fuels, including coal, oil and gas; - Extraction and purification of water (e.g. in the generation of geothermal energy, as drinking and industrial process water; in paper and pulp manufacturing processes); - Production of industrial minerals, including phosphate, clay and building materials; - Use of radionuclides, such as thorium, for properties other than their radioactivity. Naturally occurring radioactive material (NORM) may lead to exposures at some stage of these processes and in the use or reuse of products, residues or wastes. Several IAEA publications address NORM issues with a special focus on some of the more relevant industrial operations. This publication attempts to provide guidance on managing residues arising from different NORM type industries, and on pertinent residue management strategies and technologies, to help Member States gain perspectives on the management of NORM residues
Energy Technology Data Exchange (ETDEWEB)
Ezeilo, A.N.; Webster, G.A. [Imperial College, London (United Kingdom); Webster, P.J. [Salford Univ. (United Kingdom)
1997-04-01
Because neutrons can penetrate up to 50 mm into most engineering materials, they are uniquely suited to establishing residual-stress distributions non-destructively. D1A is particularly suited for through-surface measurements, as it does not suffer from the instrumental surface aberrations commonly found on multidetector instruments, while D20 is best for fast internal-strain scanning. Two examples of residual-stress measurements, in a shot-peened material and in a weld, are presented to demonstrate the attractive features of both instruments. (author).
Composition of carbonization residues
Energy Technology Data Exchange (ETDEWEB)
Hupfer; Leonhardt
1943-11-27
This report compared the composition of samples from Wesseling and Leuna. In each case the sample was a residue from carbonization of the residues from hydrogenation of the brown coal processed at the plant. The composition was given in terms of volatile components, fixed carbon, ash, water, carbon, hydrogen, oxygen, nitrogen, volatile sulfur, and total sulfur. The result of carbonization was given in terms of (ash and) coke, tar, water, gas and losses, and bitumen. The composition of the ash was given in terms of silicon dioxide, ferric oxide, aluminum oxide, calcium oxide, magnesium oxide, potassium and sodium oxides, sulfur trioxide, phosphorus pentoxide, chlorine, and titanium oxide. The most important difference between the properties of the two samples was that the residue from Wesseling only contained 4% oil, whereas that from Leuna had about 26% oil. Taking into account the total amount of residue processed yearly, the report noted that better carbonization at Leuna could save 20,000 metric tons/year of oil. Some other comparisons of data included about 33% volatiles at Leuna vs. about 22% at Wesseling, about 5 1/2% sulfur at Leuna vs. about 6 1/2% at Wesseling, but about 57% ash for both. Composition of the ash differed quite a bit between the two. 1 table.
Designing with residual materials
Walhout, W.; Wever, R.; Blom, E.; Addink-Dölle, L.; Tempelman, E.
2013-01-01
Many entrepreneurial businesses have attempted to create value based on the residual material streams of third parties. Based on ‘waste’ materials they designed products, around which they built their company. Such activities have the potential to yield sustainable products. Many of such companies
Identification of residue pairing in interacting β-strands from a predicted residue contact map.
Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng
2018-04-19
Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information on residue pairing in β strands can be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly-β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), achieving F1-scores of ~62% and ~76% at the residue level and strand level, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves impressively higher performance, with F1-scores reaching ~76% and ~86% at the residue level and strand level, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly-β proteins, the average TM-score reaches 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction by RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact maps. Prediction results of our algorithm could be directly applied to effectively facilitate the practical structure prediction of mainly-β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or the GitHub address https://github.com/wzmao/RDb2C.
Harmonic Components Based Post-Filter Design for Residual Echo Suppression
Lee, Minwoo; Lee, Yoonjae; Kim, Kihyeon; Ko, Hanseok
In this Letter, a residual acoustic echo suppression method is proposed to enhance the speech quality of hands-free communication in an automobile environment. The echo signal is normally a human voice with harmonic characteristics in a hands-free communication environment. The proposed algorithm estimates the residual echo signal by emphasizing its harmonic components. The estimated residual echo is used to obtain the signal-to-interference ratio (SIR) information at the acoustic echo canceller output. Then, the SIR based Wiener post-filter is constructed to reduce both the residual echo and noise. The experimental results confirm that the proposed algorithm is superior to the conventional residual echo suppression algorithm in terms of the echo return loss enhancement (ERLE) and the segmental signal-to-noise ratio (SEGSNR).
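The SIR-based Wiener post-filter step described above can be sketched per frequency bin: given power spectra of the AEC output, of the (harmonics-emphasized) residual echo estimate and of the noise, the gain SIR/(1+SIR) is applied bin by bin. All spectra and the spectral floor below are synthetic assumptions.

```python
import numpy as np

def sir_wiener_gain(out_psd, res_echo_psd, noise_psd, g_min=0.05):
    """Per-bin Wiener gain G = SIR/(1+SIR), with a spectral floor g_min."""
    interference = res_echo_psd + noise_psd
    speech_psd = np.maximum(out_psd - interference, 1e-12)  # crude speech PSD estimate
    sir = speech_psd / np.maximum(interference, 1e-12)      # signal-to-interference ratio
    return np.maximum(sir / (1.0 + sir), g_min)

# Synthetic 8-bin example: odd bins dominated by harmonic residual echo.
speech = np.array([4.0, 0.1, 3.0, 0.1, 2.0, 0.1, 1.0, 0.1])
echo   = np.array([0.1, 2.0, 0.1, 1.5, 0.1, 1.0, 0.1, 0.5])
noise  = np.full(8, 0.05)
gain = sir_wiener_gain(speech + echo + noise, echo, noise)
```

Bins dominated by the estimated residual echo receive a gain near the floor, while speech-dominant bins pass nearly unattenuated, which is how the post-filter suppresses both residual echo and noise at once.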
Residual signal feature extraction for gearbox planetary stage fault detection
DEFF Research Database (Denmark)
Skrimpas, Georgios Alexandros; Ursin, Thomas; Sweeney, Christian Walsted
2017-01-01
Statistical features measuring the signal energy and Gaussianity are calculated from the residual signals between each pair from the first to the fifth tooth mesh frequency of the meshing process in a multi-stage wind turbine gearbox. The suggested algorithm includes resampling from time to angular domain...
A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems
Directory of Open Access Journals (Sweden)
William La Cruz
2014-05-01
This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear, time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose computation is an essential step of the square-root balanced truncation algorithm and entails a high computational cost. Numerical experiments are included to highlight the efficacy of the proposed approach.
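The square-root balanced truncation step that the paper builds on can be sketched as follows: solve the two Lyapunov equations for the gramians, factor them, and truncate via an SVD of the cross product. The fourth-order test system is an illustrative assumption; the paper's contribution is to replace the exact gramian solves below with a cheaper residual-based approximation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Stable, controllable and observable toy system (assumed for illustration).
A = np.array([[-1.0, 0.2, 0.0, 0.0],
              [ 0.0,-2.0, 0.2, 0.0],
              [ 0.0, 0.0,-3.0, 0.2],
              [ 0.0, 0.0, 0.0,-4.0]])
B = np.ones((4, 1))
C = np.ones((1, 4))

P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian
S = cholesky(P, lower=True)                     # P = S S^T
R = cholesky(Q, lower=True)                     # Q = R R^T
U, hsv, Vt = svd(R.T @ S)                       # hsv = Hankel singular values

r = 2                                           # reduced order
T1 = S @ Vt[:r].T / np.sqrt(hsv[:r])            # right projection
Tl = (U[:, :r] / np.sqrt(hsv[:r])).T @ R.T      # left projection
Ar, Br, Cr = Tl @ A @ T1, Tl @ B, C @ T1        # reduced-order model
```

The decay of `hsv` indicates how many states can be discarded; the residual approach proposed in the paper approximates `P` and `Q` before this factor-and-truncate step.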
Residual stresses in material processing
Kozaczek, K. J.; Watkins, T. R.; Hubbard, C. R.; Wang, Xun-Li; Spooner, S.
Material manufacturing processes often introduce residual stresses into the product. The residual stresses affect the properties of the material and are often detrimental. Therefore, the distribution and magnitude of residual stresses in the final product are usually an important factor in manufacturing process optimization or component life prediction. The present paper briefly discusses the causes of residual stresses. It then addresses the direct, nondestructive methods of residual stress measurement by X-ray and neutron diffraction. Examples are presented to demonstrate the importance of residual stress measurement in machining and joining operations.
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithms technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms. It contains a catalog of algorithmic resources, implementations and a bibliography
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....
Tewari, Krishna C.; Foster, Edward P.
1985-01-01
Coal solids (SRC) and distillate oils are combined to afford single-phase blends of residual oils which have utility as fuel oil substitutes. The components are combined on the basis of their respective polarities, that is, on the basis of their heteroatom content, to assure complete solubilization of SRC. The resulting composition is a fuel oil blend which retains its stability and homogeneity over the long term.
Composition of carbonization residues
Energy Technology Data Exchange (ETDEWEB)
Hupfer; Leonhardt
1943-11-30
This report gave a record of the composition of several samples of residues from the carbonization of various hydrogenation residues from processing some type of coal or tar in the Bergius process. These included Silesian bituminous coal processed at 600 atm. with iron catalyst, in one case to produce gasoline and middle oil and in another case to produce excess heavy oil, Scholven coal processed at 250 atm. with tin oxalate and chlorine catalyst, Bruex tar processed in a 10-liter oven using iron catalyst, and a pitch mixture from Welheim processed in a 10-liter oven using iron catalyst. The values gathered were compared with a few corresponding values estimated for Boehlen tar and Gelsenberg coal based on several assumptions outlined in the report. The data recorded included percentage of ash in the dry residue and percentages of carbon, hydrogen, oxygen, nitrogen, chlorine, total sulfur, and volatile sulfur. The percentage of ash varied from 21.43% in the case of Bruex tar to 53.15% in the case of one of the Silesian coals. Percentage of carbon varied from 44.0% in the case of Scholven coal to 78.03% in the case of Bruex tar. Percentage of total sulfur varied from 2.28% for Bruex tar to a recorded 5.65% for one of the Silesian coals and an estimated 6% for Boehlen tar. 1 table.
Exploiting residual information in the parameter choice for discrete ill-posed problems
DEFF Research Database (Denmark)
Hansen, Per Christian; Kilmer, Misha E.; Kjeldsen, Rikke Høj
2006-01-01
Most algorithms for choosing the regularization parameter in a discrete ill-posed problem are based on the norm of the residual vector. In this work we propose a different approach, where we seek to use all the information available in the residual vector. We present important relations between...
Residue preference mapping of ligand fragments in the Protein Data Bank.
Wang, Lirong; Xie, Zhaojun; Wipf, Peter; Xie, Xiang-Qun
2011-04-25
The interaction between small molecules and proteins is one of the major concerns for structure-based drug design because the principles of protein-ligand interactions and molecular recognition are not thoroughly understood. Fortunately, the analysis of protein-ligand complexes in the Protein Data Bank (PDB) enables unprecedented possibilities for new insights. Herein, we applied molecule-fragmentation algorithms to split the ligands extracted from PDB crystal structures into small fragments. Subsequently, we have developed a ligand fragment and residue preference mapping (LigFrag-RPM) algorithm to map the profiles of the interactions between these fragments and the 20 proteinogenic amino acid residues. A total of 4032 fragments were generated from 71 798 PDB ligands by a ring cleavage (RC) algorithm. Among these ligand fragments, 315 unique fragments were characterized with the corresponding fragment-residue interaction profiles by counting residues close to these fragments. The interaction profiles revealed that these fragments have specific preferences for certain types of residues. The applications of these interaction profiles were also explored and evaluated in case studies, showing great potential for the study of protein-ligand interactions and drug design. Our studies demonstrated that the fragment-residue interaction profiles generated from the PDB ligand fragments can be used to detect whether these fragments are in their favorable or unfavorable environments. The algorithm for ligand fragment and residue preference mapping (LigFrag-RPM) developed here also has the potential to guide lead chemistry modifications as well as binding residue predictions.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Deflation of eigenvalues for iterative methods in lattice QCD
Energy Technology Data Exchange (ETDEWEB)
Darnell, Dean; Morgan, Ronald B.; Wilcox, Walter
2004-03-01
Work on generalizing the deflated, restarted GMRES algorithm, useful in lattice studies using stochastic noise methods, is reported. We first show how the multi-mass extension of deflated GMRES can be implemented. We then give a deflated GMRES method that can be used on multiple right-hand sides of Aχ = b in an efficient manner. We also discuss and give numerical results on the possibility of combining deflated GMRES for the first right-hand side with a deflated BiCGStab algorithm for the subsequent right-hand sides.
Deflation of eigenvalues for iterative methods in lattice QCD
International Nuclear Information System (INIS)
Darnell, Dean; Morgan, Ronald B.; Wilcox, Walter
2004-01-01
Work on generalizing the deflated, restarted GMRES algorithm, useful in lattice studies using stochastic noise methods, is reported. We first show how the multi-mass extension of deflated GMRES can be implemented. We then give a deflated GMRES method that can be used on multiple right-hand sides of Aχ = b in an efficient manner. We also discuss and give numerical results on the possibility of combining deflated GMRES for the first right-hand side with a deflated BiCGStab algorithm for the subsequent right-hand sides.
Quadratic residues and non-residues selected topics
Wright, Steve
2016-01-01
This book offers an account of the classical theory of quadratic residues and non-residues with the goal of using that theory as a lens through which to view the development of some of the fundamental methods employed in modern elementary, algebraic, and analytic number theory. The first three chapters present some basic facts and the history of quadratic residues and non-residues and discuss various proofs of the Law of Quadratic Reciprocity in depth, with an emphasis on the six proofs that Gauss published. The remaining seven chapters explore some interesting applications of the Law of Quadratic Reciprocity, prove some results concerning the distribution and arithmetic structure of quadratic residues and non-residues, provide a detailed proof of Dirichlet’s Class-Number Formula, and discuss the question of whether quadratic residues are randomly distributed. The text is a valuable resource for graduate and advanced undergraduate students as well as for mathematicians interested in number theory.
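One of the basic facts such a text builds on, Euler's criterion, gives a direct computational test for quadratic residuosity: for an odd prime p and a not divisible by p, a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p). A minimal sketch:

```python
# Quadratic residue test via Euler's criterion, using Python's built-in
# three-argument pow for modular exponentiation.

def is_quadratic_residue(a, p):
    """Euler's criterion; p must be an odd prime, a not divisible by p."""
    return pow(a, (p - 1) // 2, p) == 1

# The quadratic residues mod 11 are {1, 3, 4, 5, 9}
print(sorted(a for a in range(1, 11) if is_quadratic_residue(a, 11)))
```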
DEFF Research Database (Denmark)
Carbonara, Emanuela; Guerra, Alice; Parisi, Francesco
2016-01-01
Economic models of tort law evaluate the efficiency of liability rules in terms of care and activity levels. A liability regime is optimal when it creates incentives to maximize the value of risky activities net of accident and precaution costs. The allocation of primary and residual liability...... the virtues and limits of loss-sharing rules in generating optimal (second-best) incentives and allocations of risk. We find that loss sharing may be optimal in the presence of countervailing policy objectives, homogeneous risk avoiders, and subadditive risk, which potentially offers a valuable tool...
Energy Technology Data Exchange (ETDEWEB)
Jungersen, G. [Dansk Teknologisk Inst. (Denmark); Kivaisi, A.; Rubindamayugi, M. [Univ. of Dar es Salaam (Tanzania, United Republic of)
1998-05-01
The main objectives of this report are: To analyse the bioenergy potential of the Tanzanian agro-industries, with special emphasis on the Sisal industry, the largest producer of agro-industrial residues in Tanzania; and to upgrade the human capacity and research potential of the Applied Microbiology Unit at the University of Dar es Salaam, in order to ensure a scientific and technological support for future operation and implementation of biogas facilities and anaerobic water treatment systems. The experimental work on sisal residues contains the following issues: Optimal reactor set-up and performance; Pre-treatment methods for treatment of fibre fraction in order to increase the methane yield; Evaluation of the requirement for nutrient addition; Evaluation of the potential for bioethanol production from sisal bulbs. The processing of sisal leaves into dry fibres (decortication) has traditionally been done by the wet processing method, which consumes considerable quantities of water and produces large quantities of waste water. The Tanzania Sisal Authority (TSA) is now developing a dry decortication method, which consumes less water and produces a waste product with 12-15% TS, which is feasible for treatment in CSTR systems (Continuously Stirred Tank Reactors). (EG)
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where these algorithms can be applied is shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
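The iteration described above can be sketched as a minimal power-iteration PageRank. This is an illustrative version, not the thesis code: it assumes every page has at least one outgoing link, and the damping factor 0.85 is the conventional choice rather than a value taken from the thesis.

```python
# Minimal PageRank power iteration over an adjacency list. The loop
# repeats until successive rank vectors differ by less than a tolerance,
# as the record describes.

def pagerank(links, damping=0.85, tol=1e-9):
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {}
        for p in pages:
            # Each page q that links to p donates rank[q] / outdegree(q)
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1.0 - damping) / n + damping * incoming
        diff = sum(abs(new[p] - rank[p]) for p in pages)
        rank = new
        if diff < tol:
            return rank

ranks = pagerank({'A': ['B', 'C'], 'B': ['C'], 'C': ['A']})
print(ranks)
```

Page C collects links from both A and B, so it ends up with the largest rank; the ranks sum to 1.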
Digital Arithmetic: Division Algorithms
DEFF Research Database (Denmark)
Montuschi, Paolo; Nannarelli, Alberto
2017-01-01
implement it in hardware to not compromise the overall computation performance. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires
Warnock, April M.; Hagen, Scott C.; Passeri, Davina L.
2015-01-01
Marine tar residues originate from natural and anthropogenic oil releases into the ocean environment and are formed after liquid petroleum is transformed by weathering, sedimentation, and other processes. Tar balls, tar mats, and tar patties are common examples of marine tar residues and can range in size from millimeters in diameter (tar balls) to several meters in length and width (tar mats). These residues can remain in the ocean environment indefinitely, decomposing or becoming buried in ...
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
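The three-level update described above can be sketched as follows. This is an illustrative reconstruction with assumed variable names and toy data, not the authors' code: the input tap is quantized to {-1, 0, +1} with a threshold before entering the weight update.

```python
# Sketch of a three-level clipped LMS weight update (MCLMS-style idea):
# quantize each input sample to {-1, 0, +1} with threshold t, then apply
# the usual LMS correction mu * e * q(x).

def quantize(x, t):
    return 1 if x > t else (-1 if x < -t else 0)

def mclms_step(w, x_buf, d, mu, t):
    """One update: w = weights, x_buf = input taps, d = desired sample,
    mu = step size, t = clipping threshold."""
    y = sum(wi * xi for wi, xi in zip(w, x_buf))   # filter output
    e = d - y                                       # error signal
    return [wi + mu * e * quantize(xi, t) for wi, xi in zip(w, x_buf)]

w = [0.0, 0.0]
for _ in range(200):
    w = mclms_step(w, [1.0, -1.0], 0.5, 0.1, 0.2)  # toy stationary input
print(w)
```

On this toy stationary input the weights settle so that the filter output matches the desired value 0.5.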
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding the codes of high rates. We simulated our algorithm in various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of the OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits the domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
ResBoost: characterizing and predicting catalytic residues in enzymes
Directory of Open Access Journals (Sweden)
Freund Yoav
2009-06-01
Full Text Available Abstract Background Identifying the catalytic residues in enzymes can aid in understanding the molecular basis of an enzyme's function and has significant implications for designing new drugs, identifying genetic disorders, and engineering proteins with novel functions. Since experimentally determining catalytic sites is expensive, better computational methods for identifying catalytic residues are needed. Results We propose ResBoost, a new computational method to learn characteristics of catalytic residues. The method effectively selects and combines rules of thumb into a simple, easily interpretable logical expression that can be used for prediction. We formally define the rules of thumb that are often used to narrow the list of candidate residues, including residue evolutionary conservation, 3D clustering, solvent accessibility, and hydrophilicity. ResBoost builds on two methods from machine learning, the AdaBoost algorithm and Alternating Decision Trees, and provides precise control over the inherent trade-off between sensitivity and specificity. We evaluated ResBoost using cross-validation on a dataset of 100 enzymes from the hand-curated Catalytic Site Atlas (CSA). Conclusion ResBoost achieved 85% sensitivity for a 9.8% false positive rate and 73% sensitivity for a 5.7% false positive rate. ResBoost reduces the number of false positives by up to 56% compared to the use of evolutionary conservation scoring alone. We also illustrate the ability of ResBoost to identify recently validated catalytic residues not listed in the CSA.
Evaluation of residue-residue contact predictions in CASP9
Monastyrskyy, Bohdan
2011-01-01
This work presents the results of the assessment of the intramolecular residue-residue contact predictions submitted to CASP9. The methodology for the assessment does not differ from that used in previous CASPs, with two basic evaluation measures being the precision in recognizing contacts and the difference between the distribution of distances in the subset of predicted contact pairs versus all pairs of residues in the structure. The emphasis is placed on the prediction of long-range contacts (i.e., contacts between residues separated by at least 24 residues along the sequence) in target proteins that cannot be easily modeled by homology. Although there is considerable activity in the field, the current analysis reports no discernible progress since CASP8.
2D-RBUC for efficient parallel compression of residuals
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with an efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades off a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
Lifetime and residual strength of materials
DEFF Research Database (Denmark)
Nielsen, Lauge Fuglsang
1997-01-01
The DVM-theory (Damaged Viscoelastic Material) previously developed by the author to predict lifetime of wood subjected to static loads is further developed in this paper such that harmonic load variations can also be considered. Lifetime (real time or number of cycles) is predicted as a function...... of load amplitude, load average, fractional time under maximum load, and load frequency. The analysis includes prediction of residual strength (re-cycle strength) during the process of load cycling. It is concluded that number of cycles to failure is a very poor design criterion. It is demonstrated how...... the theory developed can be generalized also to consider non-harmonic load variations. Algorithms are presented for design purposes which may be suggested as qualified alternatives to the Palmgren-Miner's methods normally used in fatigue analysis of materials under arbitrary load variations. Prediction
Landfilling of waste incineration residues
DEFF Research Database (Denmark)
Christensen, Thomas Højlund; Astrup, Thomas; Cai, Zuansi
2002-01-01
Residues from waste incineration are bottom ashes and air-pollution-control (APC) residues including fly ashes. The leaching of heavy metals and salts from the ashes is substantial and a wide spectrum of leaching tests and corresponding criteria have been introduced to regulate the landfilling...
A method to evaluate residual phase error for polar formatted synthetic aperture radar systems
Musgrove, Cameron; Naething, Richard
2013-05-01
Synthetic aperture radar systems that use the polar format algorithm are subject to a focused scene size limit inherent to the polar format algorithm. The classic focused scene size limit is determined from the dominant residual range phase error term. Given the many sources of phase error in a synthetic aperture radar, a system designer is interested in how much phase error results from the assumptions made with the polar format algorithm. Autofocus algorithms have limits to the amount and type of phase error that can be corrected. Current methods correct only one or a few terms of the residual phase error. A system designer needs to be able to evaluate the contribution of the residual or uncorrected phase error terms to determine the new focused scene size limit. This paper describes a method to estimate the complete residual phase error, not just one or a few of the dominant residual terms. This method is demonstrated with polar format image formation, but is equally applicable to other image formation algorithms. A benefit for the system designer is that additional correction terms can be added or deleted from the analysis as necessary to evaluate the resulting effect upon image quality.
Directory of Open Access Journals (Sweden)
David Siegel
2011-01-01
Full Text Available This paper presents a health assessment methodology, as well as specific residual processing and figure of merit algorithms for anemometers in two different configurations. The methodology and algorithms are applied to data sets provided by the Prognostics and Health Management Society 2011 Data Challenge. The two configurations consist of the “paired” data set in which two anemometers are positioned at the same height, and the “shear” data set which includes an array of anemometers at different heights. Various wind speed statistics, wind direction, and ambient temperature information are provided, in which the objective is to classify the anemometer health status during a set of samples from a 5-day period. The proposed health assessment methodology consists of a set of data processing steps that include: data filtering and pre-processing, a residual or difference calculation, and a k-means clustering based figure of merit calculation. The residual processing for the paired data set was performed using a straightforward difference calculation, while the shear data set utilized an additional set of algorithm processing steps to calculate a weighted residual value for each anemometer. The residual processing algorithm for the shear data set used a set of auto-associative neural network models to learn the underlying correlation relationship between the anemometer sensors and to calculate a weighted residual value for each of the anemometer wind speed measurements. A figure of merit value based on the mean value of the smaller of the two clusters for the wind speed residual is used to determine the health status of each anemometer. Overall, the proposed methodology and algorithms show promise: this approach achieved the top score in the PHM 2011 Data Challenge Competition. Using different clustering algorithms or density estimation methods for the figure of merit calculation is being considered for future work.
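The paired-configuration pipeline (difference residuals, then a two-cluster figure of merit) can be sketched as follows. The data, the hand-rolled 1-D k-means, and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: the residual for the "paired" set is the wind-speed difference
# between the two co-located anemometers; the residuals are split into
# two clusters (1-D k-means, k = 2), and the figure of merit is the mean
# of the smaller cluster, as the record describes.

def two_means(values, iters=50):
    """1-D k-means with k = 2; returns the two clusters."""
    c = [min(values), max(values)]          # initial centers
    groups = [values, []]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return groups

def figure_of_merit(speed_a, speed_b):
    residuals = [abs(a - b) for a, b in zip(speed_a, speed_b)]
    g0, g1 = two_means(residuals)
    small = g0 if len(g0) <= len(g1) else g1
    return sum(small) / len(small)          # mean of the smaller cluster

# Mostly agreeing sensors with a few large-residual (suspect) samples
a = [5.0, 5.1, 4.9, 5.0, 5.2, 5.1]
b = [5.0, 5.0, 5.0, 6.5, 5.1, 6.6]
print(figure_of_merit(a, b))
```

A large figure of merit flags samples where the paired sensors disagree strongly, which is the signature of a degrading anemometer.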
Statistical inference on residual life
Jeong, Jong-Hyeon
2014-01-01
This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.
Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
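A minimal concrete instance of such a forgetting scheme is recursive estimation of a slowly varying mean with an exponential forgetting factor. This is an illustrative special case, not the paper's general algorithm.

```python
# Recursive estimation with exponential forgetting factor lam in (0, 1):
# smaller lam forgets old data faster; lam -> 1 recovers the plain mean.

def forgetting_mean(samples, lam=0.9):
    est, weight = 0.0, 0.0
    history = []
    for y in samples:
        weight = lam * weight + 1.0        # effective number of samples
        est = est + (y - est) / weight     # recursive update
        history.append(est)
    return history

# The estimate tracks a level shift that a plain running mean would smear
track = forgetting_mean([1.0] * 30 + [5.0] * 30, lam=0.8)
print(track[29], track[-1])
```

After the level shift the estimate converges to the new level within a few effective window lengths (here 1/(1 - lam) = 5 samples), whereas an unweighted mean would still be dragged toward the old level.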
Automatic prediction of catalytic residues by modeling residue structural neighborhood
Directory of Open Access Journals (Sweden)
Passerini Andrea
2010-03-01
Full Text Available Abstract Background Prediction of catalytic residues is a major step in characterizing the function of enzymes. In its simpler formulation, the problem can be cast into a binary classification task at the residue level, by predicting whether the residue is directly involved in the catalytic process. The task is quite hard also when structural information is available, due to the rather wide range of roles a functional residue can play and to the large imbalance between the number of catalytic and non-catalytic residues. Results We developed an effective representation of structural information by modeling spherical regions around candidate residues, and extracting statistics on the properties of their content such as physico-chemical properties, atomic density, flexibility, presence of water molecules. We trained an SVM classifier combining our features with sequence-based information and previously developed 3D features, and compared its performance with the most recent state-of-the-art approaches on different benchmark datasets. We further analyzed the discriminant power of the information provided by the presence of heterogens in the residue neighborhood. Conclusions Our structure-based method achieves consistent improvements on all tested datasets over both sequence-based and structure-based state-of-the-art approaches. Structural neighborhood information is shown to be responsible for such results, and predicting the presence of nearby heterogens seems to be a promising direction for further improvements.
Clustering-driven residue filter for profile measurement system.
Jiang, Jun; Cheng, Jun; Zhou, Ying; Chen, Guang
2011-02-01
The profile measurement system is widely used in industrial quality control, and phase unwrapping (PU) is a key technique. An algorithm-driven PU is often used to reduce the impact of noise-induced residues to retrieve the most reliable solution. However, measuring speed is lowered due to the searching of optimal integration paths or correcting of phase gradients. From the viewpoint of the rapidity of the system, this paper characterizes the noise-induced residues, and it proposes a clustering-driven residue filter based on a set of directional windows. The proposed procedure makes the wrapped phases included in the filtering window have more similar values, and it groups the correct and noisy phases into individual clusters along the local fringe direction adaptively. It is effective for the tightly packed fringes, and it converts the algorithm-driven PU to the residue-filtering-driven one. This improves the operating speed of the 3D reconstruction significantly. The tests performed on simulated and real projected fringes confirm the validity of our approach.
Development of a General Modelling Methodology for Vacuum Residue Hydroconversion
Directory of Open Access Journals (Sweden)
Pereira de Oliveira L.
2013-11-01
Full Text Available This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach allows one to overcome the lack of molecular detail of the petroleum fractions and to simulate the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte-Carlo method (kMC) is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte-Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie’s Stochastic Simulation Algorithm. Despite the fact that both approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides high amounts of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Finite element solution algorithm for incompressible fluid dynamics
Baker, A. J.
1974-01-01
A finite element solution algorithm is established for the two-dimensional Navier-Stokes equations governing the transient motion of a viscous incompressible fluid, i.e., hydrodynamics. Dependent variable transformation renders the differential equation description uniformly elliptic. The finite element algorithm is established using the Galerkin criterion on a local basis within the Method of Weighted Residuals. It is unconstrained with respect to system linearity, computational mesh uniformity or solution domain closure regularity. The finite element matrices are established using a linear 'natural coordinate function' description. Computational solutions using the COMOC computer program illustrate the various features of the algorithm including recirculating flows.
A critical analysis of computational protein design with sparse residue interaction graphs.
Jain, Swati; Jou, Jonathan D; Georgiev, Ivelin S; Donald, Bruce R
2017-03-01
Protein design algorithms enumerate a combinatorial number of candidate structures to compute the Global Minimum Energy Conformation (GMEC). To efficiently find the GMEC, protein design algorithms must methodically reduce the conformational search space. By applying distance and energy cutoffs, the protein system to be designed can thus be represented using a sparse residue interaction graph, where the number of interacting residue pairs is less than all pairs of mutable residues, and the corresponding GMEC is called the sparse GMEC. However, ignoring some pairwise residue interactions can lead to a change in the energy, conformation, or sequence of the sparse GMEC vs. the original or the full GMEC. Despite the widespread use of sparse residue interaction graphs in protein design, the above mentioned effects of their use have not been previously analyzed. To analyze the costs and benefits of designing with sparse residue interaction graphs, we computed the GMECs for 136 different protein design problems both with and without distance and energy cutoffs, and compared their energies, conformations, and sequences. Our analysis shows that the differences between the GMECs depend critically on whether or not the design includes core, boundary, or surface residues. Moreover, neglecting long-range interactions can alter local interactions and introduce large sequence differences, both of which can result in significant structural and functional changes. Designs on proteins with experimentally measured thermostability show it is beneficial to compute both the full and the sparse GMEC accurately and efficiently. To this end, we show that a provable, ensemble-based algorithm can efficiently compute both GMECs by enumerating a small number of conformations, usually fewer than 1000. This provides a novel way to combine sparse residue interaction graphs with provable, ensemble-based algorithms to reap the benefits of sparse residue interaction graphs while avoiding their
Spectrum aware fuzzy clustering algorithm for cognitive radio ...
African Journals Online (AJOL)
This paper proposes a SAFCA for a self-organized CH selection within a CRSN. The algorithm caters to CR and WSN constraints by exploiting the dynamic spectrum access and fuzzy inference technique for an energy efficient CRSN. It utilizes channel availability and fuzzy parameters of residual energy, communication cost ...
Residual stress by repair welds
International Nuclear Information System (INIS)
Mochizuki, Masahito; Toyoda, Masao
2003-01-01
Residual stress by repair welds is computed using thermal elastic-plastic analysis with the phase-transformation effect. Coupled temperature, microstructure, and stress-strain fields are simulated in the finite-element analysis. The weld bond of a plate butt-welded joint is gouged and then filled with weld metal in the repair process. The heat source is moved synchronously with the deposition of the finite elements representing the weld deposition. Microstructure is modelled using the CCT diagram, and the transformation behavior in the repair weld is also simulated. The effects of initial stress, heat input, and weld length on the residual stress distribution are studied from the numerical results. The initial residual stress before repair welding has no influence on the residual stress near the weld metal after repair, because the initial stress near the weld metal is released by the high temperature of the repair weld and stress is then regenerated by the repair weld. Heat input affects the residual stress distribution, not in its magnitude but in the extent of the distribution zone. Weld length should be considered in order to reduce the magnitude of residual stress at the edge of the weld bead; a short bead induces high tensile residual stress. (author)
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
Indian Academy of Sciences (India)
Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time exists for computing the exact solution. Computing all-pairs distances: a good algorithm with respect to both space and time exists, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 Issue 8 August 1997 pp 6-17. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017 ...
Introduction to Algorithms -14 ...
Indian Academy of Sciences (India)
As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs and the formal notations used to write programs are called programming languages. The concept of a programming language has been around ...
RESIDUAL RISK ASSESSMENT: ETHYLENE OXIDE ...
This document describes the residual risk assessment for the Ethylene Oxide Commercial Sterilization source category. For stationary sources, section 112(f) of the Clean Air Act requires EPA to assess risks to human health and the environment following implementation of technology-based control standards. If these technology-based control standards do not provide an ample margin of safety, then EPA is required to promulgate additional standards. This document describes the methodology and results of the residual risk assessment performed for the Ethylene Oxide Commercial Sterilization source category. The results of this analysis will assist EPA in determining whether a residual risk rule for this source category is appropriate.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.
Prediction and Optimization of Residual Stresses on Machined Surface and Sub-Surface in MQL Turning
Ji, Xia; Zou, Pan; Li, Beizhi; Rajora, Manik; Shao, Yamin; Liang, Steven Y.
Residual stress in the machined surface and subsurface is affected by the material, machining conditions, and tool geometry, and can affect component life and service quality significantly. Empirical or numerical experiments are commonly used to determine residual stresses, but they are very expensive. The use of minimum quantity lubrication (MQL) has increased in recent years to reduce cost and tool/part handling effort, but its effect on machined-part residual stress, although important, has not been explored. This paper presents a hybrid neural network, trained with Simulated Annealing (SA) and the Levenberg-Marquardt algorithm (LM), that predicts the residual stresses in the cutting and radial directions on the surface and within the workpiece after the MQL face turning process. Once the ANN has been trained successfully, an optimization procedure using a Genetic Algorithm (GA) is applied to find the cutting conditions that minimize the surface tensile residual stresses and maximize the compressive residual stresses within the workpiece. The optimization results show that the use of MQL decreases the surface tensile residual stresses and increases the compressive residual stresses within the workpiece.
International Nuclear Information System (INIS)
Godoy, William F.; Liu Xu
2012-01-01
The present study introduces a parallel Jacobian-free Newton-Krylov (JFNK) generalized minimal residual (GMRES) solution for the discretized radiative transfer equation (RTE) in 3D absorbing, emitting, and scattering media. For the angular and spatial discretization of the RTE, the discrete ordinates method (DOM) and the finite volume method (FVM) including flux limiters are employed, respectively. Instead of forming and storing a large Jacobian matrix, JFNK methods allow for large memory savings, as the required Jacobian-vector products are approximated by semi-exact and numerical formulations, for which convergence and computational times are presented. Parallelization of the GMRES solution is introduced in a combined shared-memory/distributed-memory formulation that takes advantage of the fact that only large vector arrays remain in the JFNK process. Results are presented for 3D test cases, including a simple homogeneous, isotropic medium and a more complex non-homogeneous, non-isothermal, absorbing-emitting and anisotropically scattering medium with collimated intensities. Additionally, the convergence and stability of Gram-Schmidt and Householder orthogonalizations for the Arnoldi process in the parallel GMRES algorithms are discussed and analyzed. Overall, the introduction of JFNK methods results in a parallel, scalable (tested up to 2048 processors), and memory-affordable solution to 3D radiative transfer problems without compromising the accuracy and convergence of a Newton-like solution.
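The "Jacobian-free" part hinges on replacing the product J(u)·v with a directional finite difference of the residual function, so GMRES never needs the matrix itself. A minimal sketch on a hypothetical two-equation nonlinear system (not the RTE discretization):

```python
import numpy as np

def F(u):
    # a hypothetical 2-equation nonlinear residual; any smooth F works
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + np.sin(u[1])])

def jac_vec(F, u, v, eps=1e-7):
    """Matrix-free approximation of the Jacobian-vector product J(u) @ v.

    This forward difference is the ingredient that lets GMRES run inside
    a Newton iteration without ever forming or storing J."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.3, -0.4])
J = np.array([[2 * u[0], 1.0],          # analytic Jacobian, for comparison only
              [1.0, np.cos(u[1])]])
assert np.allclose(jac_vec(F, u, v), J @ v, atol=1e-5)
```

In a full JFNK solver this product would be wrapped as the linear operator handed to GMRES at each Newton step, which is why only vector storage remains in the process.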
Nitrogen availability of biogas residues
Energy Technology Data Exchange (ETDEWEB)
El-Sayed Fouda, Sara
2011-09-07
The objectives of this study were to characterize biogas residues either unseparated or separated into a liquid and a solid phase from the fermentation of different substrates with respect to their N and C content. In addition, short and long term effects of the application of these biogas residues on the N availability and N utilization by ryegrass was investigated. It is concluded that unseparated or liquid separated biogas residues provide N at least corresponding to their ammonium content and that after the first fertilizer application the C{sub org}:N{sub org} ratio of the biogas residues was a crucial factor for the N availability. After long term application, the organic N accumulated in the soil leads to an increased release of N.
Residual stress analysis: a review
International Nuclear Information System (INIS)
Finlayson, T.R.
1983-01-01
The techniques which are or could be employed to measure residual stresses are outlined. They include X-ray and neutron diffraction. Comments are made on the reliability and accuracy to be expected from particular techniques.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Directory of Open Access Journals (Sweden)
Júlio C. U. Coelho
Full Text Available Our objective is to report three patients with recurrent severe upper abdominal pain secondary to residual gallbladder. All patients had been subjected to cholecystectomy from 1 to 20 years before. The diagnosis was established after several episodes of severe upper abdominal pain by imaging exams: ultrasonography, tomography, or endoscopic retrograde cholangiography. Removal of the residual gallbladder led to complete resolution of symptoms. Partial removal of the gallbladder is a very rare cause of postcholecystectomy symptoms.
Seismic noise attenuation using an online subspace tracking algorithm
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank-based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear-algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD)-based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise and by halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
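The TSVD baseline that the online method is compared against amounts to keeping only the leading singular components of the data matrix and treating the remainder as noise. A small sketch with synthetic data (not seismic records) illustrates the rank-truncation idea:

```python
import numpy as np

def tsvd_denoise(d, rank):
    """Project the data matrix onto its leading `rank` singular components."""
    u, s, vt = np.linalg.svd(d, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
clean = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 80))  # rank-2 signal
noisy = clean + 0.3 * rng.standard_normal((60, 80))                  # add white noise
denoised = tsvd_denoise(noisy, rank=2)
```

An online tracker replaces the full SVD above with cheap incremental updates as new data columns arrive, which is where the claimed cost saving comes from.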
Marine Tar Residues: a Review.
Warnock, April M; Hagen, Scott C; Passeri, Davina L
Marine tar residues originate from natural and anthropogenic oil releases into the ocean environment and are formed after liquid petroleum is transformed by weathering, sedimentation, and other processes. Tar balls, tar mats, and tar patties are common examples of marine tar residues and can range in size from millimeters in diameter (tar balls) to several meters in length and width (tar mats). These residues can remain in the ocean environment indefinitely, decomposing or becoming buried in the sea floor. However, in many cases, they are transported ashore via currents and waves where they pose a concern to coastal recreation activities, the seafood industry and may have negative effects on wildlife. This review summarizes the current state of knowledge on marine tar residue formation, transport, degradation, and distribution. Methods of detection and removal of marine tar residues and their possible ecological effects are discussed, in addition to topics of marine tar research that warrant further investigation. Emphasis is placed on benthic tar residues, with a focus on the remnants of the Deepwater Horizon oil spill in particular, which are still affecting the northern Gulf of Mexico shores years after the leaking submarine well was capped.
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
An improved algorithm for MFR fragment assembly
International Nuclear Information System (INIS)
Kontaxis, Georg
2012-01-01
A method for generating protein backbone models from backbone only NMR data is presented, which is based on molecular fragment replacement (MFR). In a first step, the PDB database is mined for homologous peptide fragments using experimental backbone-only data i.e. backbone chemical shifts (CS) and residual dipolar couplings (RDC). Second, this fragment library is refined against the experimental restraints. Finally, the fragments are assembled into a protein backbone fold using a rigid body docking algorithm using the RDCs as restraints. For improved performance, backbone nuclear Overhauser effects (NOEs) may be included at that stage. Compared to previous implementations of MFR-derived structure determination protocols this model-building algorithm offers improved stability and reliability. Furthermore, relative to CS-ROSETTA based methods, it provides faster performance and straightforward implementation with the option to easily include further types of restraints and additional energy terms.
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been used successfully for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
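For context, the standard firefly algorithm that MoFA modifies moves each dimmer firefly toward every brighter one, with attraction damped by distance plus a shrinking random step. A minimal sketch of the standard algorithm (parameter values are illustrative, not those of the paper):

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, alpha=0.2, beta0=1.0,
                     gamma=0.01, seed=0):
    """Standard firefly algorithm minimizing f on the box [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        fit = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # j is brighter (lower cost): i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # distance-damped attraction
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    fit[i] = f(xs[i])
        alpha *= 0.97  # gradually damp the random walk
    best = min(xs, key=f)
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)
best, val = firefly_minimize(sphere, dim=2)
```

The brightest firefly never moves, so the best cost found is non-increasing; modified variants such as MoFA typically change the attraction or randomization terms.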
Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.
2016-01-01
In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...
Evaluation of residue-residue contact prediction in CASP10
Monastyrskyy, Bohdan
2013-08-31
We present the results of the assessment of the intramolecular residue-residue contact predictions from 26 prediction groups participating in the 10th round of the CASP experiment. The most recently developed direct coupling analysis methods did not take part in the experiment likely because they require a very deep sequence alignment not available for any of the 114 CASP10 targets. The performance of contact prediction methods was evaluated with the measures used in previous CASPs (i.e., prediction accuracy and the difference between the distribution of the predicted contacts and that of all pairs of residues in the target protein), as well as new measures, such as the Matthews correlation coefficient, the area under the precision-recall curve and the ranks of the first correctly and incorrectly predicted contact. We also evaluated the ability to detect interdomain contacts and tested whether the difficulty of predicting contacts depends upon the protein length and the depth of the family sequence alignment. The analyses were carried out on the target domains for which structural homologs did not exist or were difficult to identify. The evaluation was performed for all types of contacts (short, medium, and long-range), with emphasis placed on long-range contacts, i.e. those involving residues separated by at least 24 residues along the sequence. The assessment suggests that the best CASP10 contact prediction methods perform at approximately the same level, and comparably to those participating in CASP9.
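Among the newly introduced measures, the Matthews correlation coefficient is straightforward to compute from the predicted and true contact sets. A toy sketch with hypothetical residue pairs (not CASP data):

```python
import math

def contact_mcc(predicted, actual, n_pairs):
    """Matthews correlation coefficient for a binary contact prediction.

    `predicted` and `actual` are sets of residue pairs drawn from a pool
    of n_pairs candidate pairs."""
    tp = len(predicted & actual)
    fp = len(predicted - actual)
    fn = len(actual - predicted)
    tn = n_pairs - tp - fp - fn
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# toy data: 10 candidate pairs, 4 true contacts, 3 of 5 predictions correct
actual = {(1, 25), (2, 30), (5, 40), (7, 33)}
predicted = {(1, 25), (2, 30), (5, 40), (9, 50), (3, 28)}
score = contact_mcc(predicted, actual, n_pairs=10)  # ~0.408
```

Unlike raw accuracy, the MCC accounts for true negatives, which dominate in contact prediction since most residue pairs are not in contact.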
Principal curve algorithms for partitioning high-dimensional data spaces.
Zhang, Junping; Wang, Xiaodan; Kruger, Uwe; Wang, Fei-Yue
2011-03-01
Most partitioning algorithms iteratively partition a space into cells that contain underlying linear or nonlinear structures using linear partitioning strategies. The compactness of each cell depends on how well the (locally) linear partitioning strategy approximates the intrinsic structure. To partition a compact structure for complex data in a nonlinear context, this paper proposes a nonlinear partition strategy. This is a principal curve tree (PC-tree), which is implemented iteratively. Given that a PC passes through the middle of the data distribution, it allows for partitioning based on the arc length of the PC. To enhance the partitioning of a given space, a residual version of the PC-tree algorithm is developed, denoted here as the principal component analysis tree (PCR-tree) algorithm. Because of its residual property, the PCR-tree can yield the intrinsic dimension of high-dimensional data. Comparisons presented in this paper confirm that the proposed PC-tree and PCR-tree approaches show a better performance than several other competing partitioning algorithms in terms of vector quantization error and nearest neighbor search. The comparison also shows that the proposed algorithms outperform competing linear methods in total average coverage which measures the nonlinear compactness of partitioning algorithms.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode; (ii) how close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria; (iii) how can we design new algorithms specifically for parallel systems; (iv) for multi-processor systems, how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)
Static Analysis Numerical Algorithms
2016-04-01
STATIC ANALYSIS OF NUMERICAL ALGORITHMS. Kestrel Technology, LLC. Final technical report, April 2016; approved for public release, distribution ... Dates covered: Nov 2013 – Nov 2015. Contract number: FA8750-14-C-... ...and Honeywell Aerospace Advanced Technology to combine model-based development of complex avionics control software with static analysis of the
Improved Chaff Solution Algorithm
2009-03-01
As part of the Technology Demonstration Program (TDP) on shipboard integration of sensors and weapon systems (SISWS), an algorithm was developed to automatically determine
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible directions
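One of the techniques listed above, the penalty function method, can be sketched briefly: replace a constrained problem by an unconstrained one whose objective adds a penalty that grows with the constraint violation, then increase the penalty weight. The toy problem, step size, and iteration count below are invented for illustration, not taken from the book.

```python
# Quadratic penalty method sketch: minimize f(x) = (x - 3)^2 subject to
# x <= 1 by descending on f(x) + mu * max(0, x - 1)^2 for increasing mu.
# As mu grows, the unconstrained minimizer (3 + mu) / (1 + mu) approaches
# the constrained optimum x = 1.

def penalized_min(mu, x=0.0, lr=0.005, steps=5000):
    for _ in range(steps):
        viol = max(0.0, x - 1.0)                    # constraint violation
        grad = 2.0 * (x - 3.0) + 2.0 * mu * viol    # gradient of f + penalty
        x -= lr * grad                              # plain gradient descent
    return x

for mu in (1.0, 10.0, 100.0):
    print(round(penalized_min(mu), 3))
```

With mu = 1, 10, 100 the minimizer moves from 2.0 toward the constrained optimum at 1, matching the closed-form value (3 + mu)/(1 + mu).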
Image Segmentation Algorithms Overview
Yuheng, Song; Hao, Yan
2017-01-01
The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a predi...
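The clustering-based family mentioned above can be illustrated with a toy sketch: a one-dimensional k-means (k = 2) on pixel intensities that thresholds a tiny grayscale "image" into foreground and background. The pixel values are invented for the example.

```python
# Segmentation by clustering, reduced to its simplest form: cluster the
# intensity values into two groups with 1-D k-means and threshold midway
# between the two centroids.

def kmeans_threshold(pixels, iters=20):
    c0, c1 = min(pixels), max(pixels)       # initialize the two centroids
    for _ in range(iters):
        g0 = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in pixels if abs(p - c0) > abs(p - c1)]
        if not g0 or not g1:                # degenerate case: all identical
            break
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return (c0 + c1) / 2                    # threshold between the clusters

image = [12, 15, 10, 200, 210, 14, 205, 11, 198, 13]
t = kmeans_threshold(image)
mask = [1 if p > t else 0 for p in image]   # 1 = foreground, 0 = background
print(mask)
```

On this data the bright pixels (around 200) separate cleanly from the dark ones (around 12).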
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Predicting the concentration of residual methanol in industrial formalin using machine learning
Heidkamp, William
2016-01-01
In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB computational environment, supplemented with the Statistics and Machine Learning Toolbox from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian Process Regression algorithm was found to pr...
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available As social services develop and living standards rise, there is an urgent need for a positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms, and security of goods. Using RFID for localization is a new direction pursued by many research institutions and scholars: RFID positioning systems offer stability, small error, and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, a higher-accuracy network-based location method is presented; finally, the LANDMARC algorithm is described. This shows that advanced, efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies pointed out, and requirements for follow-up study put forward, with a vision of better future RFID positioning technology.
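The LANDMARC algorithm mentioned above locates a tracked tag by comparing its signal-strength (RSSI) vector across several readers with those of reference tags at known positions, then averaging the k nearest references weighted by signal-space distance. A minimal sketch, with invented reader layout and RSSI values:

```python
# LANDMARC-style k-nearest-reference-tags location sketch. All positions
# and RSSI readings below are made up for illustration.

def landmarc_locate(target_rssi, references, k=2):
    """references: list of ((x, y), rssi_vector) for reference tags."""
    scored = []
    for pos, rssi in references:
        # Euclidean distance in signal space between target and reference.
        e = sum((t - r) ** 2 for t, r in zip(target_rssi, rssi)) ** 0.5
        scored.append((e, pos))
    scored.sort(key=lambda s: s[0])
    nearest = scored[:k]
    # Weight each of the k nearest references inversely to its distance.
    weights = [1.0 / (e * e + 1e-9) for e, _ in nearest]
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return (x, y)

refs = [((0.0, 0.0), [-40, -60, -70]),
        ((0.0, 4.0), [-60, -40, -70]),
        ((4.0, 0.0), [-70, -60, -40]),
        ((4.0, 4.0), [-70, -50, -50])]
pos = landmarc_locate([-50, -50, -65], refs, k=2)
print(pos)
```

Here the target's readings sit midway between the two left-hand reference tags, so the estimate lands between them.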
Residual stresses around Vickers indents
International Nuclear Information System (INIS)
Pajares, A.; Guiberteau, F.; Steinbrech, R.W.
1995-01-01
The residual stresses generated by Vickers indentation in brittle materials and their changes due to annealing and surface removal were studied in 4 mol% yttria partially stabilized zirconia (4Y-PSZ). Three experimental methods to gain information about the residual stress field were applied: (i) crack profile measurements based on serial sectioning, (ii) controlled crack propagation in post-indentation bending tests and (iii) double indentation tests with smaller secondary indents located around a larger primary impression. Three zones of different residual stress behavior are deduced from the experiments. Beneath the impression a crack-free spherical zone of high hydrostatic stresses exists. This core zone is followed by a transition regime where indentation cracks develop but still experience hydrostatic stresses. Finally, in an outward third zone, the crack contour is entirely governed by the tensile residual stress intensity (elastically deformed region). Annealing and surface removal reduce this crack driving stress intensity. The specific changes of the residual stresses due to the post-indentation treatments are described and discussed in detail for the three zones
Actinide recovery from pyrochemical residues
International Nuclear Information System (INIS)
Avens, L.R.; Clifton, D.G.; Vigil, A.R.
1984-01-01
A new process for recovery of plutonium and americium from pyrochemical waste has been demonstrated. It is based on chloride solution anion exchange at low acidity, which eliminates corrosive HCl fumes. Developmental experiments of the process flowsheet concentrated on molten salt extraction (MSE) residues and gave >95% plutonium and >90% americium recovery. The recovered plutonium contained ...; plutonium is sorbed as PuCl6(2-) from high-chloride, low-acid solution. Americium and other metals are washed from the ion exchange column with 1N HNO3-4.8M NaCl. The plutonium is recovered, after elution, via hydroxide precipitation, while the americium is recovered via NaHCO3 precipitation. All filtrates from the process are discardable as low-level contaminated waste. Production-scale experiments are now in progress for MSE residues. Flow sheets for actinide recovery from electrorefining and direct oxide reduction residues are presented and discussed
Actinide recovery from pyrochemical residues
International Nuclear Information System (INIS)
Avens, L.R.; Clifton, D.G.; Vigil, A.R.
1985-05-01
We demonstrated a new process for recovering plutonium and americium from pyrochemical waste. The method is based on chloride solution anion exchange at low acidity, which eliminates corrosive HCl fumes. Developmental experiments of the process flow chart concentrated on molten salt extraction (MSE) residues and gave >95% plutonium and >90% americium recovery. The recovered plutonium contained ...; plutonium is sorbed as PuCl6(2-) from high-chloride, low-acid solution. Americium and other metals are washed from the ion exchange column with 1N HNO3-4.8M NaCl. After elution, plutonium is recovered by hydroxide precipitation, and americium is recovered by NaHCO3 precipitation. All filtrates from the process can be discarded as low-level contaminated waste. Production-scale experiments are in progress for MSE residues. Flow charts for actinide recovery from electro-refining and direct oxide reduction residues are presented and discussed
Alternatives to crop residues for soil amendment
Powell, J.M.; Unger, P.W.
1997-01-01
Metadata only record In semiarid agroecosystems, crop residues can provide important benefits of soil and water conservation, nutrient cycling, and improved subsequent crop yields. However, there are frequently multiple competing uses for residues, including animal forage, fuel, and construction material. This chapter discusses the various uses of crop residues and examines alternative soil amendments when crop residues cannot be left on the soil.
Leaching From Biomass Gasification Residues
DEFF Research Database (Denmark)
Allegrini, Elisa; Boldrin, Alessio; Polletini, A.
2011-01-01
The aim of the present work is to attain an overall characterization of solid residues from biomass gasification. Besides the determination of chemical and physical properties, the work was focused on the study of leaching behaviour. Compliance and pH-dependence leaching tests coupled with geoche...
Carbaryl residues in maize products
International Nuclear Information System (INIS)
Zayed, S.M.A.D.; Mansour, S.A.; Mostafa, I.Y.; Hassan, A.
1976-01-01
The 14C-labelled insecticide carbaryl was synthesized from [1-14C]-1-naphthol at a specific activity of 3.18 mCi g⁻¹. Maize plants were treated with the labelled insecticide under simulated conditions of agricultural practice. Mature plants were harvested and studied for distribution of total residues in untreated grains as popularly roasted and consumed, and in the corn oil and corn germ products. Total residues found under these conditions in the respective products were 0.2, 0.1, 0.45 and 0.16 ppm. (author)
Combinatorial construction of toric residues
Khetan, Amit; Soprounov, Ivan
2004-01-01
The toric residue is a map depending on n+1 semi-ample divisors on a complete toric variety of dimension n. It appears in a variety of contexts such as sparse polynomial systems, mirror symmetry, and GKZ hypergeometric functions. In this paper we investigate the problem of finding an explicit element whose toric residue is equal to one. Such an element is shown to exist if and only if the associated polytopes are essential. We reduce the problem to finding a collection of partitions of the la...
Assessment of heterogeneity of residual variances using changepoint techniques
Directory of Open Access Journals (Sweden)
Toro Miguel A
2000-07-01
Full Text Available Abstract Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100,744 test-day records of 10,869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between residual variance and the number of days in milk t was postulated: the residual variance at time t in lactation phase i (i = 1, 2, 3) was modeled as a function of t with a phase-specific parameter λi. A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20,000 iterations, 40,000 samples were drawn to estimate posterior features. The posterior modes of T1, T2, λ1, λ2, λ3, and three further parameters were 53.2 and 248.2 days; 0.575, -0.406, 0.797 and 0.702; and 34.63 and 0.0455 kg², respectively. The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg² at days in milk 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions, and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.
A fast algorithm for 3D azimuthally anisotropic velocity scan
Hu, Jingwei
2014-11-11
© 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N³ log N) as opposed to O(N⁵) of the straightforward velocity scan, with N being the representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.
A Parallel Butterfly Algorithm
Poulson, Jack
2014-02-04
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of an integral transform (equation omitted) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r² N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most the stated bound (equation omitted) using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), with Φ(x, y) a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann's form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled "extimate" writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation is attempted of the defining aspects of such a medium by drawing a trajectory across a number of sound pieces. I call the operation of exchange between form and medium reconfiguration; it is indicated by this trajectory.
Weld Residual Stress in Corner Boxing Joints
Kazuyoshi, Matsuoka; Tokuharu, Yoshii; Ship Research Institute, Ministry of Transport; Ship Research Institute, Ministry of Transport
1998-01-01
Fatigue damage often occurs in corner boxing welded joints because of stress concentration and residual stress. The hot spot stress approach is applicable to stress concentration. However, the number of suitable methods for estimating residual stress in welded joints is limited. The purpose of this paper is to clarify the residual stress in corner boxing joints. The method of estimating residual stresses based on the inherent stress technique is presented. Residual stress measurements are per...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
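The abstract does not give the exact formula for the quote volatility ratio, but the idea of measuring the rate of oscillation of the best quotes can be sketched. The reversal-counting definition and normalization below are assumptions made for illustration, not the paper's actual measure.

```python
# Hedged sketch of a quote-oscillation measure: count direction reversals
# of the best-ask series over a short window and normalize by the number
# of quote updates. A jittery, rapidly flipping quote scores high; a calm
# or trending quote scores low.

def quote_oscillation_ratio(best_asks):
    if len(best_asks) < 3:
        return 0.0
    reversals, prev_move = 0, 0
    for a, b in zip(best_asks, best_asks[1:]):
        move = (b > a) - (b < a)          # +1 up, -1 down, 0 unchanged
        if move != 0:
            if prev_move != 0 and move != prev_move:
                reversals += 1            # quote flipped direction
            prev_move = move
    return reversals / (len(best_asks) - 1)

calm = [100.0, 100.0, 100.01, 100.01, 100.02]
jittery = [100.0, 100.01, 100.0, 100.01, 100.0]
print(quote_oscillation_ratio(calm), quote_oscillation_ratio(jittery))
```

The trending series scores 0, while the oscillating series scores 0.75, which is the kind of contrast the paper's ratio is designed to pick up.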
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs. combinatorial problems, and problems with uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Solidification process for sludge residue
International Nuclear Information System (INIS)
Pearce, K.L.
1998-01-01
This report investigates the solidification process used at 100-N Basin to solidify the N Basin sediment and assesses the N Basin process for application to the K Basin sludge residue material. This report also includes a discussion of a solidification process for stabilizing filters. The solidified matrix must be compatible with the Environmental Remediation Disposal Facility acceptance criteria
Machine Arithmetic in Residual Classes,
1981-04-03
...remainder/residue, as this ensues from the determination of the system. It can be realized in the presence of the arithmetic unit, which works in the system of modules Nj. Page 417. Proof. The proof ensues directly from the theorem of Gauss. Actually, since according to the condition (pj, qj) = ..., then
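The heavily garbled excerpt above concerns arithmetic in residual classes: a residue number system in which an integer is represented by its remainders modulo pairwise coprime moduli, arithmetic is done component-wise, and the result is recovered via the theorem of Gauss (the Chinese Remainder Theorem). A minimal sketch with illustrative moduli:

```python
# Residue number system sketch. The moduli (3, 5, 7) are an illustrative
# choice of pairwise coprime "modules Nj"; they represent integers in the
# range 0..104 (3 * 5 * 7 - 1).

MODULI = (3, 5, 7)

def encode(x):
    return tuple(x % m for m in MODULI)

def mul(a, b):
    # Multiplication is carried out independently in each residue channel.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def decode(residues):
    # Chinese Remainder Theorem reconstruction (requires Python 3.8+ for
    # the three-argument pow with a negative exponent, i.e. modular inverse).
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

a, b = encode(8), encode(9)
print(decode(mul(a, b)))   # 8 * 9 = 72, which lies within the range 0..104
```

The appeal for machine arithmetic is that the per-modulus operations are small, independent, and can run in parallel without carries between channels.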
Residual stress in polyethylene pipes
Czech Academy of Sciences Publication Activity Database
Poduška, Jan; Hutař, Pavel; Kučera, J.; Frank, A.; Sadílek, J.; Pinter, G.; Náhlík, Luboš
2016-01-01
Roč. 54, SEP (2016), s. 288-295 ISSN 0142-9418 R&D Projects: GA MŠk LM2015069; GA MŠk(CZ) LQ1601 Institutional support: RVO:68081723 Keywords : polyethylene pipe * residual stress * ring slitting method * lifetime estimation Subject RIV: JL - Materials Fatigue, Friction Mechanics Impact factor: 2.464, year: 2016
Managing woodwaste: Yield from residue
Energy Technology Data Exchange (ETDEWEB)
Nielson, E. [LNS Services, Inc., North Vancouver, British Columbia (Canada); Rayner, S. [Pacific Waste Energy Inc., Burnaby, British Columbia (Canada)
1993-12-31
Historically, the majority of sawmill waste has been burned or buried for the sole purpose of disposal. In most jurisdictions, environmental legislation will prohibit, or render uneconomic, these practices. Many reports have been prepared to describe the forest industry's residue and its environmental effect; although these help those looking for industry-wide or regional solutions, such as electricity generation, they have limited value for the mill manager, who has the on-hands responsibility for generation and disposal of the waste. If the mill manager can evaluate waste streams and break them down into their usable components, he can find niche market solutions for portions of the plant residue and redirect waste to poor/no-return, rather than disposal-cost, end uses. In the modern mill, residue is collected at the individual machine centre by waste conveyors that combine and mix sawdust, shavings, bark, etc. and send the result to the hog-fuel pile. The mill waste system should be analyzed to determine the measures that can improve the quality of residues and determine the volumes of any particular category before the mixing, mentioned above, occurs. After this analysis, the mill may find a niche market for a portion of its woodwaste.
Leptogenesis and residual CP symmetry
International Nuclear Information System (INIS)
Chen, Peng; Ding, Gui-Jun; King, Stephen F.
2016-01-01
We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.
Solow Residuals Without Capital Stocks
DEFF Research Database (Denmark)
Burda, Michael C.; Severgnini, Battista
2014-01-01
We use synthetic data generated by a prototypical stochastic growth model to assess the accuracy of the Solow residual (Solow, 1957) as a measure of total factor productivity (TFP) growth when the capital stock in use is measured with error. We propose two alternative measurements based on current...
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in text and linking it to an entity in a knowledge base (for example, DBpedia). There is currently a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining both is proposed here, following stated assumptions about the interrelations of named entities within a sentence and in general. In graph-based approaches, one must identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built from the knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Machine-learning algorithms alone cannot provide an independent solution, because the training datasets relevant to the NEL task are small; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of the various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally: a test dataset was independently generated, and on its basis the performance of the model using the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which motivates further work in this direction. The main directions of development are proposed in order to increase the accuracy of the system and its productivity.
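The graph-based side of the approach described above can be illustrated with a toy sketch: among candidate knowledge-base entities for an ambiguous mention, prefer the one closest in graph distance to an entity already identified in the context. The mini knowledge graph and entity names below are invented for the example.

```python
# Toy graph-based entity linking: disambiguate a mention by BFS distance
# in a small knowledge graph. Real systems use large KBs (e.g. DBpedia)
# and more elaborate metrics; this only shows the core idea.

from collections import deque

GRAPH = {
    "Paris_France": ["France", "Seine"],
    "Paris_Texas":  ["Texas"],
    "France":       ["Paris_France", "Seine", "Europe"],
    "Seine":        ["Paris_France", "France"],
    "Texas":        ["Paris_Texas", "USA"],
    "USA":          ["Texas"],
    "Europe":       ["France"],
}

def distance(src, dst):
    # Breadth-first search for the shortest path length, inf if unreachable.
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def link(candidates, context_entity):
    # Pick the candidate closest (in the graph) to the context entity.
    return min(candidates, key=lambda c: distance(c, context_entity))

print(link(["Paris_France", "Paris_Texas"], "Seine"))
```

With "Seine" as context the mention "Paris" resolves to the French city; with "USA" as context it would resolve to the Texan one.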
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
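A minimal sketch of the MCL process described above: alternate expansion (squaring the column-stochastic matrix of the graph) and inflation (entrywise power followed by column renormalization). The inflation parameter, iteration count, and added self-loops below are common choices for illustration, not values fixed by the cited work.

```python
# Markov Cluster (MCL) process sketch on a small undirected graph.

def normalize_columns(m):
    n = len(m)
    for j in range(n):
        s = sum(m[i][j] for i in range(n))
        for i in range(n):
            m[i][j] /= s
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mcl_labels(adj, inflation=2.0, iters=30):
    n = len(adj)
    m = [[float(adj[i][j]) + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]                         # add self-loops
    m = normalize_columns(m)
    for _ in range(iters):
        m = matmul(m, m)                            # expansion
        m = [[v ** inflation for v in row] for row in m]
        m = normalize_columns(m)                    # inflation
    # After convergence each column's mass concentrates on its cluster's
    # attractor row; label each node by that row index.
    return [max(range(n), key=lambda i: m[i][j]) for j in range(n)]

# Two triangles joined by a single edge should split into two clusters.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
labels = mcl_labels(adj)
print(labels)
```

Nodes 0-2 receive one label and nodes 3-5 another, i.e. the bridge edge is cut, which is the behavior the MCL process is designed to produce.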
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
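One of the patterns named above, the (inclusive) prefix scan, can be sketched concretely. The Hillis-Steele formulation below does O(log n) rounds, in each of which every element could be updated concurrently; here the rounds are simulated sequentially in plain Python.

```python
# Inclusive prefix scan (Hillis-Steele style). out[i] ends up holding the
# sum of values[0..i]. On a parallel machine, all updates within one round
# run at once; the double-buffered copy makes that data dependence explicit.

def prefix_scan(values):
    out = list(values)
    step = 1
    while step < len(out):
        prev = list(out)                   # snapshot read by the whole round
        for i in range(step, len(out)):
            out[i] = prev[i] + prev[i - step]
        step *= 2
    return out

print(prefix_scan([3, 1, 4, 1, 5]))   # [3, 4, 8, 9, 14]
```

A reduction is the special case where only the final element (the total) is kept.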
Wireless communications algorithmic techniques
Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A
2013-01-01
This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in the MAC layer) of wireless communications systems. It focuses on single-user systems, thus ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated. Comprehensive, wireless-specific guide to algorithmic techniques; provides a detailed analysis of channel equalization and channel coding for wi...
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
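As a concrete, if minimal, illustration of the paradigm described above, here is a tabular Q-learning sketch on a toy 3-state chain. It shows the partial-feedback setting: the learner observes only the reward of the action it actually takes. The environment and all hyperparameters are invented for the example, not taken from the book.

```python
# Tabular Q-learning on a 3-state chain: taking "right" in the last state
# pays reward 1 and restarts; every other transition pays 0.

import random

N_STATES, ACTIONS = 3, (0, 1)            # action 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(s, a):
    if s == N_STATES - 1 and a == 1:
        return 0, 1.0                    # goal reached: reward 1, restart
    return (max(0, s - 1), 0.0) if a == 0 else (min(N_STATES - 1, s + 1), 0.0)

random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]
s = 0
for _ in range(10000):
    if random.random() < EPS:
        a = random.choice(ACTIONS)       # explore
    else:
        a = max(ACTIONS, key=lambda x: q[s][x])   # exploit current estimate
    s2, r = step(s, a)
    # Temporal-difference update toward the observed reward plus the
    # discounted value of the best action in the next state.
    q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
    s = s2

greedy = [q[st].index(max(q[st])) for st in range(N_STATES)]
print(greedy)
```

After training, the greedy policy should prefer "right" in every state, since only the right end of the chain pays out.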
Density of primes in l-th power residues
Indian Academy of Sciences (India)
Given a prime number l, a finite set of integers S = {a1, ..., am} and m l-th roots of unity r_i, i = 1, ..., m, we study the distribution of primes p in Q(ζ_l) such that the l-th residue symbol of a_i with respect to p is r_i, for all i. We find out that this is related to the degree of the extension Q(a_1^{1/l}, ..., a_m^{1/l})/Q. We give an algorithm ...
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
Radioactive material in residues of health services residues
International Nuclear Information System (INIS)
Costa R, A. Jr.; Recio, J.C.
2006-01-01
The work presents the operational actions developed by the regulatory body responsible for controlling the use of radioactive material in Brazil, prompted by the appearance of radioactive material originating from hospitals and clinics with nuclear medicine services. Such material is picked up and transported in trucks dedicated to collecting waste of hospital origin and routed to a health-services waste treatment plant, where it undergoes radiological monitoring before being sent for final disposal in a sanitary landfill in the city of São Paulo, Brazil. The appearance of this radioactive material points to a possible violation of the norms that govern the procedures and practices of that sector in the country. (Author)
RECOVERY OF WHEAT RESIDUE NITROGEN 15 AND RESIDUAL ...
African Journals Online (AJOL)
Therefore 85 kg ha⁻¹ N as labelled ammonium sulfate (9.764% atomic excess) was applied in a three-split application. Fertiliser N recovery by wheat in the first year was 33.1%. At harvest, 64.8% of fertiliser N was found in the 0–80 cm profile as residual fertiliser-derived N; 2.1% of the applied N could not be accounted for ...
Ball, Stanley
1986-01-01
Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)
Ferguson, David L.; Henderson, Peter B.
1987-01-01
Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…
Improved Approximation Algorithm for
Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz
2014-01-01
We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use because they tend to get trapped in these local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
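As a concrete illustration, the replica-exchange step described above can be sketched in a few lines. The double-well potential, temperature ladder, and step sizes below are illustrative toy choices, not the biomolecular setups reviewed in the article.

```python
import math, random

def energy(x):
    # Double-well potential with minima at x = +1 and x = -1, a toy
    # stand-in for a rugged biomolecular energy landscape.
    return (x * x - 1.0) ** 2

def replica_exchange(temps, n_steps=2000, swap_every=10, seed=1):
    rng = random.Random(seed)
    xs = [1.0 for _ in temps]          # one replica per temperature
    for step in range(n_steps):
        # Metropolis update within each replica at its own temperature.
        for i, T in enumerate(temps):
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(trial) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = trial
        # Periodically attempt exchanges between neighbouring temperatures,
        # accepted with probability min(1, exp((1/Ti - 1/Tj)(Ei - Ej))).
        if step % swap_every == 0:
            for i in range(len(temps) - 1):
                d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                    (energy(xs[i]) - energy(xs[i + 1]))
                if d >= 0 or rng.random() < math.exp(d):
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

final = replica_exchange([0.1, 0.5, 1.0, 2.0])
```

The exchange step is what lets the low-temperature replicas escape local minima by borrowing configurations that have crossed barriers at high temperature.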
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
Algorithmic information theory
Grünwald, P.D.; Vitányi, P.M.B.
2008-01-01
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
Indian Academy of Sciences (India)
Introduction to Algorithms: Turtle Graphics. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 1, Issue 9. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
Algorithms for SCC Decomposition
J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)
2008-01-01
We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in
Corral-Corral, Ricardo; Beltrán, Jesús A; Brizuela, Carlos A; Del Rio, Gabriel
2017-10-09
Protein structure and protein function should be related, yet the nature of this relationship remains unresolved. Mapping the residues critical for protein function to protein structure features represents an opportunity to explore this relationship, yet two important limitations have precluded a proper analysis of the structure-function relationship of proteins: (i) the lack of a formal definition of what critical residues are and (ii) the lack of a systematic evaluation of methods and protein structure features. To address this problem, here we introduce an index to quantify the protein-function criticality of a residue based on experimental data, and a strategy aimed at optimizing both descriptors of protein structure (physicochemical and centrality descriptors) and machine learning algorithms, to minimize the error in the classification of critical residues. We observed that both physicochemical and centrality descriptors of residues effectively relate protein structure and protein function, and that physicochemical descriptors better describe critical residues. We also show that critical residues are better classified when residue criticality is considered as a binary attribute (i.e., residues are considered critical or not critical). Using this binary annotation for critical residues, 8 models rendered accurate and non-overlapping classification of critical residues, confirming the multi-factorial character of the structure-function relationship of proteins.
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Energy Technology Data Exchange (ETDEWEB)
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
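The Generalized Minimum Residual method used to solve the Jacobian linear system in each Newton step can be sketched as follows. This is a minimal unrestarted, unpreconditioned GMRES for a tiny dense system, not the preconditioned Newton-Krylov solver of the paper; the test matrix is an invented example.

```python
import numpy as np

def gmres(A, b, tol=1e-10, max_iter=50):
    # Minimal GMRES via the Arnoldi process with modified Gram-Schmidt;
    # the small Hessenberg least-squares problem is solved directly for
    # clarity rather than with the usual Givens rotations.
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = b / beta                       # x0 = 0, so r0 = b
    for k in range(max_iter):
        v = A @ Q[:, k]
        for j in range(k + 1):               # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # Minimize || beta*e1 - H y || over the current Krylov subspace.
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1
        if H[k + 1, k] < 1e-14:              # happy breakdown: exact solve
            return x, k + 1
        Q[:, k + 1] = v / H[k + 1, k]
    return x, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # invented 2x2 test system
b = np.array([1.0, 2.0])
x, iters = gmres(A, b)
```

For a 2x2 system the Krylov subspace spans the whole space after two steps, so the iteration terminates with the exact solution.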
MODEL FOR THE CORRECTION OF THE SPECIFIC GRAVITY OF BIODIESEL FROM RESIDUAL OIL
Directory of Open Access Journals (Sweden)
Tatiana Aparecida Rosa da Silva
2013-06-01
Full Text Available Biodiesel is an important fuel with economic, social and environmental benefits. The production cost of biodiesel can be significantly lowered if the raw material is replaced by an alternative material such as residual oil. In this study, we examined the variation of specific gravity with increasing temperature for diesel and for biodiesel from residual oil obtained by homogeneous basic catalysis. All properties analyzed for the biodiesel are within the Brazilian specification. The determination of the correction algorithm for specific gravity as a function of temperature is also presented; the slopes of the lines for diesel fuel, methylic biodiesel (BMR) and ethylic biodiesel (BER) from residual oil were -0.7089, -0.7290 and -0.7277, respectively. This demonstrates that the model differs between chemically different fuels, such as diesel and biodiesel from different sources, indicating the importance of determining the specific algorithm for converting volume to the reference temperature.
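A linear specific-gravity correction of the kind described can be sketched as follows. The slopes are the ones reported above (taken here as kg/m3 per degree Celsius); the reference temperature of 20 degrees C and the base densities used in the example are illustrative assumptions, not values from the study.

```python
# Slopes (assumed kg/m^3 per deg C) as reported for each fuel. The
# reference densities below are illustrative, not data from the study.
SLOPES = {"diesel": -0.7089, "BMR": -0.7290, "BER": -0.7277}

def density_at(fuel, rho_ref, temp_c, ref_c=20.0):
    """Density corrected linearly from the reference temperature ref_c
    to the measurement temperature temp_c."""
    return rho_ref + SLOPES[fuel] * (temp_c - ref_c)

def volume_at_ref(fuel, volume, rho_ref, temp_c, ref_c=20.0):
    """Convert a volume measured at temp_c to the equivalent volume at
    ref_c, assuming conservation of mass (V_ref = V * rho(T) / rho_ref)."""
    return volume * density_at(fuel, rho_ref, temp_c, ref_c) / rho_ref

# Density of methylic biodiesel at 30 C, assuming 880 kg/m^3 at 20 C.
rho = density_at("BMR", 880.0, 30.0)
```

The same slope table drives both the density correction and the volume conversion to the reference temperature mentioned in the abstract.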
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Mitrinović, Dragoslav S
1993-01-01
Volume 1, i.e. the monograph The Cauchy Method of Residues - Theory and Applications published by D. Reidel Publishing Company in 1984, is the only book that covers all known applications of the calculus of residues. They range from the theory of equations, theory of numbers, matrix analysis, evaluation of real definite integrals, summation of finite and infinite series, expansions of functions into infinite series and products, ordinary and partial differential equations, mathematical and theoretical physics, to the calculus of finite differences and difference equations. The appearance of Volume 1 was acknowledged by the mathematical community. Favourable reviews and many private communications encouraged the authors to continue their work, the result being the present book, Volume 2, a sequel to Volume 1. We mention that Volume 1 is a revised, extended and updated translation of the book Cauchyjev račun ostataka sa primenama published in Serbian by Naučna knjiga, Belgrade in 1978, whereas the greater part ...
De Zan, M M; Gil García, M D; Culzoni, M J; Siano, R G; Goicoechea, H C; Martínez Galera, M
2008-02-01
The effect of piecewise direct standardization (PDS) and baseline correction approaches was evaluated on the performance of the multivariate curve resolution (MCR-ALS) algorithm for the resolution of three-way data sets from liquid chromatography with diode-array detection (LC-DAD). First, eight tetracyclines (tetracycline, oxytetracycline, chlorotetracycline, demeclocycline, methacycline, doxycycline, meclocycline and minocycline) were isolated from 250 mL effluent wastewater samples by solid-phase extraction (SPE) with Oasis MAX 500 mg/6 mL cartridges and then separated on an Aquasil C18 150 mm x 4.6 mm (5 microm particle size) column by LC and detected by DAD. Previous experiments, carried out with Milli-Q water samples, showed a considerable loss of the most polar analytes (minocycline, oxytetracycline and tetracycline) due to breakthrough. PDS was applied to overcome this important drawback. Conversion of chromatograms obtained from standards prepared in solvent was performed, obtaining a high correlation with those corresponding to the real situation (r2 = 0.98). Although the enrichment and clean-up steps were carefully optimized, the sample matrix caused a large baseline drift, and additive interferences were also present at the retention times of the analytes. These problems were solved with the baseline correction method proposed by Eilers. MCR-ALS was applied to the corrected and uncorrected three-way data sets to obtain spectral and chromatographic profiles of each tetracycline, as well as those corresponding to the co-eluting interferences. The complexity of the calibration model built from uncorrected data sets was higher, as expected, and the quality of the spectral and chromatographic profiles was worse.
Calcination/dissolution residue treatment
International Nuclear Information System (INIS)
Knight, R.C.; Creed, R.F.; Patello, G.K.; Hollenberg, G.W.; Buehler, M.F.; O'Rourke, S.M.; Visnapuu, A.; McLaughlin, D.F.
1994-09-01
Currently, high-level wastes are stored underground in steel-lined tanks at the Hanford Site. Current plans call for the chemical pretreatment of these wastes before their immobilization in stable glass waste forms. One candidate pretreatment approach, calcination/dissolution, performs an alkaline fusion of the waste and creates a high-level/low-level partition based on the aqueous solubilities of the components of the product calcine. Literature and laboratory studies were conducted with the goal of finding a residue treatment technology that would decrease the quantity of high-level waste glass required following calcination/dissolution waste processing. Four elements, Fe, Ni, Bi, and U, postulated to be present in the high-level residue fraction, were identified as being key to the quantity of high-level glass formed. Laboratory tests of the candidate technologies with simulant high-level residues showed reductive roasting followed by carbonyl volatilization to be successful in removing Fe, Ni, and Bi. Subsequent bench-scale tests on residues from calcination/dissolution processing of genuine Hanford Site tank waste showed Fe was separated with radioelement decontamination factors of 70 to 1,000 with respect to total alpha activity. Thermodynamic analyses of the calcination of five typical Hanford Site tank waste compositions were also performed. The analyses showed sodium hydroxide to be the sole molten component in the waste calcine and emphasized the requirement for waste blending if fluid calcines are to be achieved. Other calcine phases identified in the thermodynamic analysis indicate the significant thermal reconstitution accomplished during calcination.
A MEDLINE categorization algorithm
Directory of Open Access Journals (Sweden)
Gehanno Jean-Francois
2006-02-01
Full Text Available Abstract Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources
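The metaterm-inference step of the MCA can be sketched as follows; the term-to-metaterm links below are invented examples standing in for the semantic links manually selected by the medical librarians.

```python
# Illustrative fragments of the semantic links between MeSH terms /
# subheadings and CISMeF metaterms; these mappings are invented
# examples, not the curated links of the actual CISMeF terminology.
TERM_TO_METATERMS = {
    "Tetracyclines": {"pharmacology"},
    "Information Storage and Retrieval": {"information science"},
    "Decision Support Techniques": {"medical informatics"},
}
SUBHEADING_TO_METATERMS = {
    "organization & administration": {"organization and administration"},
}

def categorize(records):
    """Rank metaterms for a set of indexed articles by how many articles
    they cover, in decreasing order of importance (ties alphabetical).
    Each record is a (mesh_terms, subheadings) pair."""
    counts = {}
    for terms, subheadings in records:
        metaterms = set()
        for t in terms:                      # infer metaterms from terms
            metaterms |= TERM_TO_METATERMS.get(t, set())
        for s in subheadings:                # ... and from subheadings
            metaterms |= SUBHEADING_TO_METATERMS.get(s, set())
        for m in metaterms:
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=lambda m: (-counts[m], m))

ranking = categorize([
    (["Information Storage and Retrieval"], []),
    (["Information Storage and Retrieval", "Decision Support Techniques"],
     ["organization & administration"]),
])
```

Running this on a batch of indexed records yields the ranked list of relevant specialties described in the Results section.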
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
Genetic Algorithms and Local Search
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial-level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state of the art for this problem.
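A minimal hybrid of a genetic algorithm with greedy local search might look like the following sketch; the bit-string objective, operators, and parameters are toy choices, not the geometric model matching application mentioned above.

```python
import random

def fitness(bits):
    # Toy objective: count of adjacent unequal bits (max = len(bits) - 1).
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def hill_climb(bits):
    # Greedy single-bit-flip local search applied to each offspring:
    # this is the hybridization step that turns a simple GA into a
    # hybrid ("Lamarckian") GA.
    bits, best = bits[:], fitness(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1
            f = fitness(bits)
            if f > best:
                best, improved = f, True
            else:
                bits[i] ^= 1                 # revert the flip
    return bits

def hybrid_ga(n_bits=16, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)        # tournament-of-two selection
            child = max(a, b, key=fitness)[:]
            if rng.random() < 0.05:          # point mutation
                child[rng.randrange(n_bits)] ^= 1
            nxt.append(hill_climb(child))    # local search on offspring
        pop = nxt
    return max(pop, key=fitness)

best = hybrid_ga()
```

The genetic operators provide global exploration while the hill climber polishes every offspring, which is the division of labour that makes hybrid GAs effective.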
Genetic Algorithm for Optimization: Preprocessor and Algorithm
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all the information available, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Characterisation and management of concrete grinding residuals.
Kluge, Matt; Gupta, Nautasha; Watts, Ben; Chadik, Paul A; Ferraro, Christopher; Townsend, Timothy G
2018-02-01
Concrete grinding residue is the waste product resulting from the grinding, cutting, and resurfacing of concrete pavement. Potential beneficial applications for concrete grinding residue include use as a soil amendment and as a construction material, including as an additive to Portland cement concrete. Concrete grinding residue exhibits a high pH, and though not hazardous, it is sufficiently elevated that precautions need to be taken around aquatic ecosystems. Best management practices and state regulations focus on reducing the impact on such aquatic environment. Heavy metals are present in concrete grinding residue, but concentrations are of the same magnitude as typically recycled concrete residuals. The chemical composition of concrete grinding residue makes it a useful product for some soil amendment purposes at appropriate land application rates. The presence of unreacted concrete in concrete grinding residue was examined for potential use as partial replacement of cement in new concrete. Testing of Florida concrete grinding residue revealed no dramatic reactivity or improvement in mortar strength.
Polychlorinated Biphenyls (PCB) Residue Effects Database
U.S. Environmental Protection Agency — The PCB Residue Effects (PCBRes) Database was developed to assist scientists and risk assessors in correlating PCB and dioxin-like compound residues with toxic...
Interpretation on Recycling Plastics from Shredder Residue
EPA is considering an interpretation of its regulations that would generally allow for recycling of plastic separated from shredder residue under the conditions described in the Voluntary Procedures for Recycling Plastics from Shredder Residue.
Algorithms for Global Positioning
DEFF Research Database (Denmark)
Borre, Kai; Strang, Gilbert
The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a DAL (Data Access Library) allowing its information to be accessed by C++, Java and Python clients in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Fatigue Evaluation Algorithms: Review
DEFF Research Database (Denmark)
Passipoularidis, Vaggelis; Brøndsted, Povl
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck...... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor...... blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects...
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
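A compact example of the boosting idea is AdaBoost with one-dimensional decision stumps as the weak "rules of thumb"; the data set below is an invented toy example.

```python
import math

def stump(threshold, sign):
    # Weak "rule of thumb": predict `sign` at or above the threshold,
    # the opposite sign below it.
    return lambda x: sign if x >= threshold else -sign

def adaboost(xs, ys, n_rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                        # start with uniform weights
    ensemble = []                            # list of (alpha, weak learner)
    candidates = [stump(t, s) for t in sorted(set(xs)) for s in (1, -1)]
    for _ in range(n_rounds):
        # Choose the stump with the lowest weighted training error.
        h, err = min(
            ((c, sum(wi for wi, x, y in zip(w, xs, ys) if c(x) != y))
             for c in candidates),
            key=lambda p: p[1])
        if err == 0:                         # perfect weak learner
            ensemble.append((1.0, h))
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: misclassified examples gain weight, correct ones lose.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]

    def predict(x):
        s = sum(alpha * h(x) for alpha, h in ensemble)
        return 1 if s >= 0 else -1
    return predict

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [-1, -1, 1, 1, 1, -1]                   # not separable by one stump
clf = adaboost(xs, ys, n_rounds=3)
```

No single stump classifies this data correctly, but the weighted vote of three stumps does, which is exactly the "highly accurate predictor from weak rules of thumb" effect the abstract describes.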
Likelihood Inflating Sampling Algorithm
Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.
2016-01-01
Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...
Constrained Minimization Algorithms
Lantéri, H.; Theys, C.; Richard, C.
2013-03-01
In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to an unnoisy version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of the inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the model (here linear) of such data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
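The multiplicative algorithms mentioned in the abstract can be illustrated with a generic split-gradient-style sketch (this is an illustrative reading, not the paper's exact SGM): for nonnegative least squares min ||Hx − y||² with H, y ≥ 0, the gradient HᵀHx − Hᵀy splits into a positive part HᵀHx and a negative part Hᵀy, yielding a ratio update that preserves positivity.

```python
import numpy as np

def multiplicative_nnls(H, y, x0, n_iter=500):
    """Generic multiplicative update for min ||Hx - y||^2 with x >= 0.
    The gradient H^T H x - H^T y is split into its positive part (H^T H x)
    and its negative part (H^T y); updating by their ratio keeps every
    component of x strictly positive."""
    x = x0.astype(float).copy()
    Hty = H.T @ y
    for _ in range(n_iter):
        # small epsilon guards against division by zero
        x *= Hty / (H.T @ (H @ x) + 1e-12)
    return x
```

At a fixed point the ratio equals one, i.e. the split gradient parts balance, which is exactly the first-order condition on the positive orthant.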
ALGORITHM OF OBJECT RECOGNITION
Directory of Open Access Journals (Sweden)
Loktev Alexey Alexeevich
2012-10-01
Full Text Available The second important problem to be resolved by the algorithm and its software, which performs automatic design of a complex closed-circuit television system, is recognition of the objects in the image transmitted by the video camera. Since the imaging of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved and can rarely be generalized. The majority of these methods are non-linear; therefore, greater computing power and more complex algorithms are needed to process the image. This paper covers the research of visual object recognition and the implementation of the algorithm in the form of a software application that operates in real-time mode.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
Energy Technology Data Exchange (ETDEWEB)
COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST
2000-07-19
Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of hip symptoms is more difficult than for the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain are varied. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
An efficient algorithm for function optimization: modified stem cells algorithm
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Residual Analysis of Generalized Autoregressive Integrated Moving ...
African Journals Online (AJOL)
In this study, analysis of residuals of generalized autoregressive integrated moving average bilinear time series model was considered. The adequacy of this model was based on testing the estimated residuals for whiteness. Jarque-Bera statistic and squared-residual autocorrelations were used to test the estimated ...
9 CFR 311.39 - Biological residues.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Biological residues. 311.39 Section... Biological residues. Carcasses, organs, or other parts of carcasses of livestock shall be condemned if it is determined that they are adulterated because of the presence of any biological residues. ...
Cycling of grain legume residue nitrogen
DEFF Research Database (Denmark)
Jensen, E.S.
1995-01-01
Symbiotic nitrogen fixation by legumes is the main input of nitrogen in ecological agriculture. The cycling of N-15-labelled mature pea (Pisum sativum L.) residues was studied during three years in small field plots and lysimeters. The residual organic labelled N declined rapidly during the initial...... management methods in order to conserve grain legume residue N sources within the soil-plant system....
Neutron residual stress measurements in linepipe
Law, Michael; Gnaëpel-Herold, Thomas; Luzin, Vladimir; Bowie, Graham
2006-11-01
Residual stresses in gas pipelines are generated by manufacturing and construction processes and may affect the subsequent pipe integrity. In the present work, the residual stresses in eight samples of linepipe were measured by neutron diffraction. Residual stresses changed with some coating processes. This has special implications in understanding and mitigating stress corrosion cracking, a major safety and economic problem in some gas pipelines.
Neutron residual stress measurements in linepipe
International Nuclear Information System (INIS)
Law, Michael; Gnaepel-Herold, Thomas; Luzin, Vladimir; Bowie, Graham
2006-01-01
Residual stresses in gas pipelines are generated by manufacturing and construction processes and may affect the subsequent pipe integrity. In the present work, the residual stresses in eight samples of linepipe were measured by neutron diffraction. Residual stresses changed with some coating processes. This has special implications in understanding and mitigating stress corrosion cracking, a major safety and economic problem in some gas pipelines
Glycogen is large molecules wherein Glucose residues
Indian Academy of Sciences (India)
Glycogen is a large molecule in which glucose residues are linked by α-(1→4) glycosidic bonds into chains, and the chains branch via α-(1→6) linkages. Branching points occur about every fourth residue, which allows glucose ...
Iterative Algorithms for Nonexpansive Mappings
Directory of Open Access Journals (Sweden)
Yao Yonghong
2008-01-01
Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to some fixed point of the mapping.
Park, Keecheol; Oh, Kyungsuk
2017-09-01
In order to investigate the effect of leveling conditions on residual stress evolution during the leveling process of hot rolled high strength steels, the in-plane residual stresses of sheet processed under controlled conditions at the skin-pass mill and levelers were measured by the cutting method. The residual stress was localized near the edge of the sheet. As the sheet thickness increased, the region in which residual stress occurred expanded. The magnitude of residual stress within the sheet was reduced as the deformation occurring during the leveling process increased, but the residual stress itself was not removed completely. The magnitude of camber occurring in the cut plate could be predicted from the residual stress distribution. A numerical algorithm was developed for analysing the effect of leveling conditions on residual stress. It was able to account for the effects of plastic deformation in leveling, tension, work roll bending, and the initial state of the sheet (residual stress and curl distribution). The validity of the simulated results was verified by comparison with the experimentally measured residual stress and curl in a sheet.
Optimisation of centroiding algorithms for photon event counting imaging
International Nuclear Information System (INIS)
Suhling, K.; Airey, R.W.; Morgan, B.L.
1999-01-01
Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs.
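As a concrete illustration of the simplest scheme named in the abstract, a 3-point centre-of-gravity centroid estimates the sub-pixel position of an event from the brightest pixel and its two neighbours (a minimal one-dimensional sketch, not the instrument code):

```python
def cog3_centroid(left, center, right):
    """3-point centre-of-gravity centroid: sub-pixel offset of an
    intensity peak relative to its central (brightest) pixel.
    A symmetric event yields offset 0; asymmetry shifts the estimate
    toward the brighter neighbour."""
    total = left + center + right
    if total == 0:
        raise ValueError("empty event window: cannot centroid")
    return (right - left) / total
```

The fixed pattern noise discussed above arises because such an estimator is biased whenever the real event profile does not match the weighting it assumes, so the estimated offsets cluster at preferred sub-pixel positions.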
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
Parallel Architectures and Bioinspired Algorithms
Pérez, José; Lanchares, Juan
2012-01-01
This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Recent results on Howard's algorithm
DEFF Research Database (Denmark)
Miltersen, P.B.
2012-01-01
Howard’s algorithm is a fifty-year old generally applicable algorithm for sequential decision making in face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
Plataniotis K. N.
1997-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Natural radioactivity in petroleum residues
International Nuclear Information System (INIS)
Gazineu, M.H.P.; Gazineu, M.H.P.; Hazin, C.A.; Hazin, C.A.
2006-01-01
The oil extraction and production industry generates several types of solid and liquid wastes. Scales, sludge and water are typical residues that can be found in such facilities and that can be contaminated with Naturally Occurring Radioactive Material (N.O.R.M.). As a result of oil processing, the natural radionuclides can be concentrated in such residues, forming the so-called Technologically Enhanced Naturally Occurring Radioactive Material, or T.E.N.O.R.M.. Most of the radionuclides that appear in oil and gas streams belong to the 238 U and 232 Th natural series, besides 40 K. The present work was developed to determine the radionuclide content of scales and sludge generated during oil extraction and production operations. Emphasis was given to the quantification of 226 Ra, 228 Ra and 40 K since these radionuclides are responsible for most of the external exposure in such facilities. Samples were taken from the P.E.T.R.O.B.R.A.S. unit in the State of Sergipe, in Northeastern Brazil. They were collected directly from the inner surface of water pipes and storage tanks, or from barrels stored in the waste storage area of the E and P unit. The activity concentrations for 226 Ra, 228 Ra and 40 K were determined by using an HPGe gamma spectrometric system. The results showed concentrations ranging from 42.7 to 2,110.0 kBq/kg for 226 Ra, 40.5 to 1,550.0 kBq/kg for 228 Ra, and 20.6 to 186.6 kBq/kg for 40 K. The results highlight the importance of determining the activity concentration of those radionuclides in oil residues before deciding whether they should be stored or discarded to the environment. (authors)
Natural radioactivity in petroleum residues
Energy Technology Data Exchange (ETDEWEB)
Gazineu, M.H.P. [UNICAP, Dept. de Quimica, Recife (Brazil); Gazineu, M.H.P.; Hazin, C.A. [UFPE, Dept. de Energia Nuclear, Recife (Brazil); Hazin, C.A. [Centro Regional de Ciencias Nucleares/ CNEN, Recife (Brazil)
2006-07-01
The oil extraction and production industry generates several types of solid and liquid wastes. Scales, sludge and water are typical residues that can be found in such facilities and that can be contaminated with Naturally Occurring Radioactive Material (N.O.R.M.). As a result of oil processing, the natural radionuclides can be concentrated in such residues, forming the so-called Technologically Enhanced Naturally Occurring Radioactive Material, or T.E.N.O.R.M.. Most of the radionuclides that appear in oil and gas streams belong to the {sup 238}U and {sup 232}Th natural series, besides 40 K. The present work was developed to determine the radionuclide content of scales and sludge generated during oil extraction and production operations. Emphasis was given to the quantification of {sup 226}Ra, {sup 228}Ra and 40 K since these radionuclides are responsible for most of the external exposure in such facilities. Samples were taken from the P.E.T.R.O.B.R.A.S. unit in the State of Sergipe, in Northeastern Brazil. They were collected directly from the inner surface of water pipes and storage tanks, or from barrels stored in the waste storage area of the E and P unit. The activity concentrations for {sup 226}Ra, {sup 228}Ra and 40 K were determined by using an HPGe gamma spectrometric system. The results showed concentrations ranging from 42.7 to 2,110.0 kBq/kg for {sup 226}Ra, 40.5 to 1,550.0 kBq/kg for {sup 228}Ra, and 20.6 to 186.6 kBq/kg for 40 K. The results highlight the importance of determining the activity concentration of those radionuclides in oil residues before deciding whether they should be stored or discarded to the environment. (authors)
Residual Liquefaction under Standing Waves
DEFF Research Database (Denmark)
Kirca, V.S. Ozgur; Sumer, B. Mutlu; Fredsøe, Jørgen
2012-01-01
This paper summarizes the results of an experimental study which deals with the residual liquefaction of the seabed under standing waves. It is shown that seabed liquefaction under standing waves, although qualitatively similar, exhibits features different from those of liquefaction caused by progressive waves....... The experimental results show that the buildup of pore-water pressure and the resulting liquefaction first start at the nodal section and spread towards the antinodal section. The number of waves to cause liquefaction at the nodal section appears to be equal to that experienced in progressive waves for the same...
Process to recycle shredder residue
Jody, Bassam J.; Daniels, Edward J.; Bonsignore, Patrick V.
2001-01-01
A system and process for recycling shredder residue, in which any polyurethane foam materials are first separated. A fines fraction of less than about 1/4 inch is then separated, leaving a plastics-rich fraction. Thereafter, the plastics-rich fraction is sequentially contacted with a series of solvents, beginning with one or more of hexane or an alcohol to remove automotive fluids; acetone to remove ABS; one or more of EDC, THF or a ketone having a boiling point of not greater than about 125.degree. C. to remove PVC; and one or more of xylene or toluene to remove polypropylene and polyethylene. The solvents are recovered and recycled.
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful applications. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Compression through decomposition into browse and residual images
Novik, Dmitry A.; Tilton, James C.; Manohar, M.
1993-01-01
Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost-effective browsing of the image data (possibly from a remote site) and retrieval of the original image data from the data archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data is then obtained by taking the pixel-by-pixel differences between the original data and the browse image data. We then code the residual data with a form of variable length coding called diagonal coding. In our experiments, the JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic Mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
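The browse-plus-residual decomposition described above can be sketched as follows; a simple intensity quantizer stands in for the JPEG/DCT codec (a hypothetical stand-in for illustration only). The key property is that the signed residual permits exact reconstruction of the original from the lossy browse image.

```python
import numpy as np

def split_browse_residual(original, lossy_codec):
    """Decompose an 8-bit image into a lossy 'browse' image and the
    signed pixel-by-pixel residual needed for exact reconstruction."""
    browse = lossy_codec(original)
    # int16 holds differences of uint8 values without overflow
    residual = original.astype(np.int16) - browse.astype(np.int16)
    return browse, residual

def reconstruct(browse, residual):
    """Exact reconstruction: browse + residual recovers the original."""
    return (browse.astype(np.int16) + residual).astype(np.uint8)
```

In an incremental-transmission archive, only the small browse image travels first; the residual is fetched later if the user requests the lossless original.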
Fault Severity Estimation of Rotating Machinery Based on Residual Signals
Directory of Open Access Journals (Sweden)
Fan Jiang
2012-01-01
Full Text Available Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. The fast Fourier transformation (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery with different fault severity levels in the first stage. These features usually differ among working conditions with different fault sensitivities. Therefore a sensitive feature-selecting algorithm is defined to construct the feature matrix and calculate the statistical parameter (mean) in the second stage. In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
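The multifrequency-band energy features can be sketched as follows (an illustrative reading of MFBE assuming equal-width bands over the one-sided spectrum; the paper's exact banding may differ):

```python
import numpy as np

def multiband_energy(signal, n_bands):
    """Split the one-sided FFT power spectrum of a real signal into
    n_bands roughly equal-width frequency bands and return the total
    energy in each band."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(power, n_bands)])
```

A fault that shifts vibration energy between frequency bands then shows up directly as a change in this feature vector.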
Vorst, H.A. van der; Ye, Q.
1999-01-01
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights in
International Nuclear Information System (INIS)
Mun, M. K.; Lee, C. H.; Em, V. T.
2001-01-01
Neutron diffraction is a unique method for nondestructively measuring the in-depth residual stress distribution of metallic materials. In this paper the principles of residual stress measurement by neutron diffraction are described. The residual stress distribution of a welded stainless steel 304 plate, measured using the HANARO residual stress instrument, is also described.
40 CFR 721.4500 - Isopropylamine distillation residues and ethylamine distillation residues.
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Isopropylamine distillation residues and ethylamine distillation residues. 721.4500 Section 721.4500 Protection of Environment... residues and ethylamine distillation residues. (a) Chemical substances and significant new use subject to...
Residual analysis for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Turner, R.; Møller, Jesper
process. Residuals are ascribed to locations in the empty background, as well as to data points of the point pattern. We obtain variance formulae, and study standardised residuals. There is also an analogy between our spatial residuals and the usual residuals for (non-spatial) generalised linear models...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....
Cycling of grain legume residue nitrogen
DEFF Research Database (Denmark)
Jensen, E.S.
1995-01-01
weeks of decomposition, due to high rates of residue N net mineralization and subsequent leaching and denitrification losses of N. Lysimeter experiments showed that pea residues may reduce leaching losses of N, probably due to their effect on the mineralization-immobilizalion turnover of N...... and denitrification. Winter barley succeeding field pea recovered 13% of the incorporated pea residue N by early December; the recovery was found to be 15% at maturity in July. A spring-sown crop of barley recovered less than half the amount of pea residue N recovered by winter barley. The residue N-use efficiencies...
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction is increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of residual vibration were given and independent of structural design. Since the excitations resulting from the impact load often depend on structural design, this paper proposes a new efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of initial excitations on structural design variables may strongly affect the accuracy of sensitivities.
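The Lyapunov-equation shortcut for the integrated quadratic index can be sketched for a stable linear free response dx/dt = Ax, x(0) = x0: then J = ∫₀^∞ xᵀQx dt = x0ᵀPx0, where AᵀP + PA + Q = 0. The sketch below solves the Lyapunov equation with a dense Kronecker/vec formulation (an illustrative sketch, not the paper's implementation, and only practical for small n):

```python
import numpy as np

def residual_vibration_index(A, Q, x0):
    """Integrated quadratic index J = int_0^inf x(t)^T Q x(t) dt for the
    free response dx/dt = A x, x(0) = x0, with A stable.
    Solves A^T P + P A = -Q via the column-stacked vec identity
    (I kron A^T + A^T kron I) vec(P) = -vec(Q), then J = x0^T P x0."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    return float(x0 @ P @ x0)
```

Because J is available in closed form from P, no time integration of the response is needed, which is what makes the index cheap to differentiate inside a gradient-based optimization loop.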
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
Filtering every global constraint of a CSP to arc consistency at every search step can be costly, and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed......
Monitoring antibiotic residues in honey
Directory of Open Access Journals (Sweden)
Monica Cristina Cara,
2011-12-01
Full Text Available Next to the beta-lactam antibiotics, streptomycin is one of the most widely used antibiotics in veterinary medicine. High concentrations of streptomycin can lead to ototoxic and nephrotoxic effects. Low concentrations – as found in food – may cause allergies, destroy the intestinal flora and favor immunity in some pathogenic microorganisms. In 1948 chlortetracycline was isolated by Duggar as a metabolite; this was the first antibiotic substance of the tetracycline group. The present paper reports the monitoring of antibiotic residues in honey from Timis County. The residues of tetracycline and streptomycin in honey were determined by ELISA, a quantitative detection method. The microtitre wells are coated with anti-tetracycline and anti-streptomycin antibodies. Free antibiotic and immobilized antibiotic compete for the added antibody (competitive immunoassay reaction). Any unbound antibody is then removed in a washing step. Bound enzyme conjugates convert the colorless chromogen into a blue product. The addition of the stop reagent leads to a color change from blue to yellow. The measurement is made photometrically at 450 nm. The absorption is inversely proportional to the antibiotic concentration in the sample.
Residual Stresses In 3013 Containers
International Nuclear Information System (INIS)
Mickalonis, J.; Dunn, K.
2009-01-01
The DOE Complex is packaging plutonium-bearing materials for storage and eventual disposition or disposal. The materials are handled according to DOE-STD-3013, which outlines general requirements for stabilization, packaging, and long-term storage. The storage vessels for the plutonium-bearing materials are termed 3013 containers. Stress corrosion cracking has been identified as a potential container degradation mode, and this work determined that the residual stresses in the containers are sufficient to support such cracking. Sections of the 3013 outer, inner, and convenience containers, in both the as-fabricated condition and the closure-welded condition, were evaluated per ASTM standard G-36. The standard requires exposure to a boiling magnesium chloride solution, which is an aggressive testing solution. Tests in a less aggressive 40% calcium chloride solution were also conducted. These tests were used to reveal the relative stress corrosion cracking susceptibility of the as-fabricated 3013 containers. Significant cracking was observed in all containers in areas near welds and transitions in the container diameter. Stress corrosion cracks developed in both the lid and the body of gas tungsten arc welded and laser closure-welded containers. The development of stress corrosion cracks in the as-fabricated and in the closure-welded container samples demonstrates that the residual stresses in the 3013 containers are sufficient to support stress corrosion cracking if the environmental conditions inside the containers do not preclude the cracking process.
Residual Fragments after Percutaneous Nephrolithotomy
Directory of Open Access Journals (Sweden)
Kaan Özdedeli
2012-09-01
Full Text Available Clinically insignificant residual fragments (CIRFs are described as asymptomatic, noninfectious and nonobstructive stone fragments (≤4 mm remaining in the urinary system after the last session of any intervention (ESWL, URS or PCNL for urinary stones. Their insignificance is questionable, since CIRFs could eventually become significant: their presence may result in recurrent stone growth, and they may cause pain and infection due to urinary obstruction. They may become the source of persistent infections, and a significant portion of the patients will have a stone-related event requiring auxiliary interventions. CT seems to be the ultimate choice of assessment. Although there is no consensus about the timing, recent data suggest that it may be performed one month after the procedure. However, imaging can be done in the immediate postoperative period if there are no tubes blurring the assessment. There is some evidence indicating that selective medical therapy may have an impact on decreasing stone formation rates. Retrograde intrarenal surgery, with its minimally invasive nature, seems to be the best way to deal with residual fragments.
Lin Zhang; Na Yin; Xiong Fu; Qiaomin Lin; Ruchuan Wang
2017-01-01
With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence, without considering the residual energy of the nodes and many other problems....
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is theoretically equivalent to the morphological opening/closing. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points are interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity of this algorithm and its efficiency advantage over the naive algorithms.
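For reference, the naive construction that the alpha-shape algorithm accelerates can be written down directly. The sketch below uses a flat (line-segment) structuring element on a 1-D profile rather than the paper's rolling ball, purely to show the dilation-erosion composition being avoided:

```python
def dilate(profile, width):
    """Naive morphological dilation with a flat structuring element (O(n*width))."""
    half = width // 2
    return [max(profile[max(0, i - half):i + half + 1]) for i in range(len(profile))]

def erode(profile, width):
    """Naive morphological erosion with a flat structuring element."""
    half = width // 2
    return [min(profile[max(0, i - half):i + half + 1]) for i in range(len(profile))]

def closing(profile, width):
    """Closing envelope = erosion of the dilation; fills valleys narrower than the element."""
    return erode(dilate(profile, width), width)

print(closing([1, 1, 0, 1, 1], 3))  # the narrow valley is filled: [1, 1, 1, 1, 1]
```

The fast algorithm in the paper obtains the same envelope from the alpha hull in O(n log n), without ever forming the intermediate dilation.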
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of them is cryptography: a method of securing a file by transforming it into a hidden code that conceals the original content, so that people who do not hold the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm encrypts the file, the ciphertext consists of ASCII (American Standard Code for Information Interchange) characters written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length is increased by eight characters.
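For concreteness, the symmetric half of such a hybrid scheme can be sketched with a standard TEA implementation (the LUC key-exchange half is omitted; this is a generic TEA reference sketch, not the authors' code). TEA encrypts a 64-bit block, held as two 32-bit halves, under a 128-bit key of four words:

```python
DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF  # TEA magic constant and 32-bit mask

def tea_encrypt(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit halves) with a 128-bit key (4 words)."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Invert the 32 rounds by running them backwards."""
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # example key, not from the paper
block = (0xDEADBEEF, 0x01020304)
cipher = tea_encrypt(*block, key)
assert tea_decrypt(*cipher, key) == block  # round-trip recovers the plaintext
```

Each 8-byte plaintext block yields an 8-byte cipher block, i.e. 16 hexadecimal characters when printed, which matches the size growth reported in the abstract.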
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
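The onshore/offshore binarization step described above amounts to a simple angular test per grid cell. A sketch under the assumption that a single coastline-normal reference direction is available for each cell (the east-facing coast below is invented for illustration):

```python
def binarize_onshore(wind_from_deg, onshore_from_deg):
    """Return 1 (onshore) if the wind's 'from' direction is within 90 degrees of the
    direction an onshore wind blows from, else 0 (offshore). Handles 360-degree wrap."""
    diff = (wind_from_deg - onshore_from_deg + 180.0) % 360.0 - 180.0
    return 1 if abs(diff) < 90.0 else 0

# Hypothetical east-facing coast: an onshore wind blows from 90 degrees (the east)
assert binarize_onshore(90.0, 90.0) == 1    # straight onshore
assert binarize_onshore(270.0, 90.0) == 0   # straight offshore
assert binarize_onshore(350.0, 10.0) == 1   # wrap-around near north
```

Applying this test to each station or grid point at each 5-minute step produces the binary fields D(i,j;n) and d(i,j;n) that CEM compares.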
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
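The compression-based approximation described above belongs to the same family as the normalized compression distance (NCD). A minimal zlib-based sketch of that family of divergence measures (a generic illustration of the approach, not the authors' exact estimator):

```python
import zlib

def c(data: bytes) -> int:
    """Approximate Kolmogorov complexity of a string by its compressed size."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar strings, larger for unrelated ones."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

english = b"the quick brown fox jumps over the lazy dog " * 50
digits = b"0123456789" * 100
assert ncd(english, english) < ncd(english, digits)  # self-distance is smallest
```

Because the compressor exploits shared structure in the concatenation x + y, the measure is applicable to any pair of strings, which is what makes applications such as authorship attribution and image classification possible.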
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests, without temporal flexibility, that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
Kantak, Anil V.
1993-01-01
A novel algorithm to obtain all signal components of a residual carrier signal with any number of channels is presented. The phase modulation type may be NRZ-L or split phase (Manchester). The algorithm also provides a simple way to obtain the power contents of the signal components. Steps to recognize the signal components that influence the carrier tracking loop and the data tracking loop at the receiver are given. A computer program for numerical computation is also provided.
Applications of algorithmic differentiation to phase retrieval algorithms.
Jurling, Alden S; Fienup, James R
2014-07-01
In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of the algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.
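As a concrete instance of the kind of gradient such a framework delivers, the Fourier-magnitude error metric E(g) = Σ_k (|G_k| − m_k)², with G = FFT(g), has the closed-form gradient 2N·IFFT(W) with W_k = (|G_k| − m_k)·G_k/|G_k| by Wirtinger calculus. This is a textbook phase-retrieval example, not the paper's code; the sketch checks it against finite differences:

```python
import numpy as np

def metric_and_grad(g, m):
    """Error metric E = sum((|FFT(g)| - m)^2) and its gradient with respect to g.

    Returns grad as a complex array whose real/imag parts are dE/dRe(g) and dE/dIm(g).
    Assumes no Fourier coefficient is exactly zero (avoids division by zero)."""
    n = g.size
    G = np.fft.fft(g)
    r = np.abs(G) - m
    W = r * G / np.abs(G)                     # Wirtinger derivative dE/dG*
    return np.sum(r ** 2), 2.0 * n * np.fft.ifft(W)

rng = np.random.default_rng(0)
g = rng.normal(size=8) + 1j * rng.normal(size=8)
m = np.abs(np.fft.fft(rng.normal(size=8))) + 1.0

E, grad = metric_and_grad(g, m)
eps, e0 = 1e-6, np.zeros(8)
e0[0] = 1.0
numeric = (metric_and_grad(g + eps * e0, m)[0] - metric_and_grad(g - eps * e0, m)[0]) / (2 * eps)
assert abs(numeric - grad[0].real) < 1e-4 * max(1.0, abs(numeric))  # matches finite differences
```

Reverse-mode algorithmic differentiation automates exactly this kind of derivation for far more complicated metrics, removing both the hand calculus and the finite-difference fallback.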
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
MORTAR WITH UNSERVICEABLE TIRE RESIDUES
Directory of Open Access Journals (Sweden)
J. A. Canova
2009-01-01
Full Text Available This study analyzes the effects of unserviceable tire residues on rendering mortar using lime and washed sand at a volumetric proportion of 1:6. The ripened composite was dried in an oven and combined with both cement at a volumetric proportion of 1:1.5:9 and rubber powder in proportional aggregate volumes of 6, 8, 10, and 12%. Water exudation was evaluated in the plastic state. Water absorption by capillarity, fresh shrinkage and mass loss, restrained shrinkage and mass loss, void content, flexural strength, and deformation energy under compression were evaluated in the hardened state. There was an improvement in the water exudation, water absorption by capillarity, and drying shrinkage, as well as a reduction of the void content and flexural strength. The product studied significantly reduced the water exudation from the mortar and capillary rise in the rendering.
Landfill Mining of Shredder Residues
DEFF Research Database (Denmark)
Hansen, Jette Bjerre; Hyks, Jiri; Shabeer Ahmed, Nassera
In Denmark, shredder residues (SR) are classified as hazardous waste, and until January 2012 all SR were landfilled. It is estimated that more than 1.8 million tons of SR have been landfilled in mono cells. This paper describes investigations conducted at two Danish landfills. SR were excavated...... from the landfills and size-fractionated in order to recover potential resources such as metals and energy and to reduce the amount of SR left for re-landfilling. Based on the results, it is estimated that 60-70% of the excavated SR could be recovered as materials or energy. Only a fraction...... with particle size less than 5 mm needs to be re-landfilled, at least until suitable techniques are available for recovery of materials with small particle sizes....
Forest residues in cattle feed
Directory of Open Access Journals (Sweden)
João Elzeário Castelo Branco Iapichini
2012-12-01
Full Text Available Ruminants are capable of converting low-quality feed when it is complemented with a high-energy source. By using regional agricultural residues, it is possible to run more economical production systems, since energy feeds carry a high cost in animal production. Residues from agroforestry activities are abundantly available worldwide, so that if even a small fraction of them were used with appropriate technical criteria they could largely meet the needs of existing herds and thus the demand for protein of animal origin. The Southwest Region of São Paulo State has a large area occupied by reforestation and wide availability of non-timber forest residues, which may represent a more concentrated energy feed for ruminant production. This experiment aimed to evaluate the acceptability of ground pine cone (20, 30 and 40%) replacing part of the energy feed (corn) in the composition of the concentrate, and was performed at the Experimental Station of Itapetininga - Forest Institute / SMA, in the dry season of 2011. Four crossbred steers were used, about 18 months old with an average body weight of 250 kg, housed in a paddock provided with water ad libitum and covered troughs for supplementation with the experimental diet. The adjustment period for the animals was 7 days, and consumption levels, physiological changes, acceptability and physiological parameters were observed during the following 25 days. The concentrate supplement was formulated based on corn (76.2%), soybean meal (20%), urea (2%), ammonium sulfate (0.4%), calcite (1.4%), mineral core (1%) and finely ground pine cone replacing corn. The formulas were prepared to be isoproteic and isoenergetic, containing the following nutrient levels: 22% crude protein (CP) and 79% total digestible nutrients (TDN). The animals received the supplement in three steps for each level of cone replaced, being offered in the
Algorithms and their others: Algorithmic culture in context
Directory of Open Access Journals (Sweden)
Paul Dourish
2016-08-01
Full Text Available Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks
Directory of Open Access Journals (Sweden)
Peng Jiang
2017-01-01
Full Text Available For event dynamic K-coverage algorithms, each management node selects its assistant node using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events those nodes dynamically cover. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance of the events it manages as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). The algorithm explicitly considers the effect of harsh underwater environments on information collection and transmission, as well as the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
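The final TOPSIS step can be sketched in isolation. Given a decision matrix of alternatives (rows) scored on criteria (columns), TOPSIS ranks each alternative by its relative closeness to the ideal solution. The matrix and weights below are invented for illustration, not DEEKA's actual objectives:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Closeness scores in [0, 1]; higher is better.

    X: alternatives x criteria matrix; benefit[j] is True if criterion j is to be maximized."""
    V = X / np.linalg.norm(X, axis=0) * weights           # vector-normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_ideal = np.linalg.norm(V - ideal, axis=1)           # distance to ideal solution
    d_anti = np.linalg.norm(V - anti, axis=1)             # distance to anti-ideal solution
    return d_anti / (d_ideal + d_anti)

# Three candidate strategies scored on (energy reserve, detection quality), both benefits
X = np.array([[3.0, 4.0], [1.0, 1.0], [2.0, 2.0]])
scores = topsis(X, np.array([0.5, 0.5]), np.array([True, True]))
assert int(np.argmax(scores)) == 0  # the strategy dominating both criteria ranks first
```

In DEEKA this ranking is applied to the Pareto set returned by NSGA-II, so the chosen strategy is both non-dominated and closest to the ideal trade-off.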
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues.
Perišić, Ognjen
2018-03-16
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self-)adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are the residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted sum calculation using the ratio of the predicted and expected numbers of target residues (contact residues and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared to that of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predictive ability was comparable, or even inferior, to that of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially if of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit the opposite dynamic behavior: while the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable.
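The fast-mode machinery behind this identification can be sketched with a toy GNM: build the Kirchhoff (connectivity) matrix from Cα coordinates, diagonalize it, and score residues by their weighted participation in the highest-frequency modes. The cutoff and mode count below are common GNM defaults chosen for illustration, and the straight-line geometry is synthetic:

```python
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """GNM Kirchhoff matrix: -1 for residue pairs within the cutoff, degree on the diagonal."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(dist < cutoff).astype(float)
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))
    return gamma

def hot_residue_scores(coords, cutoff=7.0, n_fast=2):
    """Weighted sum of squared eigenvector components over the n_fast fastest modes."""
    vals, vecs = np.linalg.eigh(kirchhoff(coords, cutoff))  # eigenvalues ascending
    w = vals[-n_fast:]                                      # fastest modes = largest eigenvalues
    return (vecs[:, -n_fast:] ** 2 * w).sum(axis=1) / w.sum()

# Toy 'chain' of 10 residues spaced 3.8 Angstroms apart along a line (hypothetical geometry)
coords = np.zeros((10, 3))
coords[:, 0] = 3.8 * np.arange(10)
scores = hot_residue_scores(coords)
hot = np.argsort(scores)[::-1][:3]  # indices of the three kinetically hottest residues
```

The paper's adjustable step then varies n_fast until the residues flagged by these scores match the expected number of contact and first-layer residues.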
Fighting Censorship with Algorithms
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become...... the standard in EDs worldwide. However, triage models are also time-consuming, supported by limited evidence, and could potentially be of more harm than benefit. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and to evaluate whether this new model...... is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
Detection of antibiotic residues in poultry meat.
Sajid, Abdul; Kashif, Natasha; Kifayat, Nasira; Ahmad, Shabeer
2016-09-01
The antibiotic residues in poultry meat can pose certain hazards to human health, among them sensitivity to antibiotics, allergic reactions, mutation in cells, imbalance of the intestinal microbiota, and bacterial resistance to antibiotics. The purpose of the present paper was to detect antibiotic residues in poultry meat. During the present study, a total of 80 poultry kidney and liver samples were collected and tested for different antibiotic residues at different pH levels, using Escherichia coli at pH 6 and 7 and Staphylococcus aureus at pH 8 and 9. Out of the 80 samples, only 4 were positive for antibiotic residues. The highest concentrations of antibiotic residues found in these tissues were tetracycline (8%), followed by ampicillin (4%), streptomycin (2%), and aminoglycosides (1%), as compared to other antibiotics like sulfonamides, neomycin, and gentamicin. It was concluded that these microorganisms at these pH levels can be effectively used for the detection of antibiotic residues in poultry meat.
Distribution of residues and primitive roots
Indian Academy of Sciences (India)
Replacing the function f by g, we get the required estimate for N(p, N). □ Proof of Theorem 1.1. When p = 7, we clearly see that (1, 2) is a consecutive pair of quadratic residues modulo 7. Assume that p ≥ 11. If 10 is a quadratic residue modulo p, then we have (9, 10) as a consecutive pair of quadratic residues modulo p, ...
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper summarizes typical routing algorithms for the smart grid by analyzing the communication business and communication requirements of the intelligent grid. Two main classes of routing algorithms are analyzed, namely clustering-based routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages, and applicability of typical algorithms in each class are discussed.
Genetic Algorithms in Noisy Environments
THEN, T. W.; CHONG, EDWIN K. P.
1993-01-01
Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new proposed algorithm is data driven and self-adaptive: it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Fuzzy HRRN CPU Scheduling Algorithm
Bashir Alam; R. Biswas; M. Alam
2011-01-01
There are several scheduling algorithms like FCFS, SRTN, RR, priority etc. Scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use fuzzy techniques in scheduling the jobs. In this paper we have proposed a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy technique FIS.
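For reference, classic (crisp) HRRN selects the ready job with the highest response ratio (W + S)/S; the paper's contribution replaces these crisp parameters with fuzzy ones via an FIS. A minimal sketch of the crisp baseline only (illustrative; the function names are our own):

```python
def hrrn_ratio(waiting_time, burst_time):
    """Crisp HRRN response ratio: (W + S) / S."""
    return (waiting_time + burst_time) / burst_time

def pick_next(jobs, now):
    """Choose the ready job with the highest response ratio.
    jobs: list of (name, arrival_time, burst_time) tuples."""
    ready = [j for j in jobs if j[1] <= now]
    return max(ready, key=lambda j: hrrn_ratio(now - j[1], j[2]))

jobs = [("A", 0, 8), ("B", 1, 4), ("C", 2, 1)]
# At t = 10: A has ratio 18/8, B has 13/4, C has 9/1, so C runs next.
print(pick_next(jobs, 10)[0])   # -> C
```

The ratio grows with waiting time, so short jobs are favoured without starving long ones; a fuzzy variant would replace the crisp W and S with membership functions.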
Artificial Neural Networks and Concentration Residual Augmented ...
African Journals Online (AJOL)
Artificial Neural Networks and Concentration Residual Augmented Classical Least Squares for the Simultaneous Determination of Diphenhydramine, Benzonatate, Guaifenesin and Phenylephrine in their Quaternary Mixture.
RESIDUES IN CARROTS TREATED WITH LINURON
DEFF Research Database (Denmark)
Løkke, Hans
1974-01-01
Investigations have been carried out on residues of linuron and its breakdown products in carrots sprayed with linuron at 1, 2, or 4 kg a.i./ha, 0, 19, 28, 36 or 60 days after sowing (up to 57 days before harvesting). The extracted residues were separated into three fractions by liquid......,4-dichloroaniline and iodide ion, followed by gas chromatography with an electron capture detector. Only 5-13% of the extractable residues were breakdown products. Most of the detectable residue (87-95%) was identified as linuron. The relative proportions of linuron and breakdown products in carrots at the time...
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtably a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the wellknown distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
A polynomial time algorithm for solving the maximum flow problem in directed networks
International Nuclear Information System (INIS)
Tlas, M.
2015-01-01
An efficient polynomial time algorithm for solving maximum flow problems has been proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example has been illustrated using this proposed algorithm. (author)
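The abstract does not give the algorithm itself; as a baseline illustration of augmenting along shortest paths in a residual network (an Edmonds-Karp-style sketch, not the paper's capacity-based variant), one might write:

```python
from collections import deque

def max_flow(cap, s, t):
    """BFS-based augmentation on the residual network.
    cap: dict-of-dicts of nonnegative arc capacities, cap[u][v]."""
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    res = {u: dict(cap.get(u, {})) for u in nodes}
    for u in cap:
        for v in cap[u]:
            res[v].setdefault(u, 0)          # reverse residual arcs
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:         # BFS = shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                      # walk parents back to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:                    # push b along the path
            res[u][v] -= b
            res[v][u] += b
        flow += b

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))   # -> 4
```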
Deep residual networks of residual networks for image super-resolution
Wei, Xueqi; Yang, Fumeng; Wu, Congzhong
2017-11-01
Single image super-resolution (SISR), which aims at obtaining a high-resolution image from a single low-resolution image, is a classical problem in computer vision. In this paper, we address this problem based on a deep learning method with residual learning in an end-to-end manner. We propose a novel residual-network architecture, Residual networks of Residual networks (RoR), to promote the learning capability of residual networks for SISR. In a residual network, the signal can be directly propagated from one unit to any other unit in both forward and backward passes when using identity mapping as the skip connections. Building on this, we add level-wise connections on top of the original residual networks to further exploit their optimization ability. Our experiments demonstrate the effectiveness and versatility of RoR: it converges faster and gains higher resolution accuracy from considerably increased depth.
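The identity-mapping property the authors rely on, i.e. the signal propagating unchanged through a skip connection, can be seen in a toy residual block (a generic NumPy sketch, not the RoR architecture):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = x + F(x): an identity skip connection around two weight layers.
    The input x passes unchanged through the addition, which is what lets
    signals (and gradients) flow directly between units."""
    h = np.maximum(0.0, x @ w1)   # first layer with ReLU
    return x + h @ w2             # identity mapping added back

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))
# With zero weights the residual branch F(x) vanishes and the block
# reduces exactly to the identity:
y = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
print(np.allclose(y, x))   # -> True
```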
Ruijter, de F.J.; Huijsmans, J.F.M.
2012-01-01
This paper gives an overview of available literature data on ammonia volatilization from crop residues. From these data, a relation is derived for the ammonia emission depending on the N-content of crop residue.
Process for measuring residual stresses
International Nuclear Information System (INIS)
Elfinger, F.X.; Peiter, A.; Theiner, W.A.; Stuecker, E.
1982-01-01
No single process can at present solve all problems. The fully destructive processes have only a limited field of application, as the component cannot be reused; however, they are essential for the basic determination of stress distributions in research and development. Destructive and non-destructive processes are mainly used if investigations have to be carried out on original components. With increasing component size, the share of destructive tests becomes smaller. The main applications are quality assurance, testing of manufactured parts and characterization of components. Among the non-destructive test procedures, X-ray methods are the most highly developed; they give residual stresses at the surface and in surface layers near the edges. Further development is desirable both in assessment and in measuring techniques. Ultrasonic and magnetic crack detection processes are at present mainly used in research and development, and also in quality assurance. Because of the variable depth of penetration and the possibility of automation they are gaining in importance. (orig./RW) [de
Using In Silico Fragmentation to Improve Routine Residue Screening in Complex Matrices
Kaufmann, Anton; Butcher, Patrick; Maden, Kathryn; Walker, Stephan; Widmer, Mirjam
2017-12-01
Targeted residue screening requires the use of reference substances in order to identify potential residues. This becomes a difficult issue when using multi-residue methods capable of analyzing several hundreds of analytes. Therefore, the capability of in silico fragmentation based on a structure database ("suspect screening") instead of physical reference substances for routine targeted residue screening was investigated. The detection of fragment ions that can be predicted or explained by in silico software was utilized to reduce the number of false positives. These "proof of principle" experiments were done with a tool that is integrated into a commercial MS vendor instrument operating software (UNIFI) as well as with a platform-independent MS tool (Mass Frontier). A total of 97 analytes belonging to different chemical families were separated by reversed phase liquid chromatography and detected in a data-independent acquisition (DIA) mode using ion mobility hyphenated with quadrupole time of flight mass spectrometry. The instrument was operated in the MSE mode with alternating low and high energy traces. The fragments observed from product ion spectra were investigated using a "chopping" bond disconnection algorithm and a rule-based algorithm. The bond disconnection algorithm clearly explained more analyte product ions and a greater percentage of the spectral abundance than the rule-based software (92 out of the 97 compounds produced ≥1 explainable fragment ions). On the other hand, tests with a complex blank matrix (bovine liver extract) indicated that the chopping algorithm reports significantly more false positive fragments than the rule-based software.
Backtrack Orbit Search Algorithm
Knowles, K.; Swick, R.
2002-12-01
A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm--an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.
Diagnostic algorithm for syncope.
Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon
2014-09-01
Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained.
Toward an Algorithmic Pedagogy
Directory of Open Access Journals (Sweden)
Holly Willis
2007-01-01
Full Text Available The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much-needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push farther: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today’s high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only of ways of being but of ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically and to what ends?
National Aeronautics and Space Administration — This proposed research addresses the problem of optimal maneuver detection and reconstruction with regards to an astrodynamics application. Maneuver detection and...
Tank 12H residuals sample analysis report
Energy Technology Data Exchange (ETDEWEB)
Oji, L. N. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Shine, E. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Diprete, D. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hay, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-06-11
The Savannah River National Laboratory (SRNL) was requested by Savannah River Remediation (SRR) to provide sample preparation and analysis of the Tank 12H final characterization samples to determine the residual tank inventory prior to grouting. Eleven Tank 12H floor and mound residual material samples and three cooling coil scrape samples were collected and delivered to SRNL between May and August of 2014.
Does Bt Corn Really Produce Tougher Residues
Bt corn hybrids produce insecticidal proteins that are derived from a bacterium, Bacillus thuringiensis. There have been concerns that Bt corn hybrids produce residues that are relatively resistant to decomposition. We conducted four experiments that examined the decomposition of corn residues und...
Residual stresses in steel and zirconium weldments
International Nuclear Information System (INIS)
Root, J.H.; Coleman, C.E.; Bowden, J.W.
1997-01-01
Three-dimensional scans of residual stress within intact weldments provide insight into the consequences of various welding techniques and stress-relieving procedures. The neutron diffraction method for nondestructive evaluation of residual stresses has been applied to a circumferential weld in a ferritic steel pipe of outer diameter 114 mm and thickness 8.6 mm. The maximum tensile stresses, 250 MPa in the hoop direction, are found at mid-thickness of the fusion zone. The residual stresses approach zero within 20 mm from the weld center. The residual stresses caused by welding zirconium alloy components are partially to blame for failures due to delayed-hydride cracking. Neutron diffraction measurements in a GTA-welded Zr-2.5 Nb plate have shown that heat treatment at 530 C for 1 h reduces the longitudinal residual strain by 60%. Neutron diffraction has also been used to scan the residual stresses near circumferential electron beam welds in irradiated and unirradiated Zr-2.5 Nb pressure tubes. The residual stresses due to electron beam welding appear to be lower than 130 MPa, even in the as-welded state. No significant changes occur in the residual stress pattern of the electron-beam welded tube, during a prolonged exposure to thermal neutrons and the temperatures typical of an operating nuclear reactor
Densification of FL Chains via Residuated Frames
Czech Academy of Sciences Publication Activity Database
Baldi, Paolo; Terui, K.
2016-01-01
Roč. 75, č. 2 (2016), s. 169-195 ISSN 0002-5240 R&D Projects: GA ČR GAP202/10/1826 Keywords : densifiability * standard completeness * residuated lattices * residuated frames * fuzzy logic Subject RIV: BA - General Mathematics Impact factor: 0.625, year: 2016
Spatial resolution enhancement residual coding using hybrid ...
Indian Academy of Sciences (India)
normal video frames possess distinct characteristics compared to a residual frame. In this paper, we .... analyze the characteristics of IP, MC and RE residuals (Kamisli 2010; Rao et al 2007). The estimation ..... Eslami R and Radha H 2007 A new family of nonredundant transforms using hybrid wavelets and directional filter ...
Semantic Tagging with Deep Residual Networks
Bjerva, Johannes; Plank, Barbara; Bos, Johan
2016-01-01
We propose a novel semantic tagging task, semtagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate
Soil water evaporation and crop residues
Crop residues have value when left in the field and also when removed from the field and sold as a commodity. Reducing soil water evaporation (E) is one of the benefits of leaving crop residues in place. E was measured beneath a corn canopy at the soil surface with nearly full coverage by corn stover...
Unicystic ameloblastoma arising from a residual cyst
Mahajan, Amit D; Manjunatha, Bhari Sharanesha; Khurana, Neha M; Shah, Navin
2014-01-01
Intraoral swellings involving alveolar ridges in edentulous patients are clinically diagnosed as residual cysts, traumatic bone cysts, Stafne's jaw bone cavity, ameloblastoma and metastatic tumours of the jaw. This case report describes a residual cyst in a 68-year-old edentulous male patient which was enucleated and histopathologically confirmed as a unicystic ameloblastoma. PMID:25199192
Electrodialytic remediation of air pollution control residues
DEFF Research Database (Denmark)
Jensen, Pernille Erland
Air pollution control (APC) residue from municipal solid waste incineration (MSWI) consists of the fly ash, and, in dry and semi-dry systems, also the reaction products from the flue gas cleaning process. APC residue is considered a hazardous waste due to its high alkalinity, high content of salts...
Distribution of residues and primitive roots
Indian Academy of Sciences (India)
quadratic residues and non-residues cases using some refinement of van der Waerden's theorem in combinatorial number theory. Therefore, in his proof, the constant p0(N) depends on the van der Waerden number, which is very difficult to calculate for all N. For instance, recently, Luca and Thangadurai [8] proved that for all ...
Bioaccumulation and distribution of organochlorine residues across ...
African Journals Online (AJOL)
The transfer of organochlorine residues in the food chain and its distribution in the trophic levels was influenced by habitat, environmental conditions, feeding habit and biochemical composition of individual populations. The total residual concentration of OCPs in shellfish and fish ranged between 0.16 ppm and 0.69 ppm.
Power from wastewater and residual products
DEFF Research Database (Denmark)
Krogh-Jeppesen, K.
2007-01-01
Microbial fuel cells utilise wastewater and residual products from the pretreatment of straw to generate power. Denmark could lead the way...
Residuals Management and Water Pollution Control Planning.
Environmental Protection Agency, Washington, DC. Office of Public Affairs.
This pamphlet addresses the problems associated with residuals and water quality especially as it relates to the National Water Pollution Control Program. The types of residuals and appropriate management systems are discussed. Additionally, one section is devoted to the role of citizen participation in developing management programs. (CS)
Streaming Algorithms for Line Simplification
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Hachenberger, Peter
2010-01-01
this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...
Echo Cancellation I: Algorithms Simulation
Directory of Open Access Journals (Sweden)
P. Sovka
2000-04-01
Full Text Available An echo cancellation system used in mobile communications is analyzed. Convergence behavior and misadjustment of several LMS algorithms are compared. The misadjustment means errors in filter weight estimation. The resulting echo suppression for the discussed algorithms with simulated as well as real speech signals is evaluated. The optimal echo cancellation configuration is suggested.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^{1/4}, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
International Nuclear Information System (INIS)
Grady, M.
1986-01-01
I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs
Global alignment algorithms implementations | Fatumo ...
African Journals Online (AJOL)
In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a 1.60 GHz Linux platform with 512 MB of RAM, running SUSE 9.2 and 10.1.
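As an illustration of the second route, a compact Needleman-Wunsch scoring routine (a generic sketch; the scoring parameters are our own, not necessarily the paper's):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic dynamic-programming recurrence."""
    m, n = len(a), len(b)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # aligning a prefix against nothing
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[m][n]

print(needleman_wunsch("GATTACA", "GCATGCU"))   # -> 0
```

A traceback over F would recover the alignment itself; only the score is computed here.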
Recovery Rate of Clustering Algorithms
Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S
2009-01-01
This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old
Quantum algorithms and learning theory
Arunachalam, S.
2018-01-01
This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the
On exact algorithms for treewidth
Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.
2006-01-01
We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a
Cascade Error Projection Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
77 FR 24671 - Compliance Guide for Residue Prevention and Agency Testing Policy for Residues
2012-04-25
... Food Safety and Inspection Service Compliance Guide for Residue Prevention and Agency Testing Policy for Residues AGENCY: Food Safety and Inspection Service, USDA. ACTION: Notice of availability and... availability of a compliance guide for the prevention of violative residues in livestock slaughter...
Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.
Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay
2013-07-01
In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art High Efficiency Video Coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in R-D performance due to the side-information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Conditions.
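The cascade idea, a sparse first layer over a trained dictionary with a DCT layer applied to whatever signal remains, can be sketched in miniature (a toy illustration using matching pursuit, not the HEVC-integrated codec; all names are our own):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def two_layer_code(r, D, n_atoms=2, keep=4):
    """Layer 1: matching pursuit picks a few atoms (unit-norm rows of D)
    for the structured part of residual r.  Layer 2: DCT of the remaining
    signal, retaining only the `keep` largest coefficients."""
    x = r.astype(float).copy()
    sparse_part = np.zeros_like(x)
    for _ in range(n_atoms):              # greedy atom selection
        scores = D @ x
        i = np.argmax(np.abs(scores))
        sparse_part += scores[i] * D[i]
        x -= scores[i] * D[i]
    T = dct_matrix(len(x))
    c = T @ x                             # second-stage DCT coefficients
    c[np.argsort(np.abs(c))[:-keep]] = 0  # keep only the largest few
    return sparse_part + T.T @ c          # reconstructed residual

# Demo: a residual that is exactly one (hypothetical) atom is captured
# entirely by the sparse layer, leaving nothing for the DCT stage.
D = np.vstack([np.eye(8)[:2], np.ones(8) / np.sqrt(8)])
r = 3 * np.ones(8) / np.sqrt(8)
print(np.allclose(two_layer_code(r, D, n_atoms=1), r))   # -> True
```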
Residual stress measurement for injection molded components
Directory of Open Access Journals (Sweden)
Achyut Adhikari
2016-07-01
Full Text Available Residual stress induced during manufacturing of injection molded components such as polymethyl methacrylate (PMMA) affects the mechanical and optical properties of these components. These residual stresses can be visualized and quantified by measuring their birefringence. In this paper, a low birefringence polariscope (LBP) is used to measure the whole-field residual stress distribution of these injection molded specimens. A detailed analytical and experimental study is conducted to quantify the residual stress measurement in these materials. A commercial birefringence measurement system was used to validate the results obtained with our measurement system. This study can help in material diagnosis for quality and manufacturing purposes and be useful for understanding residual stress in imaging or other applications.
Novel medical image enhancement algorithms
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
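The backbone filter of the first algorithm can be sketched as follows; the 3×3 window and trim level are illustrative choices, not the paper's settings. The filter sorts each neighbourhood, discards the extremes, and averages the rest, which suppresses impulse noise while preserving local means.

```python
import numpy as np

def alpha_trimmed_mean(img, trim=2):
    """Sort each 3x3 window, drop the `trim` smallest and largest
    values, and average the remainder."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = np.sort(padded[i:i + 3, j:j + 3].ravel())
            out[i, j] = window[trim:9 - trim].mean()
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],      # impulse noise in the centre
                  [10, 10, 10]], dtype=float)
smoothed = alpha_trimmed_mean(noisy)
```

The 255 outlier falls into the trimmed tail of every window, so it is removed without blurring the uniform background, which a plain mean filter would smear.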
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
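As a taste of the "shift-and-add" family covered in Part II, the following sketch implements CORDIC in rotation mode, computing sine and cosine from a small arctangent table using only additions and scalings by powers of two (the shifts, in hardware). The iteration count is an illustrative choice.

```python
import math

ITER = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]

# Precomputed gain compensation so the rotated vector ends at unit length.
K = 1.0
for i in range(ITER):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC: drive the residual angle z to zero by
    micro-rotations of +/- atan(2^-i)."""
    x, y, z = K, 0.0, theta
    for i in range(ITER):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x          # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.5)
```

Each micro-rotation uses only a table lookup and scalings by 2^-i, which is why the scheme suits hardware without multipliers; convergence holds for |theta| up to roughly 1.74 rad.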
Residual stresses in zircaloy welds
International Nuclear Information System (INIS)
Santisteban, J. R.; Fernandez, L; Vizcaino, P.; Banchik, A.D.; Samper, R; Martinez, R. L; Almer, J; Motta, A.T.; Colas, K.B; Kerr, M.; Daymond, M.R
2009-01-01
Welds in zirconium-based alloys are susceptible to hydrogen embrittlement, as H enters the material due to dissociation of water. The yield strain for hydride cracking has a complex dependence on H concentration, stress state and texture. The large thermal gradients produced by the applied heat drastically change the texture of the material in the heat-affected zone, enhancing the susceptibility to delayed hydride cracking. Normally hydrides tend to form as platelets that are parallel to the normal direction, but when welding plates, hydride platelets may form on cooling with their planes parallel to the weld and through the thickness of the plates. If, in addition, there are significant tensile stresses, the susceptibility of the heat-affected zone to delayed hydride cracking will be increased. Here we have measured the macroscopic and microscopic residual stresses that appear after plasma welding of two 6 mm thick Zircaloy-4 plates. The measurements were based on neutron and synchrotron diffraction experiments performed at the ISIS Facility, UK, and at the Advanced Photon Source, USA, respectively. The experiments allowed assessing the effect of a post-weld heat treatment consisting of a steady increase in temperature from room temperature to 450 °C over a period of 4.5 hours, followed by cooling at an equivalent rate. Peak tensile stresses of (175 ± 10) MPa along the longitudinal direction were found in the as-welded specimen, which were moderately reduced to (150 ± 10) MPa after the heat treatment. The parent material showed intergranular stresses of (56 ± 4) MPa, which disappeared on entering the heat-affected zone. In-situ experiments during thermal cycling of the material showed that these intergranular stresses result from the anisotropy of the thermal expansion coefficient of the hexagonal crystal lattice.
Residual complaints after neuralgic amyotrophy.
Cup, Edith H; Ijspeert, Jos; Janssen, Renske J; Bussemaker-Beumer, Chaska; Jacobs, Joost; Pieterse, Allan J; van der Linde, Harmen; van Alfen, Nens
2013-01-01
To develop recommendations regarding outcome measures and topics to be addressed in rehabilitation for persons with neuralgic amyotrophy (NA), this study explored which functions and activities are related to persisting pain in NA and which questionnaires best capture these factors. A questionnaire-based survey of 2 cross-sectional cohorts: one of patients visiting the neurology outpatient clinic and one of patients seen at a multidisciplinary plexus clinic. Two tertiary referral clinics based in the Department of Neurology and Rehabilitation of a university medical center provided the data. A referred sample of patients (N=248) with either idiopathic or hereditary NA who fulfilled the criteria for this disorder, in whom the last episode of NA had occurred at least 6 months earlier and had included brachial plexus involvement. Not applicable. Two custom clinical screening questionnaires were used, as well as the Shoulder Rating Questionnaire-Dutch Language Version, the Shoulder Pain and Disability Index (SPADI), the Shoulder Disability Questionnaire (SDQ), and the Overall Disability Sum Score. The survey confirms the high prevalence of persisting pain and impairments. More than half of the patients were restricted by pain, while 60% of those without pain experienced residual paresis. Correlations show an intimate relation between pain, scapular instability, problems with overhead activities, and increased fatigability. A standard physical therapy approach was ineffective or aggravated symptoms in more than 50%. Pain and fatigue are strongly correlated with persisting scapular instability and increased fatigability of the affected muscles in NA. Our results suggest that an integrated rehabilitation approach is needed in which all of these factors are addressed. We further recommend using the SPADI and SDQ in future studies to evaluate the natural course and treatment effects in NA. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Residual DPCM about Motion Compensated Residual Signal for H.264 Lossless Coding
Han, Ki-Hun; Rao, Kamisetty R.; Lee, Yung-Lyul
In this letter, a new Inter lossless coding method based on residual DPCM (Differential Pulse Code Modulation) is proposed to improve the compression ratio in the H.264 standard. Since the spatial correlation in a residual block can be further exploited among the residual signals after motion estimation/compensation, horizontal or vertical DPCM can be applied to the residual signals to further reduce their magnitudes. The proposed method reduces the average bitrate by 3.5% compared with the Inter lossless coding of the H.264 standard.
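The horizontal variant can be sketched in a few lines: keep the first sample of each row and replace the rest with neighbour differences, which the decoder inverts exactly with a running sum. The residual block below is an illustrative placeholder.

```python
import numpy as np

residual = np.array([[12, 13, 15, 14],
                     [ 8,  9,  9, 10]])

# Horizontal DPCM: keep the first column, replace the rest by differences.
dpcm = residual.copy()
dpcm[:, 1:] = residual[:, 1:] - residual[:, :-1]

# Lossless inverse: cumulative sum along each row restores the block.
restored = np.cumsum(dpcm, axis=1)
```

The differenced block has much smaller magnitudes than the original, which is exactly what makes subsequent entropy coding cheaper while the round trip stays bit-exact.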
A versatile multi-objective FLUKA optimization using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Vlachoudis Vasilis
2017-01-01
Full Text Available Quite often, Monte Carlo simulation studies require a multi-phase-space optimization, a complicated task relying heavily on the operator's experience and judgment. Examples of such calculations are shielding calculations with stringent constraints on cost, residual dose, material properties and available space, or, in the medical field, optimizing the dose delivered to a patient under hadron treatment. The present paper describes our implementation, inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective genetic algorithm to facilitate the search for the optimum solution.
Detecting circumbinary planets: A new quasi-periodic search algorithm
Directory of Open Access Journals (Sweden)
Pollacco D.
2013-04-01
Full Text Available We present a search method based around the grouping of data residuals, suitable for the detection of many quasi-periodic signals. Combined with an efficient and easily implemented method to predict the maximum transit timing variations of a transiting circumbinary exoplanet, we form a fast search algorithm for such planets. We here target the Kepler dataset in particular, where all the transiting examples of circumbinary planets have been found to date. The method is presented and demonstrated on two known systems in the Kepler data.
Novel feature for catalytic protein residues reflecting interactions with other residues.
Directory of Open Access Journals (Sweden)
Yizhou Li
Full Text Available Owing to their potential for systematic analysis, complex networks have been widely used in proteomics. Representing a protein structure as a topology network provides novel insight into understanding protein folding mechanisms, stability and function. Here, we develop a new feature to reveal correlations between residues using a protein structure network. In an original attempt to quantify the effects of several key residues on catalytic residues, a power function was used to model interactions between residues. The results indicate that focusing on a few residues is a feasible approach to identifying catalytic residues. The spatial environment surrounding a catalytic residue was analyzed in a layered manner. We present evidence that (i) correlation between residues is related to their distance apart, with most environmental parameters of the outer layers making a smaller contribution to prediction, and (ii) catalytic residues tend to be located near key positions in enzyme folds. Feature analysis revealed satisfactory performance for our features, which were combined with several conventional features in a prediction model for catalytic residues using a comprehensive data set from the Catalytic Site Atlas. Values of 88.6 for sensitivity and 88.4 for specificity were obtained by 10-fold cross-validation. These results suggest that these features reveal the mutual dependence of residues and are promising for further study of structure-function relationships.
Portable Health Algorithms Test System
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using a differential evolution algorithm. The procedure used to achieve the parameter identification of the experimental system consisted of input/output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10^-5. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
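The least-squares step of this procedure can be sketched as follows for a first-order ARX model; the true parameters, noise level, and PRBS-like input below are assumptions for illustration, not the paper's experimental data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
u = np.sign(rng.standard_normal(N))       # PRBS-like +/-1 input
y = np.zeros(N)
a_true, b_true = 0.8, 0.5                 # assumed "true" plant
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

# Regressor matrix: each row is [y[k-1], u[k-1]] for the model
#   y[k] = a*y[k-1] + b*u[k-1]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

y_pred = Phi @ theta
mse = np.mean((y[1:] - y_pred) ** 2)      # validation metric, as in the paper
```

A DE-based estimate would search the same parameter space with mutation and crossover instead of solving the normal equations; the MSE comparison at the end is the common yardstick.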
Prediction of machining induced residual stresses
Pramod, Monangi; Reddy, Yarkareddy Gopi; Prakash Marimuthu, K.
2017-07-01
Whenever a component is machined, residual stresses are induced in it. These residual stresses reduce its fatigue life, corrosion resistance and wear resistance. It is therefore important to predict and control machining-induced residual stress. A lot of research has been carried out in this area over the past decade. This paper aims at the prediction of residual stresses during machining of Ti-6Al-4V. A model was developed, and the behavior of residual stresses under various combinations of cutting conditions, such as speed, feed and depth of cut, was simulated using a finite element model. The present work deals with the development of a thermo-mechanical model to predict machining-induced residual stresses in a titanium alloy. The simulation results are in good agreement with published results, against which they were validated. Future work involves optimization of the cutting parameters that affect the machining-induced residual stresses.
Learning from nature: Nature-inspired algorithms
DEFF Research Database (Denmark)
Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin
2016-01-01
During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...
Residual stress field of ballised holes
International Nuclear Information System (INIS)
Lai, Man On; He, Zhimin
2012-01-01
Ballising, involving pushing a slightly over-sized ball made of hard material through a hole, is a kind of cold working process. Applying the ballising process to fastener holes produces compressive residual stress at the edge of the holes, and therefore increases the fatigue life of the components or structures. Quantification of the residual stress field is critical to defining and controlling the ballising process. In this article, ballised holes are modeled as cold-expanded holes. Elastic-perfectly plastic theory is employed to analyze holes subjected to the cold expansion process. For theoretical simplification, an axially symmetrical thin plate with a cold-expanded hole is assumed. The elastic-plastic boundaries and residual stress distribution surrounding the cold-expanded hole are derived. With this analysis, the residual stress field can be obtained from the actual cold expansion process, in which only the diameters of the hole before and after cold expansion need to be measured. As it is a non-destructive method, it provides a convenient way to estimate the elastic-plastic boundaries and residual stresses of cold-worked holes. The approach is later extended to the case involving two cold-worked holes. A ballised hole is regarded as a cold-expanded hole and is therefore investigated by this approach. Specimens ballised with different interference levels are investigated, and the effects of interference levels and specimen size on residual stresses are studied. The overall residual stresses of plates with two ballised holes are obtained by superposing the residual stresses induced on a single ballised hole. The effects of the distance between the centers of the two holes, with different interference levels, on the residual stress field are revealed.
Complex networks an algorithmic perspective
Erciyes, Kayhan
2014-01-01
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph-theoretic knowledge needed by every reader...
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of a schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
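A minimal sketch of the search process described above, on the classic all-ones ("OneMax") toy problem; population size, generation count, and mutation rate are illustrative choices.

```python
import random

random.seed(0)
L, POP, GENS = 20, 30, 60          # bit-string length, population, generations

def fitness(ind):
    return sum(ind)                # OneMax: count the 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    for _ in range(POP):
        # Tournament selection of two parents.
        p1 = max(random.sample(pop, 3), key=fitness)
        p2 = max(random.sample(pop, 3), key=fitness)
        # One-point crossover, then per-bit mutation.
        cut = random.randrange(1, L)
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < 0.01) for b in child]
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
```

Selection pressure plus crossover propagates good schemata (runs of 1s) through the population, which is the mechanism the report's schema discussion formalizes.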
Instance-specific algorithm configuration
Malitsky, Yuri
2014-01-01
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014.
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in its computational power and operating principle, named the quantum computer, are considered. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, together with some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among these, a special place is held by Shor's algorithm for integer factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described.
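Grover's quadratic speedup can be illustrated with a small state-vector simulation: roughly π/4·√N iterations of oracle-plus-diffusion concentrate the amplitude on the marked item, versus ~N classical queries. N and the marked index below are arbitrary illustrative choices.

```python
import math
import numpy as np

N = 64                       # search space size (6 qubits)
marked = 42                  # index the oracle "recognises"

state = np.full(N, 1.0 / math.sqrt(N))   # uniform superposition
iters = int(round(math.pi / 4 * math.sqrt(N)))
for _ in range(iters):
    state[marked] *= -1.0                # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

prob = state[marked] ** 2                # probability of measuring `marked`
```

After only six iterations the success probability exceeds 99%, whereas a single classical query succeeds with probability 1/64; that gap is the algorithm's whole point.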
Algorithms Design Techniques and Analysis
Alsuwaiyel, M H
1999-01-01
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem on one's own using ad hoc techniques or follow techniques that have produced efficient solutions to similar problems. This requires an understanding of the various algorithm design techniques and of how and when to use them to formulate solutions, as well as the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm design...
Subcubic Control Flow Analysis Algorithms
DEFF Research Database (Denmark)
Midtgaard, Jan; Van Horn, David
We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the ``cubic bottleneck,'' we apply known set compression techniques to obtain an algorithm ... that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...
Guidelines for selection and presentation of residue values of pesticides
Velde-Koerts T van der; Hoeven-Arentzen PH van; Ossendorp BC; RIVM-SIR
2004-01-01
Pesticide residue assessments are executed to establish legal limits, called Maximum Residue Limits (MRLs). MRLs are derived from the results of pesticide residue trials, which are performed according to critical Good Agricultural Practice. Only one residue value per residue trial may be
RESIDUAL RISK ASSESSMENT: GAS DISTRIBUTION STAGE ...
This document describes the residual risk assessment for the Gas Distribution Stage 1 source category. For stationary sources, section 112(f) of the Clean Air Act requires EPA to assess risks to human health and the environment following implementation of technology-based control standards. If these technology-based control standards do not provide an ample margin of safety, then EPA is required to promulgate additional standards. This document describes the methodology and results of the residual risk assessment performed for the Gas Distribution Stage 1 source category. The results of this analysis will assist EPA in determining whether a residual risk rule for this source category is appropriate.
Properties of residuals for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Møller, Jesper; Pakes, A. G.
2008-01-01
For any point process in R^d that has a Papangelou conditional intensity λ, we define a random measure of 'innovations' which has mean zero. When the point process model parameters are estimated from data, there is an analogous random measure of 'residuals'. We analyse properties of the innovations ... and residuals, including first and second moments, conditional independence, a martingale property, and lack of correlation. Some large-sample asymptotics are studied. We derive the marginal distribution of smoothed residuals by solving a distributional equivalence.
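The mean-zero property of innovations can be checked numerically in the simplest case, an inhomogeneous Poisson process on [0, 1]: the raw innovation N(B) − ∫_B λ averages to zero over repeated simulations. The intensity λ(t) = 100t below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_max = 100.0          # upper bound on lam(t) = 100*t over [0, 1]
integral = 50.0          # integral of lam over [0, 1]

def simulate_count():
    """Thinning: draw a homogeneous Poisson(lam_max) pattern and keep
    each point t with probability lam(t)/lam_max."""
    n = rng.poisson(lam_max)
    t = rng.uniform(0.0, 1.0, n)
    keep = rng.uniform(0.0, 1.0, n) < (100.0 * t) / lam_max
    return keep.sum()

innovations = np.array([simulate_count() - integral for _ in range(2000)])
mean_innovation = innovations.mean()
```

With 2000 replicates the standard error of the mean is about 0.16, so the empirical mean sits near zero, consistent with the innovation measure having mean zero.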
Multilevel acceleration of scattering-source iterations with application to electron transport
Directory of Open Access Journals (Sweden)
Clif Drumm
2017-09-01
Full Text Available Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and to problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations are highly problem dependent, but speedup factors of around 10 have been observed in typical applications.
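The GMRES option can be sketched with off-the-shelf tools; here SciPy's GMRES solves a generic sparse system with an incomplete-LU preconditioner standing in for the low-order accelerator. The matrix is an illustrative shifted 1-D Laplacian, not a transport operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Shifted 1-D Laplacian as a generic stand-in for a discretised operator.
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete LU factorisation used as a right-hand preconditioner,
# playing the role of the low-order accelerator in the text.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)          # info == 0 signals convergence
residual_norm = np.linalg.norm(b - A @ x)
```

The preconditioned iteration converges in a handful of Krylov steps; without M, GMRES on the same system needs noticeably more iterations, which is the effect TSA preconditioning exploits at much larger scale.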
Video compressed sensing using iterative self-similarity modeling and residual reconstruction
Kim, Yookyung; Oh, Han; Bilgin, Ali
2013-04-01
Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.
Lin, Yufei; Yang, Zengling; Liang, Hao; Li, Shouxue; Fan, Xia; Xiao, Zhiming
2018-03-12
Antibiotic mycelial residues (AMRs) added to animal feeds easily lead to drug resistance that affects human health and environment. However, there is a lack of effective detection methods, especially a fast and convenient detection technology, to distinguish AMRs from other components in animal feeds. To develop effective detection methods, two types of global Mahalanobis distance (GH) algorithms based on near-infrared microscopy (NIRM) imaging are proposed. The aim of this study is to investigate the feasibility of using NIRM imaging to identify AMRs in soybean meals. We prepared 15 mixed samples containing 5% AMRs using three types of soybean meals and four types of AMRs. The GH algorithm was used to identify non-soybean meals among the mixed samples. The hierarchical cluster analysis was employed to verify the recognition accuracy. The results indicate that use of the GH algorithm could identify soybean meals with AMR at a level as low as 5%.
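The distance underlying the GH screen can be sketched as follows: points far from a reference cloud, in the metric defined by the cloud's covariance, are flagged as foreign. The 2-D data below are illustrative stand-ins for spectral features, not the paper's NIRM measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
# Reference cloud standing in for "known soybean meal" spectra.
reference = rng.multivariate_normal([0.0, 0.0],
                                    [[1.0, 0.5], [0.5, 2.0]], 500)

mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of x from the reference cloud."""
    d = x - mu
    return float(d @ cov_inv @ d)

inlier = np.array([0.2, -0.1])     # looks like the reference material
outlier = np.array([8.0, -9.0])    # stand-in for a foreign (AMR-like) point
```

Thresholding this distance is the screening step: the outlier scores far above the inlier, so it would be flagged as not belonging to the reference material.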
Keppel, Theodore R.; Weis, David D.
2015-04-01
Measurement of residual structure in intrinsically disordered proteins can provide insights into the mechanisms by which such proteins undergo coupled binding and folding. The present work describes an approach to measure residual structure in disordered proteins using millisecond hydrogen/deuterium (H/D) exchange in a conventional bottom-up peptide-based workflow. We used the exchange mid-point, relative to a totally deuterated control, to quantify the rate of H/D exchange in each peptide. A weighted residue-by-residue average of these midpoints was used to map the extent of residual structure at near single-residue resolution. We validated this approach both by simulating a disordered protein and experimentally using the p300 binding domain of ACTR, a model disordered protein already well-characterized by other approaches. Secondary structure elements mapped in the present work are in good agreement with prior nuclear magnetic resonance measurements. The new approach was somewhat limited by a loss of spatial resolution and subject to artifacts because of heterogeneities in intrinsic exchange. Approaches to correct these limitations are discussed.
Sonnberg, Saskia; Armenta, Sergio; Garrigues, Salvador; de la Guardia, Miguel
2015-08-01
Ion-mobility spectroscopy (IMS) was evaluated as a high-throughput, cheap, and efficient analytical tool for detecting residues of tetrahydrocannabinol (THC) on hands. Regarding the usefulness of hand residues as potential samples for determining THC handling and abuse, we studied the correlation between data obtained from cannabis consumers who were classified as positive after saliva analysis and from those who were classified as positive on the basis of the information from hand-residue analysis. Sampling consisted of wiping the hands with borosilicate glass microfiber filters and introducing these directly into the IMS after thermal desorption. The possibility of false positive responses, resulting from the presence of other compounds with a similar drift time to THC, was evaluated and minimised by applying the truncated negative second-derivative algorithm. The possibility of false negative responses, mainly caused by competitive ionisation resulting from nicotine, was also studied. Graphical abstract THC residues: from hands to analytical signals.
Adaptive Maneuvering Target Tracking Algorithm
Directory of Open Access Journals (Sweden)
Chunling Wu
2014-07-01
Full Text Available Based on the current statistical model, a new adaptive maneuvering target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the merits of high tracking precision that the current statistical model and the strong tracking filter (STF) have in tracking maneuvering targets, with the following modifications. First, because STF achieves its excellent performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, the new algorithm modifies the prediction error covariance matrix and the fading factor to improve tracking precision in both the maneuvering and non-maneuvering segments. Second, the estimation error covariance matrix is calculated using the Joseph form, which is numerically more stable and robust. Monte Carlo simulations show that the CS-MSTF algorithm performs better than CS-STF and can estimate efficiently.
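The Joseph-form update mentioned above can be sketched on a toy position-only measurement; unlike the short form P+ = (I − KH)P, the Joseph form (I − KH)P(I − KH)ᵀ + KRKᵀ stays symmetric positive semidefinite even under rounding. The matrices below are a generic illustration, not the CS-MSTF model.

```python
import numpy as np

P = np.array([[4.0, 1.0], [1.0, 2.0]])   # prior covariance (position, velocity)
H = np.array([[1.0, 0.0]])               # measure position only
R = np.array([[0.5]])                    # measurement noise covariance

S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain

I = np.eye(2)
P_joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
P_simple = (I - K @ H) @ P               # textbook short form
```

With the optimal gain the two forms agree analytically; the Joseph form's advantage is that its symmetric "sandwich" structure cannot produce an indefinite covariance after accumulated floating-point error.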
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
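The brief's exact recursions are not reproduced here; as a related illustration, the standard recursive least-squares update below refines coefficient estimates sample by sample without refitting from scratch, the same duplication-avoiding idea. The true model and noise level are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
true_theta = np.array([2.0, -1.0])        # assumed model: y = 2*x - 1

theta = np.zeros(2)
P = np.eye(2) * 1000.0                    # large initial uncertainty
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    phi = np.array([x, 1.0])              # regressor [x, 1]
    y = true_theta @ phi + 0.01 * rng.standard_normal()
    # Rank-one RLS update: gain, correction, covariance shrink.
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi) @ P
```

Each update costs O(p^2) for p coefficients, so adding a data point never requires re-solving the full least-squares problem.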
Designing algorithms using CAD technologies
Directory of Open Access Journals (Sweden)
Alin IORDACHE
2008-01-01
Full Text Available A representative example of an eLearning-platform modular application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain, algorithmics. Logical diagrams are a graphical representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, which are called blocks and are connected to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.
A quantum causal discovery algorithm
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Multiagent scheduling models and algorithms
Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur
2014-01-01
This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.
Efficient Algorithms for Subgraph Listing
Directory of Open Access Journals (Sweden)
Niklas Zechner
2014-05-01
Full Text Available Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Gąsieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.
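As a concrete point of reference for the abstract above, the following sketch lists all triangles in an undirected graph by intersecting neighbour sets under a fixed vertex ordering. This is the classical baseline, not the output-sensitive algorithm the paper derives from Boolean matrix products.

```python
def list_triangles(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected graph)."""
    order = {v: i for i, v in enumerate(sorted(adj))}  # fix a vertex ordering
    triangles = []
    for u in adj:
        for v in adj[u]:
            if order[v] <= order[u]:
                continue                      # visit each edge once, as u < v
            for w in adj[u] & adj[v]:         # common neighbours close triangles
                if order[w] > order[v]:       # enforce u < v < w
                    triangles.append((u, v, w))
    return triangles

# the complete graph K4 contains exactly 4 triangles
k4 = {a: {b for b in "abcd" if b != a} for a in "abcd"}
print(len(list_triangles(k4)))  # 4
```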
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
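To make the setting concrete: the usual predictive stochastic simulation algorithm (Gillespie's SSA) runs a master equation forward in time. The sketch below does that for a toy birth-death process and then infers likely initial states by rejection sampling against a known final state; this is only a crude stand-in for the retrodictive algorithm of the paper, and all rates are invented.

```python
import random

def gillespie(n, b, d, t_end, rng):
    """Forward SSA for a birth-death process: birth rate b, death rate d*n."""
    t = 0.0
    while True:
        total = b + d * n                    # total event rate (b > 0 assumed)
        t += rng.expovariate(total)
        if t > t_end:
            return n
        n += 1 if rng.random() < b / total else -1

def retrodict(final_n, prior, b, d, t_end, samples, rng):
    """Estimate P(initial state | final state) by crude rejection sampling."""
    hits = {}
    for _ in range(samples):
        n0 = rng.choice(prior)
        if gillespie(n0, b, d, t_end, rng) == final_n:
            hits[n0] = hits.get(n0, 0) + 1
    total = sum(hits.values())
    return {n0: count / total for n0, count in hits.items()}

rng = random.Random(1)
post = retrodict(final_n=5, prior=[0, 5, 10], b=1.0, d=0.1, t_end=0.5,
                 samples=2000, rng=rng)
print(max(post, key=post.get))  # initial state most compatible with observing 5
```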
Autonomous algorithms for image restoration
Griniasty, Meir
1994-01-01
We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time consuming than its simulated annealing alternative. We apply the theory to several architectures and compare their performances.
New algorithms for parallel MRI
International Nuclear Information System (INIS)
Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A
2008-01-01
Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms such as SENSE and GRAPPA and their variants, we treat the problem as a non-linear inverse problem. To avoid cost-intensive derivative computations we use Landweber-Kaczmarz iteration, and to improve the overall results we impose some additional sparsity constraints.
When the greedy algorithm fails
Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders
2004-01-01
We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting s...
A* Algorithm for Graphics Processors
Inam, Rafia; Cederman, Daniel; Tsigas, Philippas
2010-01-01
Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...
Algorithm for programming function generators
International Nuclear Information System (INIS)
Bozoki, E.
1981-01-01
The present paper deals with a mathematical problem encountered when driving a fully programmable μ-processor-controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional (hardware-imposed) restrictions are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
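A minimal sketch of the core idea, ignoring the hardware-imposed restrictions the paper handles: greedily extend each straight segment until the chord deviates from the sampled function by more than a tolerance.

```python
def segment(xs, ys, tol):
    """Greedy piecewise-linear fit: return indices of segment endpoints."""
    breaks = [0]
    i = 0
    while i < len(xs) - 1:
        j = i + 1
        while j + 1 < len(xs):
            # chord from point i to candidate endpoint j+1
            x0, y0, x1, y1 = xs[i], ys[i], xs[j + 1], ys[j + 1]
            slope = (y1 - y0) / (x1 - x0)
            err = max(abs(y0 + slope * (xs[k] - x0) - ys[k])
                      for k in range(i + 1, j + 1))
            if err > tol:
                break                      # extending would exceed the tolerance
            j += 1
        breaks.append(j)
        i = j
    return breaks

xs = [k / 50 for k in range(101)]          # samples of x on [0, 2]
ys = [x * x for x in xs]                   # approximate y = x^2
print(segment(xs, ys, tol=0.01))
```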
Efficient identification of critical residues based only on protein structure by network analysis.
Directory of Open Access Journals (Sweden)
Michael P Cusack
2007-05-01
Full Text Available Despite the increasing number of published protein structures, and the fact that each protein's function relies on its three-dimensional structure, there is limited access to automatic programs for identifying critical residues from the protein structure, compared with those based on protein sequence. Here we present a new algorithm based on network analysis applied exclusively to protein structures to identify critical residues. Our results show that this method identifies critical residues for protein function with high reliability and improves on automatic sequence-based approaches and previous network-based approaches. The reliability of the method depends on the conformational diversity screened for the protein of interest. We have designed a web site to give access to this software at http://bis.ifc.unam.mx/jamming/. In summary, a new method is presented that relates critical residues for protein function with the most traversed residues in networks derived from protein structures. A unique feature of the method is the inclusion of the conformational diversity of proteins in the prediction, thus reproducing a basic feature of the structure/function relationship of proteins.
Cascade Error Projection: A New Learning Algorithm
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Efficient particle filtering through residual nudging
Luo, Xiaodong
2013-05-15
We introduce an auxiliary technique, called residual nudging, to the particle filter to enhance its performance in cases where it performs poorly. The main idea of residual nudging is to monitor and, if necessary, adjust the residual norm of a state estimate in the observation space so that it does not exceed a pre-specified threshold. We suggest a rule to choose the pre-specified threshold, and construct a state estimate accordingly to achieve this objective. Numerical experiments suggest that introducing residual nudging to a particle filter may (substantially) improve its performance, in terms of filter accuracy and/or stability against divergence, especially when the particle filter is implemented with a relatively small number of particles. © 2013 Royal Meteorological Society.
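The monitoring rule can be sketched as follows for the special case of an identity observation operator; the paper treats general observations and gives a principled threshold choice, whereas the threshold here is arbitrary.

```python
import math

def nudge(x_est, y_obs, threshold):
    """Blend x_est toward y_obs until the residual norm meets the threshold."""
    r = [yo - xo for yo, xo in zip(y_obs, x_est)]
    norm = math.sqrt(sum(ri * ri for ri in r))
    if norm <= threshold:
        return x_est                         # estimate accepted unchanged
    c = threshold / norm                     # shrink the residual to the threshold
    return [c * xo + (1 - c) * yo for xo, yo in zip(x_est, y_obs)]

# the residual of [0, 0] against the observation [3, 4] has norm 5 > 1,
# so the estimate is nudged until the residual norm equals the threshold
x_new = nudge([0.0, 0.0], [3.0, 4.0], threshold=1.0)
print(x_new)
```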
Surgical treatment for residual or recurrent strabismus
Directory of Open Access Journals (Sweden)
Tao Wang
2014-12-01
Full Text Available Although surgical treatment is a relatively effective and predictable method for correcting residual or recurrent strabismus, with techniques such as posterior fixation sutures, medial rectus marginal myotomy, unilateral or bilateral rectus re-recession and resection, unilateral lateral rectus recession and adjustable sutures, no standard protocol has been established for the choice of surgical procedure. Different surgical approaches have been recommended for correcting residual or recurrent strabismus. The choice of procedure depends on the pattern and dosages of the previous operation, the residual or recurrent angle of deviation, and the operator's preference and experience. This review attempts to outline recent publications and current opinion in the management of residual or recurrent esotropia and exotropia.
Earthworm tolerance to residual agricultural pesticide contamination
DEFF Research Database (Denmark)
Givaudan, Nicolas; Binet, Françoise; Le Bot, Barbara
2014-01-01
This study investigates if acclimatization to residual pesticide contamination in agricultural soils is reflected in detoxification, antioxidant enzyme activities and energy budget of earthworms. Five fields within a joint agricultural area exhibited different chemical and farming histories from...
Cyolane residues in milk of lactating goats
International Nuclear Information System (INIS)
Zayed, S.M.A.D.; Osman, A.; Fakhr, I.M.I.
1981-01-01
Consecutive feeding of lactating goats with 14C-alkyl-labelled cyolane for 5 days at dietary levels of 8 and 16 ppm resulted in the appearance of measurable insecticide residues in milk (0.02-0.04 mg/kg). The residue levels were markedly reduced after a withdrawal period of 7 days. Analysis of urine and milk residues showed the presence of similar metabolites in addition to the parent compound. The major part of the residue consisted of mono- and diethyl phosphate and two unknown hydrophilic metabolites. The erythrocyte cholinesterase activity was reduced to about 50% after 24 hours, whereas the plasma enzyme was only slightly affected. The animals remained symptom-free during the experimental period. (author)
On the residual properties of damaged FRC
Zerbino, R.; Torrijos, M. C.; Giaccio, G.
2017-09-01
The residual behaviour of Fibre Reinforced Concrete (FRC) is discussed on the basis of two selected cases of concrete degradation: exposure to high temperatures and the development of alkali-silica reactions. In addition, bearing in mind that the failure mechanism in FRC is strongly related to the fibre pull-out strength, the bond strength in damaged matrices was examined, leading to the conclusion that the residual bond strength is less affected than the matrix strength. As the damage increases, the compressive strength and the modulus of elasticity decrease, the modulus of elasticity being the most affected. The incorporation of fibres produced no significant changes in residual behaviour compared with previous experience on plain damaged concrete. Regarding the tensile behaviour, although the first peak decreases as the damage increases, the residual stresses remain almost unaffected even for severely damaged FRC.
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.
Rotational Invariant Dimensionality Reduction Algorithms.
Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David
2017-11-01
A common intrinsic limitation of the traditional subspace learning methods is the sensitivity to the outliers and the image variations of the object since they use the norm as the metric. In this paper, a series of methods based on the -norm are proposed for linear dimensionality reduction. Since the -norm based objective function is robust to the image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide the comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous norm based subspace learning algorithms.
Artificial Flora (AF) Optimization Algorithm
Directory of Open Access Journals (Sweden)
Long Cheng
2018-02-01
Full Text Available Inspired by the process of migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm. The algorithm can be used to solve complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let its offspring find the most suitable environment. This stochastic process is easy to imitate and the spreading space is vast; it is therefore well suited to intelligent optimization algorithms. First, the algorithm randomly generates the original plant, including its position and propagation distance. Then, the position and propagation distance of the original plant are substituted as parameters into the propagation function to generate offspring plants. Finally, the optimal offspring is selected as the new original plant through the selection function, and the previous original plant becomes the former plant. The iteration continues until the optimal solution is found. Six classical evaluation functions are used as benchmark functions. The simulation results show that the proposed algorithm has higher accuracy and stability than classical particle swarm optimization and the artificial bee colony algorithm.
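A schematic one-dimensional version of the generate-propagate-select loop described above; the parameter names and the distance-shrinking rule are illustrative, not the paper's exact formulas.

```python
import random

def af_minimise(f, lo, hi, n_seeds=20, iters=60, rng=None):
    """Minimise f on [lo, hi] with an artificial-flora-style search."""
    rng = rng or random.Random(0)
    plant = rng.uniform(lo, hi)          # original plant position
    dist = (hi - lo) / 2                 # propagation distance
    best = plant
    for _ in range(iters):
        # propagation: spread seeds around the plant within its distance
        seeds = [min(hi, max(lo, plant + rng.gauss(0, dist)))
                 for _ in range(n_seeds)]
        plant = min(seeds, key=f)        # selection: fittest offspring survives
        if f(plant) < f(best):
            best = plant
        dist *= 0.9                      # shrink the spread as the search settles
    return best

x = af_minimise(lambda x: (x - 3) ** 2, lo=-10, hi=10)
print(round(x, 2))                       # close to the minimiser x = 3
```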
Neutron diffraction residual strain / stress measurements
International Nuclear Information System (INIS)
Paradowska, Anna
2012-01-01
Residual stresses affect the mechanical properties of materials and products, so it is essential to estimate them in practice in order to establish acceptable limits. Knowledge of the development of residual stresses in components at the various production stages (extrusion, rolling, machining, welding and heat treating) can be used to improve product reliability and performance. This short article gives an example relevant to the power industry using ANSTO's 'Kowari' neutron strain scanner.
Nitrogen mineralization from organic residues: research opportunities.
Cabrera, M L; Kissel, D E; Vigil, M F
2005-01-01
Research on nitrogen (N) mineralization from organic residues is important to understand N cycling in soils. Here we review research on factors controlling net N mineralization as well as research on laboratory and field modeling efforts, with the objective of highlighting areas with opportunities for additional research. Among the factors controlling net N mineralization are organic composition of the residue, soil temperature and water content, drying and rewetting events, and soil characteristics. Because C to N ratio of the residue cannot explain all the variability observed in N mineralization among residues, considerable effort has been dedicated to the identification of specific compounds that play critical roles in N mineralization. Spectroscopic techniques are promising tools to further identify these compounds. Many studies have evaluated the effect of temperature and soil water content on N mineralization, but most have concentrated on mineralization from soil organic matter, not from organic residues. Additional work should be conducted with different organic residues, paying particular attention to the interaction between soil temperature and water content. One- and two-pool exponential models have been used to model N mineralization under laboratory conditions, but some drawbacks make it difficult to identify definite pools of mineralizable N. Fixing rate constants has been used as a way to eliminate some of these drawbacks when modeling N mineralization from soil organic matter, and may be useful for modeling N mineralization from organic residues. Additional work with more complex simulation models is needed to simulate both gross N mineralization and immobilization to better estimate net N mineralized from organic residues.
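For reference, the one- and two-pool exponential models mentioned above have the closed form N_min(t) = N0(1 - e^(-kt)), summed over pools; the pool sizes and rate constants below are invented for illustration.

```python
import math

def one_pool(t, n0, k):
    """Cumulative N mineralised from a single first-order pool."""
    return n0 * (1 - math.exp(-k * t))

def two_pool(t, n1, k1, n2, k2):
    """Two-pool model: a fast pool (n1, k1) plus a slow pool (n2, k2)."""
    return one_pool(t, n1, k1) + one_pool(t, n2, k2)

# cumulative N mineralised (mg N / kg soil) over an incubation, in weeks
for week in (0, 4, 8, 16):
    print(week, round(two_pool(week, n1=40, k1=0.30, n2=120, k2=0.02), 1))
```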
Disposal of radioactive residuals requires careful planning
International Nuclear Information System (INIS)
Pontius, F.W.
1994-01-01
Radionuclides removed from source waters during water treatment become concentrated in residual liquids and sludges. Treatment technologies used to remove these contaminants from source waters may generate wastes that contain substantial radioactivity. Water systems that install one or more of these technologies in order to comply with the maximum contaminant levels (MCLs) eventually adopted must dispose of the residuals. Disposal of radionuclide-containing wastes can be especially difficult, depending on the nature and amount of radioactivity present
Plutonium fuel fabrication residues and wastes
International Nuclear Information System (INIS)
Arnal, T.; Cousinou, G.; Desille, H.
1982-04-01
This paper discusses the current situation in the fabrication plant at Cadarache with an annual plutonium throughput of several tons. Three major fabrication byproduct categories are defined in this plant: 1) scraps, directly recycled at the fabrication input station; 2) residues, byproducts recycled by chemical processes, or processed in washing and incineration stations; 3) wastes, placed in drums and evacuated directly to a waste conditioning station. The borderline between residues and wastes has yet to be precisely determined
Protein structure based prediction of catalytic residues
2013-01-01
Background Worldwide structural genomics projects continue to release new protein structures at an unprecedented pace, so far nearly 6000, but only about 60% of these proteins have any sort of functional annotation. Results We explored a range of features that can be used for the prediction of functional residues given a known three-dimensional structure. These features include various centrality measures of nodes in graphs of interacting residues: closeness, betweenness and page-rank centrality. We also analyzed the distance of functional amino acids to the general center of mass (GCM) of the structure, relative solvent accessibility (RSA), and the use of relative entropy as a measure of sequence conservation. From the selected features, neural networks were trained to identify catalytic residues. We found that using distance to the GCM together with amino acid type provides a good discriminant function, when combined independently with sequence conservation. Using an independent test set of 29 annotated protein structures, the method returned 411 of the initial 9262 residues as the most likely to be involved in function. The output 411 residues contain 70 of the annotated 111 catalytic residues. This represents an approximately 14-fold enrichment of catalytic residues on the entire input set (corresponding to a sensitivity of 63% and a precision of 17%), a performance competitive with that of other state-of-the-art methods. Conclusions We found that several of the graph based measures utilize the same underlying feature of protein structures, which can be simply and more effectively captured with the distance to GCM definition. This also has the added advantage of simplicity and easy implementation. Meanwhile, sequence conservation remains by far the most influential feature in identifying functional residues. We also found that due to the rapid changes in size and composition of sequence databases, conservation calculations must be recalibrated for specific
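The distance-to-GCM feature is straightforward to compute from coordinates; the toy coordinates and the "nearest first" ranking below are illustrative only (the paper combines this feature with amino acid type and sequence conservation in a neural network).

```python
import math

def gcm(coords):
    """General center of mass of a list of (x, y, z) points."""
    n = len(coords)
    return tuple(sum(c[i] for c in coords) / n for i in range(3))

def rank_by_gcm_distance(residues):
    """residues: dict name -> (x, y, z) of a representative atom (e.g. CA)."""
    centre = gcm(list(residues.values()))
    dist = {name: math.dist(xyz, centre) for name, xyz in residues.items()}
    return sorted(dist, key=dist.get)    # most central residues first

toy = {"ASP52": (0.5, 0.2, 0.1), "GLY7": (8.0, 7.5, 9.0),
       "HIS15": (1.0, -0.5, 0.3), "LYS96": (-7.0, 8.5, -6.0)}
print(rank_by_gcm_distance(toy)[0])      # residue nearest the centre of mass
```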
Residual stress measurement at Budapest Neutron Center
International Nuclear Information System (INIS)
Gyula, T.
2005-01-01
The use of residual stress measurements on different construction elements and the current capabilities of the Budapest Neutron Centre are presented. Components investigated to date include a gas turbine wheel, an axial compressor blade, a turbine blade and plastically deformed stainless steel. We demonstrated the use of neutron scattering (SANS, residual stress, diffraction) for investigating materials behaviour in order to analyze the processes occurring under different mechanical loadings. The direction of possible instrumental development is presented. (author)
Ensemble Kalman filtering with residual nudging
Luo, X.
2012-10-03
Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
Validation of welded joint residual stress simulation
International Nuclear Information System (INIS)
Computational mechanics is being increasingly applied to predict the state of residual stress in welded joints for nuclear power plant applications. Motives for undertaking such calculations include optimising the design of welded joints and weld procedures, assessing the effectiveness of mitigation processes, providing more realistic inputs to structural integrity assessments and underwriting safety cases for operating nuclear power plant. Fusion welding processes involve intense localised heating to melt the surfaces to be joined and introduction of molten weld filler metal. A complex residual stress field develops at the weld through solidification, differential thermal contraction, cyclic thermal plasticity, phase transformation and chemical diffusion processes. The calculation of weld residual stress involves detailed non-linear analyses where many assumptions and approximations have to be made. In consequence, the accuracy and reliability of solutions can be highly variable. This paper illustrates the degree of variability that can arise in weld residual stress simulation results and summarises the new R6 guidelines which aim to improve the reliability and accuracy of computational predictions. The requirements for validating weld simulations are reviewed where residual stresses are to be used in fracture mechanics analysis. This includes a discussion of how to obtain and interpret measurements from mock-ups, benchmark weldments and published data. Benchmark weldments are described that illustrate some of the issues and show how validation of numerical prediction of weld residual stress can be achieved. Finally, plans for developing the weld modelling guidelines and associated benchmarks are outlined
Residual Stresses in Thermoplastic Composites: A Review
Directory of Open Access Journals (Sweden)
M.M. Shokrieh
2008-12-01
Full Text Available Applications of thermoplastic composites have developed extensively. Thermoplastic composites have many advantages in comparison with thermoset composites: they can be melted and remolded many times, their manufacturing process is short, they produce very tough material, and their weldability and multiple recyclability are further advantages. The lack of knowledge of this group of composites is the main obstacle to their development. In this review, research in the field of residual stresses in thermoplastic composites is presented. First, a literature survey of the available research on residual stresses in thermoplastics and in thermoplastic composites reinforced with short fibers is compiled. A review of the available research on residual stresses in thermoplastic composites reinforced with long fibers is presented as well. The effects of residual stresses on these composites are discussed. Experimental techniques for the measurement of residual stresses in thermoplastic composites and methods for reducing existing residual stresses are studied.
Ensemble Kalman filtering with residual nudging
Directory of Open Access Journals (Sweden)
Xiaodong Luo
2012-10-01
Full Text Available Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
Residu Fungisida Tembaga (Cu) pada Pucuk Teh [Copper (Cu) Fungicide Residue on Tea Shoots]
Directory of Open Access Journals (Sweden)
Christanti Sumardiyono
1996-12-01
Full Text Available The study was done to determine the copper residue on tea resulting from blister blight control with copper fungicides. The experiment was done at Pagilaran Tea Plantation, Batang, Pekalongan. Tea plants were sprayed 8 times at 8-day intervals at dosages of 0, 75, 150, and 300 g/ha respectively. Shoot samples were taken at 8 and 16 days after spraying. The copper residue was analyzed by Atomic Absorption Spectrophotometry at 324 nm. The results showed that a higher spraying dosage gives a higher copper residue. At the dosage of 300 g/ha, a copper residue of 23.52 ppm was detected 8 days after spraying; the residue was reduced to 12.96 ppm at 16 days after spraying. At that dosage the blister blight disease intensity was reduced by 59.97%. The copper fungicide residue detected after blister blight control is not higher than the MRL (150 ppm).
Method for residual household waste composition studies.
Sahimaa, Olli; Hupponen, Mari; Horttanainen, Mika; Sorvari, Jaana
2015-12-01
The rising awareness of decreasing natural resources has brought forward the idea of a circular economy and resource efficiency in Europe. As a part of this movement, European countries have identified the need to monitor residual waste flows in order to make recycling more efficient. In Finland, studies on the composition of residual household waste have mostly been conducted using different methods, which makes the comparison of the results difficult. The aim of this study was to develop a reliable method for residual household waste composition studies. First, a literature review on European study methods was performed. Also, 19 Finnish waste composition studies were compared in order to identify the shortcomings of the current Finnish residual household waste composition data. Moreover, the information needs of different waste management authorities concerning residual household waste were studied through a survey and personal interviews. Stratification, sampling, the classification of fractions and statistical analysis were identified as the key factors in a residual household waste composition study. The area studied should be divided into non-overlapping strata in order to decrease the heterogeneity of waste and enable comparisons between different waste producers. A minimum of six subsamples, each 100 kg, from each stratum should be sorted. Confidence intervals for each waste category should be determined in order to evaluate the applicability of the results. A new three-level classification system was created based on Finnish stakeholders' information needs and compared to four other European waste composition study classifications. Copyright © 2015 Elsevier Ltd. All rights reserved.
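The statistical analysis step can be illustrated with a t-based 95% confidence interval for one waste category estimated from six subsamples, as recommended above; the percentages and the hard-coded t quantile are for illustration.

```python
import statistics

def mean_ci(samples, t_crit=2.571):          # t quantile for 95%, df = 5
    """Mean and 95% confidence interval of a small sample."""
    m = statistics.mean(samples)
    half = t_crit * statistics.stdev(samples) / len(samples) ** 0.5
    return m, m - half, m + half

# invented mass fractions (%) of biowaste in six 100-kg subsamples
biowaste_pct = [34.2, 30.8, 36.5, 29.9, 33.1, 31.5]
m, lo, hi = mean_ci(biowaste_pct)
print(f"biowaste: {m:.1f}% (95% CI {lo:.1f}-{hi:.1f})")
```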
Detecting organic gunpowder residues from handgun use
MacCrehan, William A.; Ricketts, K. Michelle; Baltzersen, Richard A.; Rowe, Walter F.
1999-02-01
The gunpowder residues that remain after the use of handguns or improvised explosive devices pose a challenge for the forensic investigator. Can these residues be reliably linked to a specific gunpowder or ammunition? We investigated the possibility by recovering and measuring the composition of organic additives in smokeless powder and its post-firing residues. By determining gunpowder additives such as nitroglycerin, dinitrotoluene, ethyl- and methylcentralite, and diphenylamine, we hope to identify the type of gunpowder in the residues and perhaps to provide evidence of a match to a sample of unfired powder. The gunpowder additives were extracted using an automated technique, pressurized fluid extraction (PFE). The conditions for the quantitative extraction of the additives using neat and solvent-modified supercritical carbon dioxide were investigated. All of the major gunpowder additives can be determined with baseline resolution using capillary electrophoresis (CE) with a micellar agent and UV absorbance detection. A study of candidate internal standards for use in the CE method is also presented. The PFE/CE technique is used to evaluate a new residue sampling protocol--asking shooters to blow their noses. In addition, an initial investigation of the compositional differences among unfired and post-fired .22 handgun residues is presented.
Efficient predictive algorithms for image compression
Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla
2017-01-01
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...
Algebraic Algorithm Design and Local Search
National Research Council Canada - National Science Library
Graham, Robert
1996-01-01
.... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...
Golden Sine Algorithm: A Novel Math-Inspired Algorithm
Directory of Open Access Journals (Sweden)
TANYILDIZI, E.
2017-05-01
Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA is a new population-based search algorithm. This math-based algorithm is inspired by the sine trigonometric function. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA gives better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and its faster convergence further increases the importance of this new method.
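A schematic reconstruction of a Gold-SA-style search for a one-dimensional minimisation problem; the sine-modulated update and the golden-section coefficients follow the description above, but the exact published operator and parameterisation may differ.

```python
import math
import random

TAU = (math.sqrt(5) - 1) / 2                 # golden ratio conjugate, ~0.618

def gold_sa(f, lo, hi, agents=15, iters=80, rng=None):
    """Minimise f on [lo, hi] with a sine-modulated population search."""
    rng = rng or random.Random(2)
    xs = [rng.uniform(lo, hi) for _ in range(agents)]
    best = min(xs, key=f)
    a, b = -math.pi, math.pi
    x1, x2 = a + (1 - TAU) * (b - a), a + TAU * (b - a)  # golden-section points
    for _ in range(iters):
        for i, x in enumerate(xs):
            r1 = rng.uniform(0, 2 * math.pi)
            r2 = rng.uniform(0, math.pi)
            step = x * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(x1 * best - x2 * x)
            xs[i] = min(hi, max(lo, step))
        cand = min(xs, key=f)
        if f(cand) < f(best):
            best = cand                       # the best agent only improves
    return best

x = gold_sa(lambda x: x * x + 4, lo=-10.0, hi=10.0)
print(round(x, 3))                            # should land near the minimiser 0
```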
Assessing the Availability of Wood Residues and Residue Markets in Virginia
Alderman, Delton R. Jr.
1998-01-01
A statewide mail survey of primary and secondary wood product manufacturers was undertaken to quantify the production and consumption of wood residues in Virginia. Two hundred and sixty-six wood product manufacturers responded to the study, providing information on the production, consumption, markets, income or disposal costs, and disposal methods of wood residues. Hardwood and pine sawmills produce approximately 66 percent of Virginia's wood residues. Virginia's wood product man...
International Nuclear Information System (INIS)
Langmead, Christopher James; Donald, Bruce Randall
2004-01-01
We report an automated procedure for high-throughput NMR resonance assignment for a protein of known structure, or of a homologous structure. Our algorithm performs Nuclear Vector Replacement (NVR) by Expectation/Maximization (EM) to compute assignments. NVR correlates experimentally-measured NH residual dipolar couplings (RDCs) and chemical shifts to a given a priori whole-protein 3D structural model. The algorithm requires only uniform 15N-labelling of the protein, and processes unassigned HN-15N HSQC spectra, HN-15N RDCs, and sparse HN-HN NOEs (dNNs). NVR runs in minutes and efficiently assigns the (HN, 15N) backbone resonances as well as the sparse dNNs from the 3D 15N-NOESY spectrum, in O(n^3) time. The algorithm is demonstrated on NMR data from a 76-residue protein, human ubiquitin, matched to four structures, including one mutant (homolog), determined either by X-ray crystallography or by different NMR experiments (without RDCs). NVR achieves an average assignment accuracy of over 99%. We further demonstrate the feasibility of our algorithm for different and larger proteins, using different combinations of real and simulated NMR data for hen lysozyme (129 residues) and streptococcal protein G (56 residues), matched to a variety of 3D structural models. Abbreviations: NMR, nuclear magnetic resonance; NVR, nuclear vector replacement; RDC, residual dipolar coupling; 3D, three-dimensional; HSQC, heteronuclear single-quantum coherence; HN, amide proton; NOE, nuclear Overhauser effect; NOESY, nuclear Overhauser effect spectroscopy; dNN, nuclear Overhauser effect between two amide protons; MR, molecular replacement; SAR, structure activity relation; DOF, degrees of freedom; nt., nucleotides; SPG, streptococcal protein G; SO(3), special orthogonal (rotation) group in 3D; EM, Expectation/Maximization; SVD, singular value decomposition
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
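The exclusivity and independence conditions listed above yield different combination rules for assertion probabilities. A minimal sketch of four such rules (illustrative only; the function names are ours, not from the paper):

```python
def or_independent(p_a, p_b):
    # P(A or B) when A and B are statistically independent
    return p_a + p_b - p_a * p_b

def or_exclusive(p_a, p_b):
    # P(A or B) when A and B are mutually exclusive
    return p_a + p_b

def and_independent(p_a, p_b):
    # P(A and B) under statistical independence
    return p_a * p_b

def and_max_overlap(p_a, p_b):
    # conjunction under maximum overlap of the assertions
    # (the min rule of fuzzy logic)
    return min(p_a, p_b)
```

Worst-case ("pessimistic") and best-case ("optimistic") reasoning correspond to taking the extreme of these rules when the dependency between assertions is unknown.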
Validation of the interface-GMRES(R) solution method for fluid-structure interactions
Michler, C.; Van Brummelen, E.H.; In 't Groen, R.; De Borst, R.
2006-01-01
The numerical solution of fluid-structure interactions with the customary subiteration method incurs numerous deficiencies. We validate a recently proposed solution method based on the conjugation of subiteration with a Newton-Krylov method, and demonstrate its superiority and beneficial
A survey of residual analysis and a new test of residual trend.
McDowell, J J; Calvin, Olivia L; Klapes, Bryan
2016-05-01
A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test. © 2016 Society for the Experimental Analysis of Behavior.
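The procedure described above can be sketched directly. This is an illustrative reconstruction with numpy, using the R^2 of the cubic fit as the effect size; the paper's exact effect-size measure may differ:

```python
import numpy as np

def cubic_effect_size(residuals):
    """R^2 of a cubic polynomial fitted to a sequence of residuals."""
    r = np.asarray(residuals, dtype=float)
    x = np.arange(len(r))
    fitted = np.polyval(np.polyfit(x, r, 3), x)
    ss_res = np.sum((r - fitted) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def random_sampling_distribution(n, n_sim=2000, seed=0):
    """Effect sizes of cubic fits to random standardized residuals of size n."""
    rng = np.random.default_rng(seed)
    return np.array([cubic_effect_size(rng.standard_normal(n))
                     for _ in range(n_sim)])
```

Comparing the median experimental effect size against the median of this random sampling distribution (e.g. with a sign test) completes the trend test the abstract describes.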
Ye, Kai; Feenstra, K Anton; Heringa, Jaap; Ijzerman, Adriaan P; Marchiori, Elena
2008-01-01
Identification of residues that account for protein function specificity is crucial, not only for understanding the nature of functional specificity, but also for protein engineering experiments aimed at switching the specificity of an enzyme, regulator or transporter. Available algorithms generally use multiple sequence alignments to identify residue positions conserved within subfamilies but divergent between them. However, many biological examples show a much subtler picture than simple intra-group conservation versus inter-group divergence. We present multi-RELIEF, a novel approach for identifying specificity residues that is based on RELIEF, a state-of-the-art machine-learning technique for feature weighting. It estimates the expected 'local' functional specificity of residues from an alignment divided into multiple classes. Optionally, 3D structure information is exploited by increasing the weight of residues that have high-weight neighbors. Using ROC curves over a large body of experimental reference data, we show that (a) multi-RELIEF identifies specificity residues for the seven test sets used, (b) incorporating structural information improves prediction for specificity of interaction with small molecules and (c) comparison of multi-RELIEF with four other state-of-the-art algorithms indicates its robustness and best overall performance. A web-server implementation of multi-RELIEF is available at www.ibi.vu.nl/programs/multirelief. Matlab source code of the algorithm and data sets are available on request for academic use.
A review on quantum search algorithms
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. It is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
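Grover's search for a single marked element can be simulated classically by tracking the state vector. A minimal numpy sketch, with the oracle as a sign flip and the diffusion operator as inversion about the mean:

```python
import numpy as np

def grover(n_qubits, target, iterations=None):
    """Simulate Grover search for one marked item via state-vector updates."""
    N = 2 ** n_qubits
    if iterations is None:
        # optimal iteration count, about (pi/4) * sqrt(N)
        iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    for _ in range(iterations):
        state[target] *= -1                  # oracle: flip the marked amplitude
        state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return np.abs(state) ** 2                # measurement probabilities

probs = grover(4, target=5)
```

For N = 16 the optimal count is 3 iterations, after which the marked item's measurement probability exceeds 0.9, illustrating the quadratic speedup over the roughly N/2 queries a classical search expects.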
Natural selection and algorithmic design of mRNA.
Cohen, Barry; Skiena, Steven
2003-01-01
Messenger RNA (mRNA) sequences serve as templates for proteins according to the triplet code, in which each of the 4^3 = 64 different codons (sequences of three consecutive nucleotide bases) in RNA either terminates translation or maps to one of the 20 different amino acids (or residues) which build up proteins. Because there are more codons than residues, there is inherent redundancy in the coding. Certain residues (e.g., tryptophan) have only a single corresponding codon, while other residues (e.g., arginine) have as many as six corresponding codons. This freedom implies that the number of possible RNA sequences coding for a given protein grows exponentially in the length of the protein. Thus nature has wide latitude to select among mRNA sequences which are informationally equivalent, but structurally and energetically divergent. In this paper, we explore how nature takes advantage of this freedom and how to algorithmically design structures more energetically favorable than have been built through natural selection. In particular: (1) Natural Selection--we perform the first large-scale computational experiment comparing the stability of mRNA sequences from a variety of organisms to random synonymous sequences which respect the codon preferences of the organism. This experiment was conducted on over 27,000 sequences from 34 microbial species with 36 genomic structures. We provide evidence that in all genomic structures highly stable sequences are disproportionately abundant, and in 19 of 36 cases highly unstable sequences are disproportionately abundant. This suggests that the stability of mRNA sequences is subject to natural selection. (2) Artificial Selection--motivated by these biological results, we examine the algorithmic problem of designing the most stable and unstable mRNA sequences which code for a target protein. We give a polynomial-time dynamic programming solution to the most stable sequence problem (MSSP), which is asymptotically no more complex
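The codon-redundancy argument above is easy to make concrete: the number of mRNA sequences coding for a given protein is the product of the codon counts of its residues. A small sketch using the standard genetic code (stop codons ignored):

```python
# codons per amino acid (one-letter code) in the standard genetic code
CODON_COUNTS = {
    'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2, 'G': 4,
    'H': 2, 'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2, 'P': 4, 'S': 6,
    'T': 4, 'W': 1, 'Y': 2, 'V': 4,
}

def num_synonymous_mrnas(protein):
    """Count the mRNA sequences coding for a protein: product of codon choices."""
    count = 1
    for residue in protein:
        count *= CODON_COUNTS[residue]
    return count
```

A Met-Trp dipeptide has exactly one coding sequence, while a Leu-Arg dipeptide already has 36, showing how the search space grows exponentially with protein length.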
Sustainable System for Residual Hazards Management
International Nuclear Information System (INIS)
Kevin M. Kostelnik; James H. Clarke; Jerry L. Harbour
2004-01-01
Hazardous, radioactive and other toxic substances have routinely been generated and subsequently disposed of in the shallow subsurface throughout the world. Many of today's waste management techniques do not eliminate the problem, but rather only concentrate or contain the hazardous contaminants. Residual hazards result from the presence of hazardous and/or contaminated material that remains on-site following active operations or the completion of remedial actions. Residual hazards pose continued risk to humans and the environment and represent a significant and chronic problem that requires continuous long-term management (i.e. >1000 years). To protect human health and safeguard the natural environment, a sustainable system is required for the proper management of residual hazards. A sustainable system for the management of residual hazards will require the integration of engineered, institutional and land-use controls to isolate residual contaminants and thus minimize the associated hazards. Engineered controls are physical modifications to the natural setting and ecosystem, including the site, facility, and/or the residual materials themselves, in order to reduce or eliminate the potential for exposure to contaminants of concern (COCs). Institutional controls are processes, instruments, and mechanisms designed to influence human behavior and activity. System failure can involve hazardous material escaping from the confinement because of system degradation (i.e., chronic or acute degradation) or by external intrusion of the biosphere into the contaminated material because of the loss of institutional control. An ongoing analysis of contemporary and historic sites suggests that the loss of institutional controls is a critical pathway because decisions made during the operations/remedial action phase, as well as decisions made throughout the residual hazards management period, are key to the long-term success of the prescribed system. In fact
An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks
Directory of Open Access Journals (Sweden)
Chakradhar Penumalli
2015-01-01
Full Text Available A new energy efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem [BSP] and at the same time considering the node’s remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks [MANETs] or Wireless Sensor Networks [WSN]. The CDS of a graph representing a network has a significant impact on an efficient design of routing protocol in wireless networks. Here the CDS is a distributed algorithm with activity scheduling based on unit disk graph [UDG]. The node’s mobility and residual energy (RE) are considered as parameters in the construction of stable optimal energy efficient CDS. The performance is evaluated at various node densities, various transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented which yield better results.
Adaptive Weighted Morphology Detection Algorithm of Plane Object in Docking Guidance System
Directory of Open Access Journals (Sweden)
Guo yan-ying
2010-09-01
Full Text Available In this paper, we present an image segmentation algorithm based on adaptive weighted mathematical morphology edge detectors. The performance of the proposed algorithm has been demonstrated on the Lena image. The input of the proposed algorithm is a grey-level image. The image is first processed by the mathematical morphological closing and dilation residue edge detectors to enhance the edge features and sketch out the contour of the image, respectively. Then the adaptive weighted SE operation is applied to the edge-extracted image to fuse edge gaps and fill in holes. Experimental results show that the algorithm not only extracts fine edge detail but also preserves image integrity better than classical edge detection algorithms.
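The dilation residue edge detector mentioned above has a compact expression: dilate the image and subtract the original. A minimal numpy sketch with a flat 3x3 structuring element (the paper's adaptive weighting of the SE is not reproduced here):

```python
import numpy as np

def dilate(img, size=3):
    """Grey-level dilation with a flat square structuring element."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(size):          # max over all shifts covered by the SE
        for dx in range(size):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def dilation_residue_edges(img):
    """Edge map as the residue between the dilated image and the original."""
    return dilate(img) - img

# single bright pixel: the residue lights up its 8 neighbours, not the pixel itself
img = np.zeros((5, 5))
img[2, 2] = 1.0
edges = dilation_residue_edges(img)
```

The residue is nonzero exactly where the local maximum grows under dilation, i.e. along intensity transitions.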
An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks.
Penumalli, Chakradhar; Palanichamy, Yogesh
2015-01-01
A new energy efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem [BSP] and at the same time considering the node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks [MANETs] or Wireless Sensor Networks [WSN]. The CDS of a graph representing a network has a significant impact on an efficient design of routing protocol in wireless networks. Here the CDS is a distributed algorithm with activity scheduling based on unit disk graph [UDG]. The node's mobility and residual energy (RE) are considered as parameters in the construction of stable optimal energy efficient CDS. The performance is evaluated at various node densities, various transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented which yield better results.
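For intuition about what a minimum CDS is (setting aside the paper's energy, mobility, and distributed-operation aspects), a brute-force sketch over a small graph; this is illustrative only and exponential in the graph size:

```python
from itertools import combinations

def is_cds(adj, subset):
    """Check that subset is a connected dominating set of a graph
    given as an adjacency dict of sets."""
    subset = set(subset)
    # dominating: every node is in the subset or adjacent to it
    for v in adj:
        if v not in subset and not (adj[v] & subset):
            return False
    # connected: DFS restricted to the subset must reach all of it
    start = next(iter(subset))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u] & subset:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == subset

def minimum_cds(adj):
    """Smallest CDS by exhaustive search over subsets of increasing size."""
    nodes = list(adj)
    for k in range(1, len(nodes) + 1):
        for cand in combinations(nodes, k):
            if is_cds(adj, cand):
                return set(cand)

# path graph 1-2-3-4-5: the interior nodes form the minimum CDS
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
cds = minimum_cds(path)
```

In a MANET the nodes of the CDS act as the virtual backbone: every other node is one hop from it, and routing can be confined to the backbone.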
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
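The MWU rule mentioned above is simple to state: each expert's weight is multiplied by (1 - eta * loss) every round, and the normalized weights give a probability distribution. A minimal sketch (in the genetics reading, experts correspond to alleles and losses to fitness deficits):

```python
def mwu(losses_per_round, eta=0.5):
    """Multiplicative weights update over experts; losses in [0, 1]."""
    n = len(losses_per_round[0])
    weights = [1.0] * n
    for losses in losses_per_round:
        # penalize each expert multiplicatively by its loss this round
        weights = [w * (1 - eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]   # final distribution over experts

# two experts: expert 0 always loses 0, expert 1 always loses 1
dist = mwu([[0.0, 1.0]] * 5)
```

After a few rounds nearly all probability mass concentrates on the expert with the smallest cumulative loss, which is the sense in which MWU trades off cumulative fitness against the entropy of the distribution.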
SDR Input Power Estimation Algorithms
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural-network algorithm, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
Computational geometry algorithms and applications
de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried
1997-01-01
Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained by the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains, such as computer graphics, geographic information systems (GIS), and robotics, in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...
Pesticide residues in birds and mammals
Stickel, L.F.; Edwards, C.A.
1973-01-01
SUMMARY: Residues of organochlorine pesticides and their breakdown products are present in the tissues of essentially all wild birds throughout the world. These chemicals accumulate in fat from a relatively small environmental exposure. DDE and dieldrin are most prevalent. Others, such as heptachlor epoxide, chlordane, endrin, and benzene hexachloride also occur, the quantities and kinds generally reflecting local or regional use. Accumulation may be sufficient to kill animals following applications for pest control. This has occurred in several large-scale programmes in the United States. Mortality has also resulted from unintentional leakage of chemical from commercial establishments. Residues may persist in the environment for many years, exposing successive generations of animals. In general, birds that eat other birds, or fish, have higher residues than those that eat seeds and vegetation. The kinetic processes of absorption, metabolism, storage, and output differ according to both kind of chemical and species of animal. When exposure is low and continuous, a balance between intake and excretion may be achieved. Residues reach a balance at an approximate animal body equilibrium or plateau; the storage is generally proportional to dose. Experiments with chickens show that dieldrin and heptachlor epoxide have the greatest propensity for storage, endrin next, then DDT, then lindane. The storage of DDT was complicated by its metabolism to DDE and DDD, but other studies show that DDE has a much greater propensity for storage than either DDD or DDT. Methoxychlor has little cumulative capacity in birds. Residues in eggs reflect and parallel those in the parent bird during accumulation, equilibrium, and decline when dosage is discontinued. Residues with the greatest propensity for storage are also lost most slowly. Rate of loss of residues can be modified by dietary components and is speeded by weight loss of the animal. Under sublethal conditions of continuous
An efficient preconditioning technique using Krylov subspace methods for 3D characteristics solvers
International Nuclear Information System (INIS)
Dahmani, M.; Le Tellier, R.; Roy, R.; Hebert, A.
2005-01-01
The Generalized Minimal RESidual (GMRES) method, using a Krylov subspace projection, is adapted and implemented to accelerate a 3D iterative transport solver based on the characteristics method. Another acceleration technique called the self-collision rebalancing technique (SCR) can also be used to accelerate the solution or as a left preconditioner for GMRES. The GMRES method is usually used to solve a linear algebraic system (Ax=b). It uses K(r^(0), A) as the projection subspace and AK(r^(0), A) for the orthogonalization of the residual. This paper compares the performance of these two combined methods on various problems. To implement the GMRES iterative method, the characteristics equations are derived in linear algebra formalism by using the equivalence between the method of characteristics and the method of collision probability, to end up with a linear algebraic system involving fluxes and currents. Numerical results show good performance of the GMRES technique, especially for cases presenting large material heterogeneity with a scattering ratio close to 1. Similarly, the SCR preconditioning slightly increases the GMRES efficiency
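A minimal dense restarted GMRES(k), built on the Arnoldi process with modified Gram-Schmidt as in the method described above; this is an illustrative sketch only (no preconditioner, so it omits the paper's SCR technique):

```python
import numpy as np

def gmres_restarted(A, b, k=10, tol=1e-8, max_restarts=50):
    """Restarted GMRES(k): Arnoldi + modified Gram-Schmidt, dense matrices."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, k + 1))        # orthonormal Krylov basis
        H = np.zeros((k + 1, k))        # upper Hessenberg matrix
        Q[:, 0] = r / beta
        m = k
        for j in range(k):
            w = A @ Q[:, j]
            for i in range(j + 1):      # modified Gram-Schmidt orthogonalization
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:     # happy breakdown: solution is in the span
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # minimize ||beta*e1 - H y|| over the Krylov subspace, then update x
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        x = x + Q[:, :m] @ y
    return x

rng = np.random.default_rng(0)
n = 30
A = np.eye(n) * 5.0 + 0.1 * rng.standard_normal((n, n))  # nonsymmetric test matrix
b = rng.standard_normal(n)
x = gmres_restarted(A, b, k=10)
```

The restart value k bounds the stored basis to k + 1 vectors, which is exactly the storage/convergence trade-off that the adaptive GMRES(k) work in this collection tunes.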
Universal algorithm of time sharing
International Nuclear Information System (INIS)
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
A timesharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of the background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during the timesharing process. The algorithm includes an optimal swap-out procedure for replacing jobs in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the algorithm's implementation on the BESM-6 computer at JINR is discussed
Scalable algorithms for contact problems
Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít
2016-01-01
This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...
Algorithms and Public Service Media
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Hutchinson, Jonathon
2018-01-01
When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a practical level, the introduction of the systems shifts power within the organisations and changes the regulatory conditions. In this chapter we analyse two cases, the EBU members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems pose to PSM organisations.
Quantum walks and search algorithms
Portugal, Renato
2013-01-01
This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; quantum walks on generic graphs, describing methods to calculate the limiting d...
Algorithms for Decision Tree Construction
Chikalov, Igor
2011-01-01
The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Next Generation Suspension Dynamics Algorithms
Energy Technology Data Exchange (ETDEWEB)
Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-12-01
This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.
Fault Tolerant External Memory Algorithms
DEFF Research Database (Denmark)
Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas
2009-01-01
Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...
New applications of partial residual methodology
International Nuclear Information System (INIS)
Uslu, V.R.
1999-12-01
The formulation of a problem of interest in the framework of a statistical analysis starts with collecting the data, choosing a model, and making certain assumptions, as described in the basic paradigm by Box (1980). This stage is called model building. Then comes the estimation stage, in which the formulation of the problem is treated as if it were true in order to obtain estimates and to make tests and inferences. In the final stage, called diagnostic checking, disagreements between the data and the fitted model are sought using diagnostic measures and diagnostic plots. It is well known that statistical methods perform best when all assumptions related to the methods are satisfied, but this ideal case is rarely attained in practice. Diagnostics are therefore becoming important, and so are diagnostic plots, because they provide an immediate assessment. Partial residual plots, the main interest of the present study, play a major role among the diagnostic plots in multiple regression analysis. In the statistical literature it is accepted that partial residual plots are more useful than ordinary residual plots in detecting outliers and nonconstant variance, and especially in discovering curvature. In this study we consider the partial residual methodology in statistical methods beyond multiple regression. We show that, for the same purposes as in multiple regression, partial residual plots can be used in autoregressive time series models, transfer function models, linear mixed models and ridge regression. (author)
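In the multiple regression setting, the partial residual for a predictor is the ordinary residual plus that predictor's fitted contribution; plotted against the predictor, it exposes curvature that an ordinary residual plot can hide. A minimal sketch on simulated data (the variable names and the quadratic example are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# True relationship is linear in x1 but quadratic in x2.
y = 1.0 + 2.0 * x1 - 0.5 * x2 ** 2 + rng.normal(scale=0.1, size=n)

# Ordinary least squares fit of the (misspecified) linear model y ~ 1 + x1 + x2.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Partial residual for x2: ordinary residual plus the fitted contribution of x2.
# Plotted against x2, it reveals the curvature the linear fit misses.
partial_resid = resid + beta[2] * x2
```

Here `partial_resid` traces the unmodelled quadratic in x2 almost perfectly, while the raw residuals alone are harder to interpret.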
Crop Residue Biomass Effects on Agricultural Runoff
Directory of Open Access Journals (Sweden)
Damodhara R. Mailapalli
2013-01-01
Full Text Available High residue loads associated with conservation tillage and cover cropping may impede water flow in furrow irrigation and thus decrease the efficiency of water delivery and runoff water quality. In this study, the effects of biomass residue on infiltration, runoff, and export of total suspended solids (TSS), dissolved organic carbon (DOC), sediment-associated carbon (TSS-C), and other undesirable constituents such as phosphate (soluble P), nitrate, and ammonium in runoff water from a furrow-irrigated field were studied. Furrow irrigation experiments were conducted in 91 and 274 m long fields, in which the amount of residue in the furrows varied among four treatments. The biomass residue in the furrows increased infiltration, and this affected the total loads of DOC, TSS, and TSS-C. Net storage of DOC took place in the long but not in the short field, because most of the applied water ran off in the short field. Increasing field length decreased TSS and TSS-C losses. The total loads of nitrate, ammonium, and soluble P decreased with increasing distance from the inflow due to infiltration. The concentration and load of P increased with increasing residue biomass in furrows, but no particular trend was observed for nitrate and ammonium. Overall, the constituents in the runoff decreased with increasing surface cover and field length.
Characterization of bound residues in plants
International Nuclear Information System (INIS)
Stratton, G.D. Jr.; Wheeler, W.B.
1986-01-01
The characterization of unextractable (or 'bound') pesticide residues in plants can be difficult owing to the insoluble nature of the pesticide-plant complex. An unextractable residue can be defined as material derived from the applied pesticide which remains in the plant matrix after exhaustive organic solvent extraction. Experiments with a variety of pesticide classes in plants indicate that the level of unextractable residue varies with the plant species, the pesticide and the exposure time of the plant to the pesticide. Methods used in attempts to release 'bound' residues from solvent-extracted plant tissues include acid hydrolyses, enzymatic treatments and techniques of high-temperature distillation. These methods solubilize or release varying amounts of unextractable material; the amounts depend on the pesticide and on the extent to which the plant fibre is degraded. In experiments using radiolabelled dieldrin (1,2,3,4,10,10-hexachloro-6,7-epoxy-1,4,4a,5,6,7,8,8a-octahydro-exo-1,4-endo-5,6-dimethanonaphthalene), carbofuran (2,3-dihydro-2,2-dimethylbenzofuran-7-yl methylcarbamate) and permethrin (3-phenoxybenzyl (±)-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropanecarboxylate) in radishes, portions of the unextractable material solubilized by the above methods were identified as parent compound and/or closely related metabolites. The bioavailability and toxicological significance of unextractable pesticide residues need to be evaluated. (author)
Reclamation of plutonium from pyrochemical processing residues
International Nuclear Information System (INIS)
Gray, L.W.; Gray, J.H.; Holcomb, H.P.; Chostner, D.F.
1987-04-01
Savannah River Laboratory (SRL), Savannah River Plant (SRP), and Rocky Flats Plant (RFP) have jointly developed a process to recover plutonium from molten salt extraction residues. These NaCl, KCl, and MgCl₂ residues, which are generated in the pyrochemical extraction of ²⁴¹Am from aged plutonium metal, contain up to 25 wt % dissolved plutonium and up to 2 wt % americium. The overall objective was to develop a process to convert these residues to a pure plutonium metal product and discardable waste. To meet this objective a combination of pyrochemical and aqueous unit operations was used. The first step was to scrub the salt residue with a molten metal (aluminum and magnesium) to form a heterogeneous 'scrub alloy' containing nominally 25 wt % plutonium. This unit operation, performed at RFP, effectively separated the actinides from the bulk of the chloride salts. After packaging in aluminum cans, the 'scrub alloy' was then dissolved in a nitric acid - hydrofluoric acid - mercuric nitrate solution at SRP. Residual chloride was separated from the dissolver solution by precipitation with Hg₂(NO₃)₂ followed by centrifuging. Plutonium was then separated from the aluminum, americium and magnesium using the Purex solvent extraction system. The ²⁴¹Am was diverted to the waste tank farm, but could be recovered if desired.
Methods of measuring residual stresses in components
International Nuclear Information System (INIS)
Rossini, N.S.; Dassisti, M.; Benyounis, K.Y.; Olabi, A.G.
2012-01-01
Highlights: ► Defining the different methods of measuring residual stresses in manufactured components. ► Comprehensive study on the hole drilling, neutron diffraction and other techniques. ► Evaluating the advantages and disadvantages of each method. ► Advising the reader on the appropriate method to use. -- Abstract: Residual stresses occur in many manufactured structures and components. A large number of investigations have been carried out to study this phenomenon and its effect on the mechanical characteristics of these components. Over the years, different methods have been developed to measure residual stress in different types of components in order to obtain reliable assessments. The various specific methods have evolved over several decades and their practical applications have greatly benefited from the development of complementary technologies, notably in material cutting, full-field deformation measurement techniques, numerical methods and computing power. These complementary technologies have stimulated advances not only in measurement accuracy and reliability, but also in range of application; much greater detail in residual stress measurement is now available. This paper classifies the different residual stress measurement methods and provides an overview of some of the recent advances in this area, to help researchers select among destructive, semi-destructive and non-destructive techniques depending on their application and on the availability of those techniques. For each method, the scope, physical limitations, advantages and disadvantages are summarized. Finally, the paper indicates some promising directions for future developments.
Rare Earth Element Phases in Bauxite Residue
Directory of Open Access Journals (Sweden)
Johannes Vind
2018-02-01
Full Text Available The purpose of the present work was to provide mineralogical insight into the rare earth element (REE) phases in bauxite residue to improve REE recovery technologies. Experimental work was performed by electron probe microanalysis with energy dispersive as well as wavelength dispersive spectroscopy, and by transmission electron microscopy. REEs are found as discrete mineral particles in bauxite residue. Their sizes range from <1 μm to about 40 μm. In bauxite residue, the most abundant REE-bearing phases are light REE (LREE) ferrotitanates that form a solid solution between the phases with major compositions (REE,Ca,Na)(Ti,Fe)O₃ and (Ca,Na)(Ti,Fe)O₃. These are secondary phases formed during the Bayer process by an in-situ transformation of the precursor bauxite LREE phases. Compared to natural systems, the indicated solid solution resembles the loparite-perovskite series. LREE particles often have a calcium ferrotitanate shell surrounding them that probably hinders their solubility. Minor amounts of LREE carbonate and phosphate minerals as well as manganese-associated LREE phases are also present in bauxite residue. Heavy REEs occur in the same form as in bauxites, namely as yttrium phosphates. These results show that the Bayer process has an impact on the initial REE mineralogy contained in bauxite. Bauxite residue as well as selected bauxites are potentially good sources of REEs.
Mobility of organic carbon from incineration residues
International Nuclear Information System (INIS)
Ecke, Holger; Svensson, Malin
2008-01-01
Dissolved organic carbon (DOC) may affect the transport of pollutants from incineration residues when landfilled or used in geotechnical construction. The leaching of DOC from municipal solid waste incineration (MSWI) bottom ash and from air pollution control (APC) residue from the incineration of waste wood was investigated. Factors affecting the mobility of DOC were studied in a reduced 2⁶⁻¹ experimental design. Controlled factors were treatment with ultrasonic radiation, full carbonation (addition of CO₂ until the pH was stable for 2.5 h), liquid-to-solid (L/S) ratio, pH, leaching temperature and time. Full carbonation, pH and the L/S ratio were the main factors controlling the mobility of DOC in the bottom ash. Approximately 60 weight-% of the total organic carbon (TOC) in the bottom ash was available for leaching in aqueous solutions. The L/S ratio and pH mainly controlled the mobilization of DOC from the APC residue. About 93 weight-% of TOC in the APC residue was, however, not mobilized at all, which might be due to a high content of elemental carbon. Using the European standard EN 13137 for determination of TOC in MSWI residues is inappropriate, as the results might be biased by elemental carbon. It is recommended to develop a TOC method that distinguishes between organic and elemental carbon.
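A 2⁶⁻¹ fractional factorial design runs half of the 64 possible combinations of six two-level factors by confounding one factor with a high-order interaction. A sketch of how such a design can be generated (the generator F = ABCDE is a standard choice assumed here; the abstract does not state which generator the study used):

```python
from itertools import product

# Six controlled factors from the study, each at two levels coded -1/+1.
factors = ["ultrasound", "carbonation", "L/S", "pH", "temperature", "time"]

# Full 2^5 design in the first five factors; the sixth factor is set to the
# product of the other five (assumed defining relation F = ABCDE), giving
# 32 runs instead of 64.
runs = []
for levels in product((-1, 1), repeat=5):
    sixth = levels[0] * levels[1] * levels[2] * levels[3] * levels[4]
    runs.append(dict(zip(factors, levels + (sixth,))))

print(len(runs))  # 32
```

Every run then satisfies the defining relation: the product of all six coded levels equals +1, which is what makes main effects estimable with half the experimental effort.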
Optimal Path Choice in Railway Passenger Travel Network Based on Residual Train Capacity
Directory of Open Access Journals (Sweden)
Fei Dou
2014-01-01
Full Text Available Passenger's optimal path choice is one of the prominent research topics in the field of railway passenger transport organization. As more and more train types become available, the number of path choices from departure to destination keeps growing, and travelers can easily be overwhelmed when trying to choose a travel plan that satisfies their travel time and cost constraints before departure. In this study, a railway passenger travel network is constructed based on the train timetable. Both the generalized cost function we developed and the residual train capacity form the foundation of the path searching procedure. The topology of the railway passenger travel network is analyzed based on residual train capacity. Considering the total travel time, the total travel cost, and the total number of passengers, we propose an optimal path searching algorithm based on residual train capacity in the railway passenger travel network. Finally, the rationale of the railway passenger travel network and the optimal path generation algorithm are verified by a case study.
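The core idea, removing train segments without residual seat capacity before a shortest-path search on a generalized cost, can be sketched as follows (the simple alpha·time + beta·fare cost and the segment data are illustrative assumptions; the paper's generalized cost function and constraints are richer):

```python
import heapq

def best_path(segments, source, target, alpha=1.0, beta=1.0):
    """Dijkstra search over a train-segment network whose topology is filtered
    by residual capacity: sold-out segments are dropped before searching."""
    graph = {}
    for u, v, hours, fare, residual_seats in segments:
        if residual_seats > 0:  # topology based on residual train capacity
            graph.setdefault(u, []).append((v, alpha * hours + beta * fare))
    dist, prev = {source: 0.0}, {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:  # reconstruct the cheapest feasible path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    return None

# Hypothetical segments: (from, to, hours, fare, residual seats).
segments = [("A", "B", 2, 30, 5),
            ("B", "C", 2, 30, 0),   # sold out: removed from the topology
            ("A", "C", 5, 40, 8)]
print(best_path(segments, "A", "C"))  # direct A-C, since B-C has no seats left
```

Although A-B-C would be cheaper on paper, the sold-out B-C segment never enters the search graph, so the algorithm returns the feasible direct route.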
Optimal path choice in railway passenger travel network based on residual train capacity.
Dou, Fei; Yan, Kai; Huang, Yakun; Wang, Li; Jia, Limin
2014-01-01
Passenger's optimal path choice is one of the prominent research topics in the field of railway passenger transport organization. As more and more train types become available, the number of path choices from departure to destination keeps growing, and travelers can easily be overwhelmed when trying to choose a travel plan that satisfies their travel time and cost constraints before departure. In this study, a railway passenger travel network is constructed based on the train timetable. Both the generalized cost function we developed and the residual train capacity form the foundation of the path searching procedure. The topology of the railway passenger travel network is analyzed based on residual train capacity. Considering the total travel time, the total travel cost, and the total number of passengers, we propose an optimal path searching algorithm based on residual train capacity in the railway passenger travel network. Finally, the rationale of the railway passenger travel network and the optimal path generation algorithm are verified by a case study.
Empirical tests of the Gradual Learning Algorithm
Boersma, P.; Hayes, B.
2001-01-01
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky.
A new cluster algorithm for graphs
S. van Dongen
1998-01-01
A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a…
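The MCL process alternates expansion (matrix powers, which spread flow along the graph) with inflation (entrywise powers followed by column renormalisation, which strengthen strong flow and weaken weak flow). A minimal sketch under assumed parameters (expansion 2, inflation 2; the paper's own formulation and convergence analysis are richer):

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Markov Cluster sketch: alternate expansion and inflation until the
    flow matrix settles, then read clusters off its nonzero support."""
    n = adjacency.shape[0]
    M = adjacency.astype(float) + np.eye(n)   # self-loops stabilise the flow
    M /= M.sum(axis=0)                        # column-stochastic matrix
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads out
        M = M ** inflation                        # inflation: strong flow wins
        M /= M.sum(axis=0)
    # Nodes connected by nonzero limiting flow belong to the same cluster.
    support = (M > 1e-6) | (M.T > 1e-6) | np.eye(n, dtype=bool)
    seen, clusters = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(v for v in range(n) if support[u, v] and v not in seen)
        clusters.append(sorted(comp))
    return clusters

# Two triangles joined by one bridge edge (2-3): flow stays within triangles.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl(A))  # [[0, 1, 2], [3, 4, 5]]
```

The inflation step is what breaks the bridge: flow across it shrinks quadratically each round, so the limit matrix decomposes into the two natural clusters.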
Seamless Merging of Hypertext and Algorithm Animation
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
Deterministic algorithms for multi-criteria TSP
Manthey, Bodo; Ogihara, Mitsunori; Tarui, Jun
2011-01-01
We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms. First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
Using Alternative Multiplication Algorithms to "Offload" Cognition
Jazby, Dan; Pearn, Cath
2015-01-01
When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…
AN ALGORITHM FOR THE DESIGN ...
African Journals Online (AJOL)
focuses on the development of an algorithm for designing an axial flow compressor for a power generation gas turbine, and attempts to bring to the public domain some parameters regarded as…
Big Data Mining: Tools & Algorithms
Directory of Open Access Journals (Sweden)
Adeel Shiraz Hashmi
2016-03-01
Full Text Available We are now in Big Data era, and there is a growing demand for tools which can process and analyze it. Big data analytics deals with extracting valuable information from that complex data which can’t be handled by traditional data mining tools. This paper surveys the available tools which can handle large volumes of data as well as evolving data streams. The data mining tools and algorithms which can handle big data have also been summarized, and one of the tools has been used for mining of large datasets using distributed algorithms.
CATEGORIES OF COMPUTER SYSTEMS ALGORITHMS
Directory of Open Access Journals (Sweden)
A. V. Poltavskiy
2015-01-01
Full Text Available Philosophy, as a frame of reference on the world around us and as the first science, is a fundamental basis, the "roots" (R. Descartes), for all branches of scientific knowledge accumulated and applied in all fields of human activity. The theory of algorithms, as one of the fundamental sections of mathematics, is also based on research in gnoseology, the cognition of a true picture of the world by the human being. From the positions of gnoseology and ontology as fundamental sections of philosophy, modern innovative projects are inconceivable without the development of programs and algorithms.
Industrial Applications of Evolutionary Algorithms
Sanchez, Ernesto; Tonda, Alberto
2012-01-01
This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers that are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the
Wavelets theory, algorithms, and applications
Montefusco, Laura
2014-01-01
Wavelets: Theory, Algorithms, and Applications is the fifth volume in the highly respected series, WAVELET ANALYSIS AND ITS APPLICATIONS. This volume shows why wavelet analysis has become a tool of choice in fields ranging from image compression, to signal detection and analysis in electrical engineering and geophysics, to analysis of turbulent or intermittent processes. The 28 papers comprising this volume are organized into seven subject areas: multiresolution analysis, wavelet transforms, tools for time-frequency analysis, wavelets and fractals, numerical methods and algorithms, and applications…
Parallel algorithms and cluster computing
Hoffmann, Karl Heinz
2007-01-01
This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.
Optimisation combinatoire : théorie et algorithmes
Korte, Bernhard; Fonlupt, Jean
2010-01-01
This book is the French translation of the fourth and most recent edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient and exact algorithms for solving problems, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students…
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
We here study some problems concerned with the computational analysis of finite partially ordered sets. We begin (in § 1) by showing that the matrix representation of a binary relation R may always be taken in triangular form if R is a partial ordering. We consider (in § 2) the chain structure in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi…
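The enumeration task of § 2 can be sketched by first computing the covering relation and then extending chains from minimal elements until they can grow no further (a minimal illustration, not the paper's own formulation):

```python
def maximal_chains(elements, less_than):
    """Enumerate all maximal chains of a finite poset given by the strict
    order predicate less_than(a, b)."""
    # Covering relation: b covers a if a < b with no c strictly in between.
    covers = {a: [b for b in elements
                  if less_than(a, b)
                  and not any(less_than(a, c) and less_than(c, b)
                              for c in elements)]
              for a in elements}
    minimal = [a for a in elements if not any(less_than(b, a) for b in elements)]

    chains = []
    def extend(chain):
        succ = covers[chain[-1]]
        if not succ:                 # maximal element reached: chain is maximal
            chains.append(chain)
        for b in succ:
            extend(chain + [b])
    for m in minimal:
        extend([m])
    return chains

# Divisibility poset on {1, 2, 3, 6}: the maximal chains are 1|2|6 and 1|3|6.
divides = lambda a, b: a != b and b % a == 0
print(maximal_chains([1, 2, 3, 6], divides))  # [[1, 2, 6], [1, 3, 6]]
```

Since a poset with n elements can have exponentially many maximal chains (the combinatorial question the paper answers), this enumeration is exponential in the worst case by necessity.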
Deceptiveness and genetic algorithm dynamics
Energy Technology Data Exchange (ETDEWEB)
Liepins, G.E. (Oak Ridge National Lab., TN (USA)); Vose, M.D. (Tennessee Univ., Knoxville, TN (USA))
1990-01-01
We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection and recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.
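A standard illustration of full deceptiveness (not necessarily the construction used in this paper) is the trap function, in which every low-order schema average points toward the complement of the global optimum:

```python
from itertools import product

def trap(bits):
    """Illustrative fully deceptive 'trap' function: fitness falls as ones are
    added, except that the all-ones string is an isolated global optimum."""
    k, u = len(bits), sum(bits)
    return k + 1 if u == k else k - 1 - u

# On 3 bits the deceptive slope leads hill-climbers toward 000 (fitness 2),
# away from the global optimum 111 (fitness 4).
values = {b: trap(b) for b in product((0, 1), repeat=3)}
best = max(values, key=values.get)
print(best, values[best])  # (1, 1, 1) 4
```

Because every one-bit improvement step (away from the optimum) increases fitness, selection and recombination are systematically misled, which is exactly the failure mode the paper's change-of-representation transformations are designed to remove.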
A Distributed Spanning Tree Algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor initially has a distinct identity and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E + 3N log N. The maximal message size is log log N + log(maxid) + 3, where maxid is the maximal processor identity.
A distributed spanning tree algorithm
DEFF Research Database (Denmark)
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge
1988-01-01
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor initially has a distinct identity and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E + 3N log N. The maximal message size is log log N + log(maxid) + 3, where maxid is the maximal processor identity.
Performance Evaluation of A* Algorithms
Martell, Victor; Sandberg, Aron
2016-01-01
Context. There has been a lot of progress made in the field of pathfinding. One of the most used algorithms is A*, which over the years has had many variations. A number of papers have been written about the variations of A* and in what way they specifically improve A*. However, few papers have been written comparing A* with several different variations of A*. Objectives. The objective of this thesis is to find out how Dijkstra's algorithm, IDA*, Theta* and HPA* compare against A* bas…
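As a baseline for such comparisons, plain A* on a 4-connected grid with the Manhattan-distance heuristic looks like this (an illustrative sketch; the thesis's benchmark maps and evaluation metrics are not reproduced here):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (wall) cells, with the
    admissible Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:                 # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if cost > g[node]:
            continue                     # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap,
                                   (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 6 moves around the wall
```

The variants in the thesis change different parts of this skeleton: IDA* replaces the open list with iterative deepening, Theta* relaxes the parent assignment to allow any-angle paths, and HPA* searches over a hierarchy of abstracted clusters.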
Improved crop residue cover estimates by coupling spectral indices for residue and moisture
Remote sensing assessment of soil residue cover (fR) and tillage intensity will improve our predictions of the impact of agricultural practices and promote sustainable management. Spectral indices for estimating fR are sensitive to soil and residue water content; therefore, the uncertainty of estima…
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances…
Residual strains in girth-welded linepipe
International Nuclear Information System (INIS)
MacEwen, S.R.; Holden, T.M.; Powell, B.M.; Lazor, R.B.
1987-07-01
High resolution neutron diffraction has been used to measure the axial residual strains in and adjacent to a multipass girth weld in a complete section of 914 mm (36 inches) diameter, 16 mm (5/8 inch) wall, linepipe. The experiments were carried out at the NRU reactor, Chalk River using the L3 triple-axis spectrometer. The through-wall distribution of axial residual strain was measured at 0, 4, 8, 20 and 50 mm from the weld centerline; the axial variation was determined 1, 5, 8, and 13 mm from the inside surface of the pipe wall. The results have been compared with strain gauge measurements on the weld surface and with through-wall residual stress distributions determined using the block-layering and removal technique
Field Test Kit for Gun Residue Detection
Energy Technology Data Exchange (ETDEWEB)
WALKER, PAMELA K.; RODACY, PHILIP J.
2002-01-01
One of the major needs of the law enforcement field is a product that quickly, accurately, and inexpensively identifies whether a person has recently fired a gun, even if the suspect has attempted to wash the traces of gunpowder off. The Field Test Kit for Gunshot Residue Identification based on Sandia National Laboratories technology works with a wide variety of handguns and other weaponry using gunpowder. There are several organic chemicals in small arms propellants such as nitrocellulose, nitroglycerine, dinitrotoluene, and nitrites left behind after the firing of a gun that result from the incomplete combustion of the gunpowder. Sandia has developed a colorimetric shooter identification kit for in situ detection of gunshot residue (GSR) from a suspect. The test kit is the first of its kind and is small, inexpensive, and easily transported by individual law enforcement personnel, requiring minimal training for effective use. It will provide immediate information identifying gunshot residue.
Residual-strength determination in polymeric materials
International Nuclear Information System (INIS)
Christensen, R.M.
1981-01-01
Kinetic theory of crack growth is used to predict the residual strength of polymeric materials acted upon by a previous load history. Specifically, the kinetic theory is used to characterize the state of growing damage that occurs under a constant-stress (load) state. The load is removed before failure under creep-rupture conditions, and the residual instantaneous strength is determined from the theory by taking account of the damage accumulated under the preceding constant-load history. The rate of change of residual strength is found to be strongest when the duration of the preceding load history is near the ultimate lifetime under that condition. Physical explanations for this effect are given, as are numerical examples. Also, the theoretical prediction is compared with experimental data.
Methyl bromide residues in fumigated cocoa beans
International Nuclear Information System (INIS)
Adomako, D.
1975-01-01
The ¹⁴C activity in unroasted [¹⁴C]methyl bromide fumigated cocoa beans was used to study the fate and persistence of CH₃Br in the stored beans. About 70% of the residues occurred in the shells. Unchanged CH₃Br could not be detected, all the sorbed CH₃Br having reacted with bean constituents, apparently to form ¹⁴C-methylated derivatives and inorganic bromide. No ¹⁴C activity was found in the lipid fraction. Roasting decreased the bound (non-volatile) residues, with corresponding changes in the activities and amounts of free sugars and free and protein amino acids. Roasted nibs and shells showed a two-fold increase in the volatile fraction of the ¹⁴C residue. This fraction may be related to the volatile aroma compounds formed by Maillard-type reactions. (author)
Determination of Pesticide Residues in Cannabis Smoke
Directory of Open Access Journals (Sweden)
Nicholas Sullivan
2013-01-01
Full Text Available The present study was conducted in order to quantify to what extent cannabis consumers may be exposed to pesticide and other chemical residues through inhaled mainstream cannabis smoke. Three different smoking devices were evaluated in order to provide a generalized data set representative of pesticide exposures possible for medical cannabis users. Three different pesticides, bifenthrin, diazinon, and permethrin, along with the plant growth regulator paclobutrazol, which are readily available to cultivators in commercial products, were investigated in the experiment. Smoke generated from the smoking devices was condensed in tandem chilled gas traps and analyzed with gas chromatography-mass spectrometry (GC-MS). Recoveries of residues were as high as 69.5% depending on the device used and the component investigated, suggesting that the potential of pesticide and chemical residue exposures to cannabis users is substantial and may pose a significant toxicological threat in the absence of adequate regulatory frameworks.
Bioenergy from agricultural residues in Ghana
DEFF Research Database (Denmark)
Thomsen, Sune Tjalfe
…and biomethane under Ghanaian conditions. Detailed characterisations of thirteen of the most common agricultural residues in Ghana are presented, enabling estimations of theoretical bioenergy potentials and identifying specific residues for future biorefinery applications. When aiming at residue-based ethanol… to pursue increased implementation of anaerobic digestion in Ghana, as the first bioenergy option, since anaerobic digestion is more flexible than ethanol production with regard to both feedstock and scale of production. If possible, the available manure and municipal liquid waste should be utilised first. A novel model for estimating BMP from compositional data of lignocellulosic biomasses is derived. The model is based on a statistical method not previously used in this area of research, and the best prediction of BMP is: BMP = 347 x_(C+H+R) − 438 x_L + 63 DA, where x_(C+H+R) is the combined content of cellulose…
Zhang, Lin; Yin, Na; Fu, Xiong; Lin, Qiaomin; Wang, Ruchuan
2017-03-08
With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence, without considering the residual energy of the nodes and many other problems. This paper introduces a multi-attribute pheromone ant secure routing algorithm based on reputation value (MPASR). This algorithm can reduce the energy consumption of a network and improve the reliability of the nodes' reputations by filtering nodes with higher coincidence rates and improving the method used to update the nodes' communication behaviors. At the same time, the node reputation value, the residual node energy and the transmission delay are combined to formulate a synthetic pheromone that is used in the formula for calculating the random proportion rule in traditional ant-colony optimization to select the optimal data transmission path. Simulation results show that the improved algorithm can increase both the security of data transmission and the quality of routing service.
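The random-proportional selection rule driven by a synthetic pheromone can be sketched as follows; the multiplicative combination of reputation, residual energy and delay, and the weights alpha and beta, are assumptions, since the abstract states only that the three quantities are combined with the pheromone:

```python
import random

def choose_next(neighbors, alpha=1.0, beta=2.0):
    """Pick the next hop with probability proportional to a synthetic
    pheromone. Each neighbor carries (pheromone, reputation, residual_energy,
    delay); the combination formula below is an illustrative assumption."""
    names = list(neighbors)
    weights = []
    for name in names:
        pheromone, reputation, energy, delay = neighbors[name]
        heuristic = reputation * energy / delay     # assumed desirability term
        weights.append((pheromone ** alpha) * (heuristic ** beta))
    # Random-proportional rule of ant-colony optimization.
    return random.choices(names, weights=weights)[0]

# Hypothetical neighbor table: node -> (pheromone, reputation, residual energy, delay).
neighbors = {"n1": (0.8, 0.9, 0.7, 2.0),
             "n2": (0.3, 0.4, 0.9, 1.0)}
next_hop = choose_next(neighbors)
```

Folding reputation and residual energy into the weight biases the colony away from low-trust and energy-depleted nodes while keeping the stochastic exploration that ant-colony routing relies on.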
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. Fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty conditions for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and the other on a rectangular array, are presented for the multi-phase operations with fault-tolerance considerations. Eigenvectors and singular vectors can easily be obtained using the multi-phase operations. Performance issues are also considered.
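As a minimal illustration of the QR-decomposition least-squares core that the systolic array pipelines, and of residual-based error detection, one might write the following sequential NumPy sketch. This is not the systolic or recursive (RLS) formulation; the tolerance and the check itself are illustrative assumptions.

```python
import numpy as np

def qrd_ls_solve(A, b):
    """Solve the least-squares problem min ||Ax - b|| via QR decomposition,
    the core operation that the QRD RLS systolic array computes recursively."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

def residual_check(A, b, x, tol=1e-6):
    """Residual-method concurrent error detection (illustrative): recompute
    the residual norm of the returned solution. For a consistent,
    well-conditioned system it should be near zero; a fault in the array
    (here, simply a corrupted x) inflates it."""
    r = b - A @ x
    return np.linalg.norm(r) < tol * max(1.0, np.linalg.norm(b))
```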
The Recurrent Algorithm for Interferometric Signals Processing Based on Multi-Cloud Prediction Model
Directory of Open Access Journals (Sweden)
I. P. Gurov
2014-07-01
Full Text Available The paper deals with a modification of the recurrent processing algorithm for a discrete sequence of interferometric signal samples. The algorithm is based on sequential prediction of the reference signal by specifying a set ("cloud") of values for the signal parameter vector via the Monte Carlo method, comparison with the measured signal value, and use of the residual to refine the signal parameter values at each discretization step. The proposed modified algorithm uses the concept of a multi-cloud prediction model: a set of normally distributed clouds is created, with expectation values selected according to the criterion of minimum residual between prediction and observation. The proposed method was tested experimentally on estimation of the initial fringe phase in phase-shifting interferometry. The variance of the estimate of the signal reconstructed from the estimated initial phase does not exceed 2% of the maximum signal value. It has been shown that applying the proposed algorithm makes it possible to avoid 2π-ambiguity and to ensure robust recovery of interference fringe phase of complicated form without a priori information about the fringe phase distribution. Applied to the estimation of interferometric signal parameters, the proposed algorithm improves filter stability with respect to random noise and relaxes the accuracy requirements on a priori filter parameter settings, compared with the conventional (single-cloud) implementation of the sequential Monte Carlo method.
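The single-cloud version of the sequential Monte Carlo idea above can be sketched as follows: a cloud of candidate phase values is weighted by the residual between predicted and measured fringe samples s(k) = cos(ω·k + φ). The fringe frequency ω, the noise level, and the omission of resampling and of the multi-cloud extension are all simplifying assumptions of this sketch.

```python
import math
import random

def estimate_phase(samples, n_particles=500, noise_sigma=0.1, seed=0):
    """Estimate the initial phase phi of s(k) = cos(omega*k + phi) from a
    sample sequence, using a single 'cloud' of candidate phases weighted
    by the prediction residual at each discretization step.
    omega is assumed known here (illustrative value 0.5 rad/sample)."""
    rng = random.Random(seed)
    omega = 0.5
    particles = [rng.uniform(-math.pi, math.pi) for _ in range(n_particles)]
    weights = [1.0] * n_particles
    for k, s in enumerate(samples):
        for i, phi in enumerate(particles):
            resid = s - math.cos(omega * k + phi)  # prediction residual
            weights[i] *= math.exp(-resid * resid / (2.0 * noise_sigma ** 2))
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]     # renormalize each step
    # weighted circular mean of the surviving phase candidates
    c = sum(w * math.cos(p) for w, p in zip(weights, particles))
    s_ = sum(w * math.sin(p) for w, p in zip(weights, particles))
    return math.atan2(s_, c)
```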
DEFF Research Database (Denmark)
Creixell, Pau; Schoof, Erwin M.; Tan, Chris Soon Heng
2012-01-01
It is typically assumed that all amino acid residues are equally likely to mutate or to result from a mutation. Here, by reconstructing ancestral sequences and computing mutational probabilities for all the amino acid residues, we refute this assumption and show extensive inequalities between different residues in terms of their mutational activity. Moreover, we highlight the importance of the genetic code and physico-chemical properties of the amino acid residues as likely causes of these inequalities and uncover serine as a mutational hot spot. Finally, we explore the consequences that these different mutational properties have on phosphorylation site evolution, showing that a higher degree of evolvability exists for phosphorylated threonine and, to a lesser extent, serine in comparison with tyrosine residues. As exemplified by the suppression of serine's mutational activity in phosphorylation sites, our...
Chen, Peng
2013-07-23
Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequence and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from the set of 544 properties in AAindex1, an amino acid index database. Each feature was used to train a classification model, with a novel encoding schema, for hot spot prediction using the IBk algorithm, an extension of the K-nearest neighbor algorithm. Combinations of the individual classifiers were explored, and the classifiers that appeared most frequently in the top-performing combinations were selected. The hot spot predictor was built as an ensemble of these classifiers working in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weighting of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
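The ensemble-of-IBk-classifiers idea above can be sketched in a few lines: one k-nearest-neighbor classifier per feature, combined by majority vote. The scalar features, the toy data layout, and the plain Euclidean distance are placeholders, not AAindex1 properties or the paper's encoding schema.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """IBk-style k-nearest-neighbor vote on a single scalar feature.
    train is a list of (feature_value, label) pairs."""
    nearest = sorted(train, key=lambda fv: abs(fv[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def ensemble_predict(feature_sets, query_features, k=3):
    """Majority vote over per-feature IBk classifiers, mirroring the
    paper's ensemble of feature-specific models (features here are
    illustrative scalars, not physicochemical AAindex1 entries)."""
    votes = [knn_predict(train, q, k)
             for train, q in zip(feature_sets, query_features)]
    return Counter(votes).most_common(1)[0][0]
```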
Analysis and Improvement of Fireworks Algorithm
Xi-Guang Li; Shou-Fei Han; Chang-Qing Gong
2017-01-01
The Fireworks Algorithm (FWA) is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each FWA operator, this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization, mainly through the following strategies. Firstly, using the opp...
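The explosion mechanism that FWA is built on can be sketched as below: better fireworks emit more sparks with smaller amplitude, worse ones fewer sparks with larger amplitude, and the best locations survive to the next generation. All parameter choices here are illustrative; this is the baseline mechanism, not the improved variant the paper proposes.

```python
import random

def fireworks_minimize(f, bounds, n_fireworks=5, n_sparks=20,
                       iterations=50, seed=0):
    """Minimal sketch of the FWA explosion step for minimizing f on an
    interval. Spark count grows and explosion amplitude shrinks with
    firework quality; selection is simple elitism (illustrative choices)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(n_fireworks)]
    for _ in range(iterations):
        fits = [f(x) for x in pop]
        fmax, fmin = max(fits), min(fits)
        span = (fmax - fmin) or 1.0
        sparks = []
        for x, fx in zip(pop, fits):
            quality = (fmax - fx) / span               # 1 = best, 0 = worst
            count = 1 + int(n_sparks * quality)        # more sparks when better
            amp = (hi - lo) * 0.1 * (1.0 - quality) + 1e-3  # smaller amplitude when better
            sparks += [min(hi, max(lo, x + rng.uniform(-amp, amp)))
                       for _ in range(count)]
        pool = pop + sparks
        pool.sort(key=f)
        pop = pool[:n_fireworks]                       # elitist selection
    return pop[0]
```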
A survey of parallel multigrid algorithms
Chan, Tony F.; Tuminaro, Ray S.
1987-01-01
A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
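The typical multigrid building block referred to above is the two-grid cycle: smooth, restrict the residual, solve the coarse problem, prolong and correct, smooth again. The sketch below shows one such cycle for the 1-D Poisson problem -u'' = f with zero Dirichlet boundaries; a full V-cycle would recurse instead of solving the coarse problem exactly, and the sequential implementation says nothing about the parallel mapping issues the survey studies.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point stencil for -u'' on n interior points, spacing h."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother: damps the high-frequency error modes."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

def two_grid(u, f, h):
    """One two-grid cycle for -u'' = f with zero Dirichlet boundaries."""
    n = len(u)                       # interior points; n odd
    A = poisson_matrix(n, h)
    u = jacobi(A, u, f)              # pre-smoothing
    r = f - A @ u                    # fine-grid residual
    rc = r[1::2]                     # restriction by injection
    Ac = poisson_matrix(len(rc), 2 * h)
    ec = np.linalg.solve(Ac, rc)     # exact coarse solve (V-cycle recurses here)
    e = np.zeros_like(u)
    e[1::2] = ec                     # prolongation: copy coarse values...
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])  # ...and interpolate midpoints
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    return jacobi(A, u + e, f)       # correct, then post-smooth
```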
40 CFR 180.564 - Indoxacarb; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Indoxacarb; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.564 Indoxacarb; tolerances for residues. (a) General. Tolerances are established for residues of...
Feeding potential of summer grain crop residues for woolled sheep ...
African Journals Online (AJOL)
greater amounts than indicated in Table 2. Percentage utilization of residues: using the values obtained from quadrat sampling of the residues before and after grazing, the percentage utilization of residue components could be estimated. The results are shown in Table 3 (percentage utilization of residues).
40 CFR 180.176 - Mancozeb; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Mancozeb; tolerances for residues. 180... PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.176 Mancozeb; tolerances for residues. (a) General. Tolerances for residues of a fungicide which is a...
40 CFR 180.324 - Bromoxynil; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Bromoxynil; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.324 Bromoxynil; tolerances for residues. (a) General. (1) Tolerances are established for residues...
40 CFR 180.314 - Triallate; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Triallate; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.314 Triallate; tolerances for residues. (a) General. Tolerances are established for residues of...
40 CFR 180.210 - Bromacil; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Bromacil; tolerances for residues. 180... PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.210 Bromacil; tolerances for residues. (a) General. Tolerances are established for residues of the herbicide...
40 CFR 279.47 - Management of residues.
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Management of residues. 279.47 Section... Management of residues. Transporters who generate residues from the storage or transport of used oil must manage the residues as specified in § 279.10(e). ...
40 CFR 180.298 - Methidathion; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Methidathion; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.298 Methidathion; tolerances for residues. (a) General. Tolerances are established for residues of...
40 CFR 180.299 - Dicrotophos; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Dicrotophos; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.299 Dicrotophos; tolerances for residues. (a) General. Tolerances are established for residues of...
40 CFR 180.227 - Dicamba; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Dicamba; tolerances for residues. 180... PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.227 Dicamba; tolerances for residues. (a) General. (1) Tolerances are established for the combined residues of...
40 CFR 180.209 - Terbacil; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Terbacil; tolerances for residues. 180... PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.209 Terbacil; tolerances for residues. (a) General. Tolerances are established for combined residues of the...
40 CFR 180.249 - Alachlor; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Alachlor; tolerances for residues. 180... PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.249 Alachlor; tolerances for residues. (a) General. Tolerances are established for combined residues of...
40 CFR 180.128 - Pyrethrins; tolerances for residues.
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Pyrethrins; tolerances for residues...) PESTICIDE PROGRAMS TOLERANCES AND EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Specific Tolerances § 180.128 Pyrethrins; tolerances for residues. (a) General. (1) Tolerances for residues of the...
40 CFR 279.67 - Management of residues.
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Management of residues. 279.67 Section... for Energy Recovery § 279.67 Management of residues. Burners who generate residues from the storage or burning of used oil must manage the residues as specified in § 279.10(e). ...