WorldWideScience

Sample records for em algorithm oslem

  1. EM Algorithm and Stochastic Control in Economics

    OpenAIRE

    Kou, Steven; Peng, Xianhua; Xu, Xingbo

    2016-01-01

    Generalising the idea of the classical EM algorithm that is widely used for computing maximum likelihood estimates, we propose an EM-Control (EM-C) algorithm for solving multi-period finite time horizon stochastic control problems. The new algorithm sequentially updates the control policies in each time period using Monte Carlo simulation in a forward-backward manner; in other words, the algorithm goes forward in simulation and backward in optimization in each iteration. Similar to the EM alg...

  2. Problems with EM Algorithms for ML Factor Analysis.

    Science.gov (United States)

    Bentler, P. M.; Tanaka, Jeffrey S.

    1983-01-01

    Rubin and Thayer recently presented equations to implement maximum likelihood estimation in factor analysis via the EM algorithm. It is argued here that the advantages of using the EM algorithm remain to be demonstrated. (Author/JKS)

  3. The EM Algorithm for Latent Class Analysis with Equality Constraints.

    Science.gov (United States)

    Mooijaart, Ab; van der Heijden, Peter G. M.

    1992-01-01

    It is shown that it is not easy to apply the EM algorithm to latent class models in the general case with equality constraints because a nonlinear equation has to be solved. A simpler condition is given in which the EM algorithm can be easily applied. (SLD)

  4. A Trust Region Aggressive Space Mapping Algorithm for EM

    DEFF Research Database (Denmark)

Bakr, M.; Bandler, J. W.; Biernacki, R.

    1998-01-01

A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse... This suggested step exploits all the available EM simulations for improving the uniqueness of parameter extraction. The new algorithm was successfully used to design a number of microwave circuits. Examples include the EM optimization of a double-folded stub filter and of a high-temperature superconducting (HTS) filter using Sonnet's em. The proposed algorithm was also used to design two-section, three-section, and seven-section waveguide transformers exploiting Maxwell Eminence. The design of a three-section waveguide transformer with rounded corners was carried out using HP HFSS. We show how the mapping can...

  5. The EM Algorithm and the Rise of Computational Biology

    OpenAIRE

    Fan, Xiaodan; Yuan, Yuan; Liu, Jun S.

    2011-01-01

In the past decade computational biology has grown from a cottage industry with a handful of researchers to an attractive interdisciplinary field, catching the attention and imagination of many quantitatively-minded scientists. Of interest to us is the key role played by the EM algorithm during this transformation. We survey the use of the EM algorithm in a few important computational biology problems surrounding the "central dogma" of molecular biology: from DNA to RNA and then to proteins...

  6. Mixture densities, maximum likelihood, and the EM algorithm

    Science.gov (United States)

    Redner, R. A.; Walker, H. F.

    1982-01-01

The problem of estimating the parameters which determine a mixture density is reviewed, as well as maximum likelihood estimation for it. A particular iterative procedure for numerically approximating maximum likelihood estimates for mixture density problems is considered. This EM algorithm is a specialization to the mixture density context of a general algorithm of the same name used to approximate maximum likelihood estimates for incomplete data problems. The formulation and theoretical and practical properties of the EM algorithm for mixture densities are discussed, focusing in particular on mixtures of densities from exponential families.
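The specialization described above is easy to make concrete. The following is a minimal two-component 1-D Gaussian-mixture EM in plain Python, an illustrative sketch only (the initialization, iteration count, and simulated data are arbitrary choices, not anything from the paper):

```python
import math
import random

def em_gaussian_mixture(data, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
    # Crude initialisation from the data spread.
    mu1, mu2 = min(data), max(data)
    mean = sum(data) / len(data)
    var1 = var2 = sum((x - mean) ** 2 for x in data) / len(data)
    pi1 = 0.5
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = pi1 * math.exp(-(x - mu1) ** 2 / (2 * var1)) / math.sqrt(2 * math.pi * var1)
            p2 = (1 - pi1) * math.exp(-(x - mu2) ** 2 / (2 * var2)) / math.sqrt(2 * math.pi * var2)
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted maximum likelihood updates.
        n1 = sum(resp)
        n2 = len(data) - n1
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
        var1 = max(sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1, 1e-6)
        var2 = max(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2, 1e-6)
        pi1 = n1 / len(data)
    return mu1, mu2, pi1

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + [random.gauss(5.0, 1.0) for _ in range(300)]
m1, m2, w = em_gaussian_mixture(data)
print(sorted([round(m1, 1), round(m2, 1)]))  # estimated means, near 0.0 and 5.0
```

Each iteration provably does not decrease the mixture log-likelihood, which is the defining property of the EM family the abstract refers to.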

  7. Application of the EM algorithm to radiographic images.

    Science.gov (United States)

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.

  8. Interpolation Algorithm for Fast Evaluation of EM Coupling between Wires

    NARCIS (Netherlands)

    Marasini, C.; Lepelaars, E.S.A.M.; Zwamborn, A.P.M.

    2009-01-01

Efficient and accurate evaluation of the EM field radiated by a current flowing along a wire is essential to solve the electromagnetic coupling between arbitrarily oriented wires. In this paper, a numerically efficient algorithm for the evaluation of coupling is presented. The currents along the wires

  9. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. A recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved

  10. Noise properties of the EM algorithm. Pt. 1

    International Nuclear Information System (INIS)

    Barrett, H.H.; Wilson, D.W.; Tsui, B.M.W.

    1994-01-01

    The expectation-maximisation (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. (author)
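The emission-tomography setting in this record uses the familiar multiplicative ML-EM update, lam_j <- lam_j * (sum_i a_ij * y_i / (A lam)_i) / (sum_i a_ij) for Poisson data y. A tiny sketch on a hypothetical 3x3 system matrix with noise-free data (chosen only to show the iteration, not taken from the paper) looks like this:

```python
def mlem(a, y, n_iter=2000):
    """Basic ML-EM iteration for Poisson data y ~ Poisson(A @ lam)."""
    m, n = len(a), len(a[0])
    lam = [1.0] * n                                   # uniform initial image
    sens = [sum(a[i][j] for i in range(m)) for j in range(n)]  # column sums
    for _ in range(n_iter):
        proj = [sum(a[i][j] * lam[j] for j in range(n)) for i in range(m)]
        # Multiplicative update: back-project the measured/predicted ratio.
        lam = [lam[j] * sum(a[i][j] * y[i] / proj[i] for i in range(m) if proj[i] > 0) / sens[j]
               for j in range(n)]
    return lam

# Noise-free sanity check: with consistent data the iterates approach the true activity.
a = [[1.0, 0.2, 0.1],
     [0.1, 1.0, 0.2],
     [0.2, 0.1, 1.0]]          # hypothetical, well-conditioned system matrix
lam_true = [2.0, 4.0, 1.0]
y = [sum(a[i][j] * lam_true[j] for j in range(3)) for i in range(3)]
est = mlem(a, y)
print([round(v, 2) for v in est])
```

With noisy Poisson data instead of the exact projections used here, the iterates exhibit exactly the variance growth with iteration number that the abstract analyses.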

  11. Marginal Maximum Likelihood Estimation of Item Parameters: Application of an EM Algorithm.

    Science.gov (United States)

    Bock, R. Darrell; Aitkin, Murray

    1981-01-01

    The practicality of using the EM algorithm for maximum likelihood estimation of item parameters in the marginal distribution is presented. The EM procedure is shown to apply to general item-response models. (Author/JKS)

  12. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for a linear polynomial model has better performance than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  13. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  14. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  15. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  16. EM algorithm for one-shot device testing with competing risks under exponential distribution

    International Nuclear Information System (INIS)

    Balakrishnan, N.; So, H.Y.; Ling, M.H.

    2015-01-01

This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into a one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm and then compare the obtained results with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to clinical data (ED01) to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • The EM algorithm is developed for the determination of the MLEs. • The estimation of lifetimes under normal operating conditions is presented. • The EM algorithm improves the convergence rate

  17. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution back projection algorithms. However, the PET image reconstruction based on the EM algorithm is computationally burdensome for today's single processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm on a linear array topology using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PE's) has been implemented. The performance of the EM algorithm on a 386/387 machine, IBM 6000 RISC workstation, and on the linear array system is discussed and compared. The results show that the computational speed performance of a linear array using 8 DSP chips as PE's executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable with a larger number of PE's. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance

  18. State-space models - from the EM algorithm to a gradient approach

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue

    2007-01-01

Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...

  19. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering

  20. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, CD 4360 mainframe, and on the EH system. The results show that the computational speed performance of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable with a larger number of PEs

  1. The Relationship between the Bock-Aitkin Procedure and the EM Algorithm for IRT Model Estimation.

    Science.gov (United States)

    Hsu, Yaowen; Ackerman, Terry A.; Fan, Meichu

    It has previously been shown that the Bock-Aitkin procedure (R. Bock and M. Aitkin, 1981) is an instance of the EM algorithm when trying to find the marginal maximum likelihood estimate for a discrete latent ability variable (latent trait). In this paper, it is shown that the Bock-Aitkin procedure is a numerical implementation of the EM algorithm…

  2. Fitting mixtures of Erlangs to censored and truncated data using the EM algorithm

    NARCIS (Netherlands)

    Verbelen, R.; Gong, L.; Antonio, K.; Badescu, A.; Lin, S.

    2015-01-01

    We discuss how to fit mixtures of Erlangs to censored and truncated data by iteratively using the EM algorithm. Mixtures of Erlangs form a very versatile, yet analytically tractable, class of distributions making them suitable for loss modeling purposes. The effectiveness of the proposed algorithm

  3. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    Directory of Open Access Journals (Sweden)

    Van Dyk David A

    2000-03-01

This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case.

  4. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data; whereas, the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, which is obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice crosstalk are not found with parallel beam and fan beam geometries.

  5. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    Science.gov (United States)

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
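The general idea of penalizing unbalanced class sizes in the assignment step can be sketched in a few lines. The toy below is 1-D and uses a simple additive size penalty; it illustrates the flavour of a constrained K-means objective, not the paper's actual adaptive constraint (the penalty weight `alpha`, the quantile initialization, and the test data are all invented for the sketch):

```python
import random

def size_penalized_kmeans(points, k, alpha=0.01, n_iter=30):
    """Toy 1-D K-means whose assignment step adds a penalty proportional to the
    current cluster size, discouraging highly unbalanced classes.
    Illustrative only; not the objective function of the cited paper."""
    pts = sorted(points)
    # Deterministic quantile initialization of the k centers.
    centers = [pts[(2 * c + 1) * len(pts) // (2 * k)] for c in range(k)]
    labels = [0] * len(points)
    for _ in range(n_iter):
        sizes = [labels.count(c) for c in range(k)]
        for idx, x in enumerate(points):
            # Squared distance plus a size-dependent penalty on each candidate cluster.
            costs = [(x - centers[c]) ** 2 + alpha * sizes[c] for c in range(k)]
            labels[idx] = costs.index(min(costs))
        for c in range(k):
            members = [x for x, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

random.seed(4)
pts = [random.gauss(0.0, 0.5) for _ in range(100)] + [random.gauss(5.0, 0.5) for _ in range(100)]
centers, labels = size_penalized_kmeans(pts, 2)
print(sorted(round(c, 1) for c in centers))
```

With a small `alpha` the result matches plain K-means on well-separated data; the penalty only becomes decisive when noise would otherwise drain one class.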

  6. A Hybrid Aggressive Space Mapping Algorithm for EM Optimization

    DEFF Research Database (Denmark)

    Bakr, M.; Bandler, J. W.; Georgieva, N.

    1999-01-01

We present a novel Hybrid Aggressive Space Mapping (HASM) optimization algorithm. HASM is a hybrid approach exploiting both the Trust Region Aggressive Space Mapping (TRASM) algorithm and direct optimization. It does not assume that the final space-mapped design is the true optimal design...

  7. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali

    2013-03-01

In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  8. EM algorithm applied for estimating non-stationary region boundaries using electrical impedance tomography

    Science.gov (United States)

    Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.

    2010-04-01

EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters due to inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method and the performance is compared with the extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.

  9. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
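The AIC criterion itself, AIC = 2k - 2 ln L evaluated at the maximized likelihood, is easy to demonstrate. The sketch below compares two non-nested, closed-form ML fits (Gaussian vs. Laplace) rather than the paper's EM-fitted Gaussian mixture, purely to show the criterion at work; the data and seed are invented for the illustration:

```python
import math
import random

def aic_gaussian(data):
    """AIC of the ML-fitted Gaussian (2 parameters: mean and variance)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1.0)
    return 2 * 2 - 2 * loglik

def aic_laplace(data):
    """AIC of the ML-fitted Laplace (2 parameters: median and mean absolute deviation)."""
    n = len(data)
    s = sorted(data)
    med = 0.5 * (s[n // 2 - 1] + s[n // 2]) if n % 2 == 0 else s[n // 2]
    b = sum(abs(x - med) for x in data) / n
    loglik = -n * (math.log(2 * b) + 1.0)
    return 2 * 2 - 2 * loglik

random.seed(5)
# Laplace(0, 1) draws, generated as a difference of two unit exponentials.
data = [random.expovariate(1.0) - random.expovariate(1.0) for _ in range(500)]
print(aic_laplace(data) < aic_gaussian(data))  # the heavier-tailed model wins here
```

The lower AIC identifies the better-fitting of the two non-nested candidates; in the paper's setting the mixture log-likelihood term would come from the EM fit instead of a closed form.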

  10. Robust Mean Change-Point Detecting through Laplace Linear Regression Using EM Algorithm

    Directory of Open Access Journals (Sweden)

    Fengkai Yang

    2014-01-01

normal distribution, we developed the expectation maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the distributions of errors and is effective in estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.

  11. A Hybrid Aggressive Space Mapping Algorithm for EM Optimization

    DEFF Research Database (Denmark)

    Bakr, Mohamed H.; Bandler, John W.; Georgieva, N.

    1999-01-01

    in a smooth way. The uniqueness of the extraction step is improved by utilizing a good starting point. The algorithm does not assume that the final space-mapped design is the true optimal design and is robust against severe misalignment between the coarse and fine models. The examples include a seven...

  12. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    Science.gov (United States)

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…

  13. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2016-01-06

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and it is intended to provide a starting point for the second phase which is the Monte Carlo EM Algorithm.

  14. The Statistical Analysis of General Processing Tree Models with the EM Algorithm.

    Science.gov (United States)

    Hu, Xiangen; Batchelder, William H.

    1994-01-01

The statistical analysis of processing tree models is advanced by showing how parameter estimation and hypothesis testing, based on the likelihood functions, can be accomplished by adapting the expectation-maximization (EM) algorithm. The adaptation makes it easy to program a personal computer to accomplish the stages of statistical…

  15. A Note on Parameter Estimation for Lazarsfeld's Latent Class Model Using the EM Algorithm.

    Science.gov (United States)

    Everitt, B. S.

    1984-01-01

    Latent class analysis is formulated as a problem of estimating parameters in a finite mixture distribution. The EM algorithm is used to find the maximum likelihood estimates, and the case of categorical variables with more than two categories is considered. (Author)

  16. Estimation of Item Response Models Using the EM Algorithm for Finite Mixtures.

    Science.gov (United States)

    Woodruff, David J.; Hanson, Bradley A.

This paper presents a detailed description of maximum likelihood parameter estimation for item response models using the general EM algorithm. In this paper the models are specified using a univariate discrete latent ability variable. When the latent ability variable is discrete the distribution of the observed item responses is a finite mixture, and the EM…

  17. The Rasch Poisson Counts Model for Incomplete Data: An Application of the EM Algorithm.

    Science.gov (United States)

    Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson counts model is a latent trait model for the situation in which "K" tests are administered to "N" examinees and the test score is a count (repeated number of some event). A mixed model is presented that applies the EM algorithm and that can allow for missing data. (SLD)

  18. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  19. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the process of E-M algorithm iteration, the conditional probability distribution plays a very important role to achieve high quality image. This present work proposes an alternative methodology for the generation of the conditional probability distribution associated to the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and with the application of the reciprocity theorem. (author)

  20. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  1. Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography

    Directory of Open Access Journals (Sweden)

    Kiyoko Tateishi

    2017-01-01

    Full Text Available The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for iterative image reconstruction (IIR) and performs well on the inverse problem, formulated as cross-entropy minimization, in computed tomography. To accelerate the convergence rate of ML-EM, the ordered-subsets expectation-maximization (OS-EM) algorithm with a power factor is effective. In this paper, we propose a continuous analog of the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields governed by a cyclic switching process. A numerical discretization of the differential equation using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an iterative formula exactly equivalent to the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system are superior to those of the discretization methods. We clarify how important it is that the discretization method approximate the solution of the CIR well in order to design a better IIR method.
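
A minimal dense-matrix sketch of the power-accelerated OS-EM update described above. The system matrix `A`, the subset partition, and the exponent `power` are assumptions for the illustration, not the paper's CIR formulation:

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10, power=1.0, eps=1e-12):
    """Ordered-subsets EM for y ~= A @ x, with an acceleration exponent.

    A       : (m, n) nonnegative system matrix
    y       : (m,) measured projections
    subsets : list of index arrays partitioning range(m)
    power   : exponent applied to the multiplicative correction factor
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:                   # cycle through the ordered subsets
            As = A[s]
            sens = As.sum(axis=0) + eps     # subset sensitivity (column sums)
            ratio = y[s] / (As @ x + eps)   # measured / forward-projected
            factor = (As.T @ ratio) / sens  # backprojected correction
            x *= factor ** power            # power factor accelerates convergence
    return x
```

With `power = 1` this reduces to plain OS-EM; exponents slightly above 1 accelerate convergence at some risk of instability.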

  2. A cross-validation procedure for stopping the EM algorithm and deconvolution of neutron depth profiling spectra

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J. (National Inst. of Standards and Technology, Statistical Engineering Div., Gaithersburg, MD (US))

    1991-02-01

    The iterative EM algorithm is used to deconvolve neutron depth profiling spectra. Because of statistical noise in the data, artifacts in the estimated particle emission rate profile appear after too many iterations of the EM algorithm. To avoid artifacts, the EM algorithm is stopped using a cross-validation procedure. The data are split into two independent halves. The EM algorithm is applied to one half of the data to get an estimate of the emission rates. The algorithm is stopped when the conditional likelihood of the other half of the data passes through its maximum. The roles of the two halves of the data are then switched to get a second estimate of the emission rates. The two estimates are then averaged.
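
The stopping procedure described above can be sketched in a few lines. This is a hedged illustration only: the binomial thinning used to split the counts, the normalized convolution kernel, and all names are assumptions for the example, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)

def em_step(x, K, y, eps=1e-12):
    # One ML-EM (Richardson-Lucy) update for Poisson counts y ~ Poisson(K @ x)
    return x * (K.T @ (y / (K @ x + eps))) / (K.sum(axis=0) + eps)

def cv_stopped_em(K, y, max_iter=500):
    """EM deconvolution stopped by cross-validation.

    The counts are split into two independent Poisson halves by binomial
    thinning; EM runs on one half and stops when the Poisson log-likelihood
    of the held-out half passes its maximum.  The roles are then swapped,
    and the two half-data estimates are combined.
    """
    y = np.asarray(y)
    y1 = rng.binomial(y.astype(int), 0.5)
    estimates = []
    for train, held_out in [(y1, y - y1), (y - y1, y1)]:
        x = np.ones(K.shape[1])
        best_x, best_ll = x, -np.inf
        for _ in range(max_iter):
            x = em_step(x, K, train)
            mean = K @ x + 1e-12
            ll = np.sum(held_out * np.log(mean) - mean)  # Poisson log-likelihood
            if ll > best_ll:
                best_ll, best_x = ll, x.copy()
            else:
                break  # held-out likelihood passed its maximum: stop
        estimates.append(best_x)
    # summing the two half-count estimates restores the full-count scale
    # (equivalent to doubling the average of the two)
    return estimates[0] + estimates[1]
```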

  3. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d − 1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d − 1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
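
For concreteness, the problem's output can be specified with a brute-force reference implementation. The O(n^2 log k) sketch below merely defines what the k maximal sums are; it is not the optimal O(n + k) algorithm of the paper:

```python
import heapq

def k_maximal_sums(a, k):
    """Return the k largest sums over all contiguous sub-vectors of a,
    in decreasing order (fewer if a has fewer than k sub-vectors)."""
    heap = []  # min-heap holding the k best sums seen so far
    n = len(a)
    for i in range(n):
        s = 0
        for j in range(i, n):
            s += a[j]                  # sum of the sub-vector a[i..j]
            if len(heap) < k:
                heapq.heappush(heap, s)
            elif s > heap[0]:
                heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)
```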

  4. Mean field theory of EM algorithm for Bayesian grey scale image restoration

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Tanaka, Kazuyuki

    2003-01-01

    The EM algorithm for Bayesian grey scale image restoration is investigated in the framework of mean field theory. Our model system is identical to the infinite-range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate exactly both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we analytically derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate. The time evolutions of the hyper-parameters and of the so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics.

  5. Application and performance of an ML-EM algorithm in NEXT

    Science.gov (United States)

    Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2017-08-01

    The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.

  6. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  7. Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs

    Science.gov (United States)

    Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis

    2012-10-01

    There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all levels of parallelism available, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPUs (AMD, Intel, IBM) and vector unit types (2-8 way, single/double precision). Details on the algorithm will be given, including the vectorization strategy and memory access. We will also present performance results for the various hardware variants analyzed, focusing on floating point efficiency. Finally, we will discuss the applicability of this type of algorithm to EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342, (2002) [2] V. K. Decyk, T. V. Singh; Comput. Phys. Commun. 182, 641-648 (2011)

  8. Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm

    Directory of Open Access Journals (Sweden)

    Gideon Kutz

    2010-01-01

    Full Text Available Maximum-likelihood semi-blind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter in which each filter tap is modeled as a linear combination of basis functions. This channel description is then integrated in an iterative scheme based on the expectation-maximization (EM) principle that converges to an estimate of the channel description vector. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted at reducing the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them), even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.

  9. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  10. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore...

  11. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is the statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The time delays for computing the projection, the errors between measured and estimated data, and the backprojection in each iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold, caused by slowdowns in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration due to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.

  12. A multicenter evaluation of seven commercial ML-EM algorithms for SPECT image reconstruction using simulation data

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo

    2003-01-01

    The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. The actual physical performance may differ depending on the manufacturer and model because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different implementations of the ML-EM algorithm using simple simulation data. Seven ML-EM programs were used: Genie (GE), e.soft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and Windows-PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM by changing the number of iterations from 1 to 45 for each algorithm. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the difference in the FWHM value was up to 1.5 pixels, and that in FWTM no less than 2.0 pixels. The total counts of the reconstructed images in the initial few iterations were larger or smaller than the converged value, depending on the initial values. Our results for even the simplest simulation data suggest that each ML-EM implementation produces a different image. We should keep in mind which algorithm is being used and its computational details when physical and clinical usefulness are compared. (author)

  13. A fast EM algorithm for BayesA-like prediction of genomic breeding values.

    Directory of Open Access Journals (Sweden)

    Xiaochen Sun

    Full Text Available Prediction accuracies of estimated breeding values for economically important traits are expected to benefit from genomic information. Single nucleotide polymorphism (SNP panels used in genomic prediction are increasing in density, but the Markov Chain Monte Carlo (MCMC estimation of SNP effects can be quite time consuming or slow to converge when a large number of SNPs are fitted simultaneously in a linear mixed model. Here we present an EM algorithm (termed "fastBayesA" without MCMC. This fastBayesA approach treats the variances of SNP effects as missing data and uses a joint posterior mode of effects compared to the commonly used BayesA which bases predictions on posterior means of effects. In each EM iteration, SNP effects are predicted as a linear combination of best linear unbiased predictions of breeding values from a mixed linear animal model that incorporates a weighted marker-based realized relationship matrix. Method fastBayesA converges after a few iterations to a joint posterior mode of SNP effects under the BayesA model. When applied to simulated quantitative traits with a range of genetic architectures, fastBayesA is shown to predict GEBV as accurately as BayesA but with less computing effort per SNP than BayesA. Method fastBayesA can be used as a computationally efficient substitute for BayesA, especially when an increasing number of markers bring unreasonable computational burden or slow convergence to MCMC approaches.

  14. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    International Nuclear Information System (INIS)

    Kim, Dae Won

    2005-01-01

    Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent disasters such as contamination of the water or an explosion. A model-based deconvolution is described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian for fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  15. Voice Activity Detection Based on High Order Statistics and Online EM Algorithm

    Science.gov (United States)

    Cournapeau, David; Kawahara, Tatsuya

    A new online, unsupervised voice activity detection (VAD) method is proposed. The method is based on a feature derived from high-order statistics (HOS), enhanced by a second metric based on normalized autocorrelation peaks to improve its robustness to non-Gaussian noises. This feature is also oriented toward discriminating between close-talk and far-field speech, thus providing a VAD method in the context of human-to-human interaction that is independent of the energy level. The classification is done by an online variant of the Expectation-Maximization (EM) algorithm, to track and adapt to noise variations in the speech signal. Performance of the proposed method is evaluated on in-house data and on CENSREC-1-C, a publicly available database used for VAD in the context of automatic speech recognition (ASR). On both test sets, the proposed method outperforms a simple energy-based algorithm and is shown to be more robust against changes in speech sparsity, SNR variability, and noise type.
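
An online EM classifier of the kind described can be sketched for a scalar feature with a two-component Gaussian mixture. This is an illustrative stepwise-EM sketch, not the authors' algorithm: the step-size schedule and all names are assumptions for the example.

```python
import numpy as np

class OnlineEMVAD:
    """Two-component 1-D Gaussian mixture fitted by online (stepwise) EM.

    Sufficient statistics are updated sample-by-sample with a decaying
    step size eta_n = (n + 10)^-0.6 (an assumed schedule), so the model
    can track slow changes in the noise without storing past frames.
    """
    def __init__(self, mu=(0.0, 1.0), var=(1.0, 1.0), w=(0.5, 0.5)):
        self.w = np.array(w, float)
        self.mu = np.array(mu, float)
        self.var = np.array(var, float)
        # running sufficient statistics, initialised consistently
        self.s_w = self.w.copy()
        self.s_x = self.w * self.mu
        self.s_xx = self.w * (self.var + self.mu ** 2)
        self.n = 0

    def _resp(self, x):
        # E-step: posterior responsibility of each component for sample x
        p = self.w * np.exp(-0.5 * (x - self.mu) ** 2 / self.var) \
            / np.sqrt(2 * np.pi * self.var)
        p = p + 1e-300  # guard against underflow far from both means
        return p / p.sum()

    def update(self, x):
        self.n += 1
        eta = (self.n + 10) ** -0.6          # decaying step size
        r = self._resp(x)
        # stochastic-approximation update of the sufficient statistics
        self.s_w += eta * (r - self.s_w)
        self.s_x += eta * (r * x - self.s_x)
        self.s_xx += eta * (r * x * x - self.s_xx)
        # M-step: recover the mixture parameters from the statistics
        self.w = self.s_w / self.s_w.sum()
        self.mu = self.s_x / self.s_w
        self.var = np.maximum(self.s_xx / self.s_w - self.mu ** 2, 1e-6)

    def is_speech(self, x):
        # the higher-mean component is taken to model active speech
        return bool(self._resp(x)[np.argmax(self.mu)] > 0.5)
```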

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment (e.g. x := sin(theta)), Read (e.g. Read A,B,C), and Print (e.g. Print x,y,z).]

  17. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  18. The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.

    Science.gov (United States)

    Thomas, Neal

    Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…

  19. [Comparison of OS-EM reconstruction algorithms among different processors using a digital phantom dedicated for SPECT data evaluation].

    Science.gov (United States)

    Matsutomo, Norikazu; Furuya, Hiroaki; Yamao, Taichirou; Nishiyama, Norimi; Suruga, Takefumi; Sugino, Shuichi; Fujihara, Shuuji; Yoshioka, Ryuji

    2008-11-20

    In the OS-EM method, reconstructed images may differ among image processors because the implementation details are left to each vendor. The purpose of this study was to evaluate the differences in OS-EM algorithms among four different image processors using a digital phantom dedicated to SPECT data evaluation. The image processors used were GMS-5500A/PI (Toshiba), GENIE Xeleris (GE), e.soft (Siemens), and Odyssey FX (Shimadzu). Multiple images were reconstructed with OS-EM while varying the number of subsets and iterations. A region of interest (ROI) was placed on each image. The average counts, contrast, root mean square uncertainty (%RMSU), and normalized mean squared error (NMSE) of each ROI were calculated and compared among the image processors. There was no significant difference in contrast among the algorithms. However, the average counts and %RMSU differed significantly among algorithms as the number of updates increased. In addition, the minima of the NMSE also differed among algorithms. In the OS-EM method, careful evaluation is necessary when using multiple image processors in research studies on the standardization of nuclear medicine imaging or in clinical applications.
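
The ROI metrics used in this comparison can be written down under their common definitions. A small sketch, with the caveat that each vendor's exact formulas may differ:

```python
import numpy as np

def nmse(recon, reference):
    """Normalized mean squared error between a reconstruction and a reference:
    sum((recon - reference)^2) / sum(reference^2)."""
    ref = np.asarray(reference, float)
    rec = np.asarray(recon, float)
    return float(np.sum((rec - ref) ** 2) / np.sum(ref ** 2))

def percent_rmsu(roi):
    """Root-mean-square uncertainty of an ROI, as a percentage of its mean
    (i.e. 100 * standard deviation / mean)."""
    roi = np.asarray(roi, float)
    return float(100.0 * roi.std() / roi.mean())
```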

  20. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make the EM algorithm feasible, they do not guarantee an optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes the performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, have shown that the proposed parallel EM algorithm could improve performance substantially over those using unoptimized data replication

  1. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  2. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  3. Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.

    Science.gov (United States)

    Harwell, Michael R.; And Others

    1988-01-01

    The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)

  4. A system for the 3D reconstruction of retracted-septa PET data using the EM algorithm

    International Nuclear Information System (INIS)

    Johnson, C.A.; Yan, Y.; Carson, R.E.; Martino, R.L.; Daube-Witherspoon, M.E.

    1995-01-01

    The authors have implemented the EM reconstruction algorithm for volume acquisition from current generation retracted-septa PET scanners. Although the software was designed for a GE Advance scanner, it is easily adaptable to other 3D scanners. The reconstruction software was written for an Intel iPSC/860 parallel computer with 128 compute nodes. Running on 32 processors, the algorithm requires approximately 55 minutes per iteration to reconstruct a 128 x 128 x 35 image. No projection data compression schemes or other approximations were used in the implementation. Extensive use of EM system matrix (C_ij) symmetries (including the 8-fold in-plane symmetries, 2-fold axial symmetries, and axial parallel line redundancies) reduces the storage cost by a factor of 188. The parallel algorithm operates on distributed projection data which are decomposed by base-symmetry angles. Symmetry operators copy and index the C_ij chord to the form required for the particular symmetry. The use of asynchronous reads, lookup tables, and optimized image indexing improves computational performance.

  5. Description and comparison of algorithms for correcting anisotropic magnification in cryo-EM images.

    Science.gov (United States)

    Zhao, Jianhua; Brubaker, Marcus A; Benlekbir, Samir; Rubinstein, John L

    2015-11-01

    Single particle electron cryomicroscopy (cryo-EM) allows for structures of proteins and protein complexes to be determined from images of non-crystalline specimens. Cryo-EM data analysis requires electron microscope images of randomly oriented ice-embedded protein particles to be rotated and translated to allow for coherent averaging when calculating three-dimensional (3D) structures. Rotation of 2D images is usually done with the assumption that the magnification of the electron microscope is the same in all directions. However, due to electron optical aberrations, this condition is not met with some electron microscopes when used with the settings necessary for cryo-EM with a direct detector device (DDD) camera. Correction of images by linear interpolation in real space has allowed high-resolution structures to be calculated from cryo-EM images for symmetric particles. Here we describe and compare a simple real space method, a simple Fourier space method, and a somewhat more sophisticated Fourier space method to correct images for a measured anisotropy in magnification. Further, anisotropic magnification causes contrast transfer function (CTF) parameters estimated from image power spectra to have an apparent systematic astigmatism. To address this problem we develop an approach to adjust CTF parameters measured from distorted images so that they can be used with corrected images. The effect of anisotropic magnification on CTF parameters provides a simple way of detecting magnification anisotropy in cryo-EM datasets. Copyright © 2015 Elsevier Inc. All rights reserved.
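
In the spirit of the "simple real space method" mentioned above, anisotropic magnification can be undone by bilinear resampling. A sketch only: the distortion model (scaling by two factors along perpendicular axes rotated by a given angle about the image centre) and all parameter names are assumptions for the example, not production cryo-EM code.

```python
import numpy as np

def correct_anisotropy(img, scale_major, scale_minor, angle_deg):
    """Real-space (bilinear) correction of anisotropic magnification.

    Each corrected output pixel is sampled from the distorted image at
    the location the distortion would have mapped it to.
    """
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    A = R @ np.diag([scale_major, scale_minor]) @ R.T  # distortion matrix
    h, w = img.shape
    c = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy.ravel(), xx.ravel()], axis=0) - c[:, None]
    src = A @ coords + c[:, None]       # where each output pixel came from
    # bilinear interpolation with clamping at the image border
    y0 = np.clip(np.floor(src[0]).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(src[1]).astype(int), 0, w - 2)
    dy = np.clip(src[0] - y0, 0.0, 1.0)
    dx = np.clip(src[1] - x0, 0.0, 1.0)
    out = (img[y0, x0] * (1 - dy) * (1 - dx)
           + img[y0 + 1, x0] * dy * (1 - dx)
           + img[y0, x0 + 1] * (1 - dy) * dx
           + img[y0 + 1, x0 + 1] * dy * dx)
    return out.reshape(h, w)
```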

  6. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  7. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the...... k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m 2·n + k) time, where the input is an m ×n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n 2d − 1 + k) time. The space usage of all...... the algorithms can be reduced to O(n d − 1 + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space....

  8. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
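
The flight SDQ logic itself is not given here, but the generic idea of disqualifying outliers against the consensus and then mid-value selecting from the survivors can be sketched. Everything in this example, including the threshold and names, is an invented illustration, not the SLS algorithm.

```python
import numpy as np

def down_select_rate(measurements, valid, max_spread=0.5):
    """Illustrative redundant-sensor voting for a single angular-rate axis.

    measurements : samples from redundant rate gyros (rad/s)
    valid        : health flags carried over from prior fault detection
    max_spread   : disqualify any sensor farther than this from the
                   median of the currently healthy sensors
    Returns the down-selected rate and the updated health flags.
    """
    m = np.asarray(measurements, float)
    good = np.asarray(valid, bool).copy()
    if good.sum() == 0:
        raise RuntimeError("no healthy rate measurements available")
    med = np.median(m[good])
    # fault detection: flag measurements inconsistent with the healthy median
    good &= np.abs(m - med) <= max_spread
    # down-selection: mid-value (median) of the remaining healthy sensors
    return float(np.median(m[good])), good
```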

  9. Genetic algorithm in chemistry [Algoritmo genético em química]

    Directory of Open Access Journals (Sweden)

    Paulo Augusto da Costa Filho

    1999-06-01

    Full Text Available The genetic algorithm is an optimization technique based on Darwin's theory of evolution. In recent years its application in chemistry has grown significantly, owing to its suitability for optimizing complex systems. The basic principles, and some modifications implemented to improve its performance, are presented together with a historical overview. A numerical example of function optimization is also shown to demonstrate how the algorithm works in an optimization process. Finally, the chemistry applications realized so far are reviewed to serve as reference points for future applications in this field.
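As a concrete illustration of the selection-crossover-mutation loop such abstracts outline, here is a minimal binary-encoded GA sketch; all parameters and the toy objective are illustrative, not from the paper:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal binary-encoded GA: tournament selection, one-point
    crossover, and bit-flip mutation, maximizing `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            # Tournament selection: keep the fitter of two random parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_cross:            # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                    # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)     # elitist bookkeeping
    return best

# Toy objective: maximize f(x) = x * (65535 - x), peaked near x = 32767.
def f(bits):
    x = int("".join(map(str, bits)), 2)
    return x * (65535 - x)
```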

  10. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Oliver, Emerson; Smith, Austin

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault-detection and measurement down selection.

  11. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    Science.gov (United States)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most operational-performance factors into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as research objects. The ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, in complete conformity with the theoretical ranking, indicating that the EM-PCA method is reliable and effective. The method can guide state comparisons among different units and support wind farm operational assessment.
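The Entropy Method half of EM-PCA assigns indicator weights from the information content of each column of the decision matrix. A minimal sketch (the PCA stage and the paper's actual indicator set are omitted; the indicator values in the usage example are invented):

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights for a decision matrix X (rows = turbines,
    cols = benefit-type indicators). Lower-entropy columns carry more
    discriminating information and receive larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                      # normalize each indicator
    n = X.shape[0]
    # Entropy per indicator; define 0 * log(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()

def score(X):
    """Composite score: weighted sum of min-max normalized indicators."""
    X = np.asarray(X, dtype=float)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return Xn @ entropy_weights(X)
```

A unit that dominates every indicator receives the top composite score, which is the sanity check one would expect of any weighting scheme.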

  12. Implementation of the EM Algorithm in the Estimation of Item Parameters: The BILOG Computer Program.

    Science.gov (United States)

    Mislevy, Robert J.; Bock, R. Darrell

    This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…

  13. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double-Pareto lognormal (DPLN) distribution in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model has its location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
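The estimation machinery here is the classical EM cycle: compute posterior responsibilities under the current parameters (E-step), then re-maximize (M-step). As a self-contained illustration of that cycle, here is EM for a two-component univariate Gaussian mixture; the paper's DPLN mixture swaps in different component densities and M-step formulas:

```python
import math

def em_two_gaussians(data, iters=200):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(data), max(data)]      # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point.
        r0 = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            r0.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate weights, means and variances.
        for k, rk in ((0, r0), (1, [1.0 - r for r in r0])):
            s = sum(rk)
            w[k] = s / len(data)
            mu[k] = sum(r * x for r, x in zip(rk, data)) / s
            var[k] = max(1e-6, sum(r * (x - mu[k]) ** 2
                                   for r, x in zip(rk, data)) / s)
    return w, mu, var
```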

  14. Iterative sure independence screening EM-Bayesian LASSO algorithm for multi-locus genome-wide association studies

    Science.gov (United States)

    Tamba, Cox Lwaka; Ni, Yuan-Li; Zhang, Yuan-Ming

    2017-01-01

    Genome-wide association study (GWAS) entails examining a large number of single nucleotide polymorphisms (SNPs) in a limited sample with hundreds of individuals, implying a variable selection problem in the high dimensional dataset. Although many single-locus GWAS approaches under polygenic background and population structure controls have been widely used, some significant loci fail to be detected. In this study, we used an iterative modified-sure independence screening (ISIS) approach in reducing the number of SNPs to a moderate size. Expectation-Maximization (EM)-Bayesian least absolute shrinkage and selection operator (BLASSO) was used to estimate all the selected SNP effects for true quantitative trait nucleotide (QTN) detection. This method is referred to as ISIS EM-BLASSO algorithm. Monte Carlo simulation studies validated the new method, which has the highest empirical power in QTN detection and the highest accuracy in QTN effect estimation, and it is the fastest, as compared with efficient mixed-model association (EMMA), smoothly clipped absolute deviation (SCAD), fixed and random model circulating probability unification (FarmCPU), and multi-locus random-SNP-effect mixed linear model (mrMLM). To further demonstrate the new method, six flowering time traits in Arabidopsis thaliana were re-analyzed by four methods (New method, EMMA, FarmCPU, and mrMLM). As a result, the new method identified most previously reported genes. Therefore, the new method is a good alternative for multi-locus GWAS. PMID:28141824
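The screening stage works by ranking markers on their marginal association with the trait and keeping a moderate number, after which the EM-BLASSO shrinkage step estimates effects on the reduced set. A sketch of the screening step only (plain correlation ranking; the paper's iterative, modified variant is more involved):

```python
import numpy as np

def sis_screen(X, y, d):
    """Sure independence screening: rank SNPs (columns of X) by the
    magnitude of their marginal correlation with phenotype y and keep
    the top d, reducing p candidates to a moderate subset."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    corr = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    return np.argsort(corr)[::-1][:d]          # indices of retained SNPs
```

On simulated data with two strong causal markers, both should survive the screen even when p is much larger than the retained set.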

  15. EMMIXuskew: An R Package for Fitting Mixtures of Multivariate Skew t Distributions via the EM Algorithm

    Directory of Open Access Journals (Sweden)

    Geoff McLachlan

    2013-11-01

    The usefulness of the proposed algorithm is demonstrated in three applications to real datasets. The first example illustrates the use of the main function fmmst in the package by fitting an MST distribution to a bivariate unimodal flow cytometric sample. The second example fits a mixture of MST distributions to the Australian Institute of Sport (AIS) data, and demonstrates that EMMIXuskew can provide better clustering results than mixtures with restricted MST components. In the third example, EMMIXuskew is applied to classify cells in a trivariate flow cytometric dataset. Comparisons with some other available methods suggest that EMMIXuskew achieves a lower misclassification rate with respect to the labels given by benchmark gating analysis.

  16. EMHP: an accurate automated hole masking algorithm for single-particle cryo-EM image processing.

    Science.gov (United States)

    Berndsen, Zachary; Bowman, Charles; Jang, Haerin; Ward, Andrew B

    2017-12-01

    The Electron Microscopy Hole Punch (EMHP) is a streamlined suite of tools for quick assessment, sorting and hole masking of electron micrographs. With recent advances in single-particle electron cryo-microscopy (cryo-EM) data processing allowing for the rapid determination of protein structures with a smaller computational footprint, we saw the need for a fast and simple tool for data pre-processing that could run independently of existing high-performance computing (HPC) infrastructures. EMHP provides a data preprocessing platform in a small package that requires minimal Python dependencies to function. https://www.bitbucket.org/chazbot/emhp Apache 2.0 License. bowman@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  17. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    International Nuclear Information System (INIS)

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da; Schelin, Hugo R.; Yevseyeva, Olga; Klock, Márgio C. L.

    2013-01-01

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves exposing patients to ionizing radiation, so dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections than the usual filtered back projection (FBP) technique, and thus could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128×128×128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.
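The EM half of EM+TV is the familiar multiplicative MLEM update, x &lt;- x * A^T(b / Ax) / (A^T 1). A tiny sketch on a toy system matrix (the TV regularization step and the cone-beam geometry are omitted; the matrix below is invented):

```python
import numpy as np

def mlem(A, b, iters=3000):
    """Multiplicative MLEM update for tomographic reconstruction:
    x <- x * A^T(b / Ax) / A^T 1.  This is the EM half of EM+TV;
    the TV step would regularize x between iterations."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x                            # forward projection
        ratio = np.where(proj > 0, b / proj, 0.0)
        x *= (A.T @ ratio) / np.where(sens > 0, sens, 1.0)
    return x
```

With consistent, strictly positive data the reprojection A x converges to the measurements b.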

  18. EM&AA: An Algorithm for Predicting the Course Selection by Student in e-Learning Using Data Mining Techniques

    Science.gov (United States)

    Aher, Sunita B.

    2014-01-01

    Recommendation systems have been widely used in internet activities; their aim is to present important and useful information to the user with little effort. A course recommendation system recommends to students the best combination of courses in an engineering education system; e.g., if a student is interested in a course like system programming, he would likely want to learn the course entitled compiler construction. An algorithm combining two data mining algorithms, Expectation-Maximization clustering and the Apriori association rule algorithm, has been developed. The results of this algorithm are compared with the Apriori association rule algorithm alone, an existing algorithm in the open-source data mining tool Weka.
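The Apriori half of the combination can be sketched as level-wise frequent-itemset mining over course-enrollment transactions; the course names below are illustrative, not taken from the paper:

```python
def apriori(transactions, min_support):
    """Frequent-itemset mining: level-wise candidate generation, pruning
    any itemset whose support (fraction of transactions containing it)
    falls below min_support."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]

    def support(items):
        return sum(items <= t for t in sets) / n

    items = sorted({i for t in sets for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    k = 1
    while level:
        for s in level:
            frequent[s] = support(s)
        k += 1
        # Join step: size-k candidates from frequent (k-1)-itemsets.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = [c for c in candidates if support(c) >= min_support]
    return frequent
```

Frequent pairs such as {system programming, compiler construction} are then the raw material for recommendation rules.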

  19. Nuclear reactors project optimization based on neural network and genetic algorithm; Otimizacao em projetos de reatores nucleares baseada em rede neural e algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1997-12-01

    This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor based on the enrichment, lattice pitch and cladding thickness, with average error below 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction, which may substitute for the reactor physics calculation code, with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs.

  20. High Performance Computing of Complex Electromagnetic Algorithms Based on GPU/CPU Heterogeneous Platform and Its Applications to EM Scattering and Multilayered Medium Structure

    Directory of Open Access Journals (Sweden)

    Zhe Song

    2017-01-01

    Full Text Available Fast and accurate numerical analysis of large-scale objects and complex structures is essential to electromagnetic simulation and design. Alongside the mathematical exploration of EM algorithms, their computer-programming realization is equally significant and must keep pace with the development of hardware architectures. Unlike previous parallel algorithms implemented on multicore CPUs with OpenMP or on clusters of computers with MPI, the large-scale parallel processor based on the graphics processing unit (GPU) has shown impressive ability in various scenarios of supercomputing, and its application to computational electromagnetics is especially promising. This paper introduces our recent work on high-performance computing based on a GPU/CPU heterogeneous platform and its application to EM scattering problems and planar multilayered medium structures, including a novel realization of OpenMP-CUDA-MLFMM, a developed ACA method, and a deeply optimized CG-FFT method. The numerical examples and their clear efficiency gains make a convincing case for continued investigation of computer hardware and its operating mechanisms.
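The CG-FFT idea is to evaluate the method-of-moments matrix-vector product as a convolution via the FFT inside a conjugate-gradient iteration, turning an O(N^2) product into O(N log N). A toy 1-D sketch for a circulant system, assuming a symmetric positive-definite kernel (the GPU parallelization and the actual EM operators are out of scope here):

```python
import numpy as np

def cg_fft(kernel, b, iters=200, tol=1e-12):
    """Conjugate-gradient solve of a circulant system A x = b, where
    A x is a circular convolution evaluated with the FFT in O(N log N)
    instead of O(N^2) -- the core trick of CG-FFT."""
    K = np.fft.fft(kernel)
    matvec = lambda v: np.real(np.fft.ifft(K * np.fft.fft(v)))
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:                        # residual small enough
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```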

  1. An Efficient Algorithm for EM Scattering from Anatomically Realistic Human Head Model Using Parallel CG-FFT Method

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2014-01-01

    Full Text Available An efficient algorithm is proposed to analyze the electromagnetic scattering problem for a high-resolution head model in pixel data format. The algorithm is based on a parallel technique and the conjugate gradient (CG) method combined with the fast Fourier transform (FFT). Using the parallel CG-FFT method, the proposed algorithm is very efficient and can solve electrically very large problems that cannot be handled by the conventional CG-FFT method on a personal computer. The accuracy of the proposed algorithm is verified by comparing numerical results with analytical Mie-series solutions for dielectric spheres. Numerical experiments demonstrate that the proposed method has good parallel efficiency.

  2. Solving the Secondary Structure Matching Problem in Cryo-EM De Novo Modeling Using a Constrained K-Shortest Path Graph Algorithm.

    Science.gov (United States)

    Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing

    2014-01-01

    Electron cryomicroscopy is becoming a major experimental technique in solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions, between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the secondary structures detected from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ^2 N^2 2^N) algorithm for this NP-hard problem. The algorithm incorporates a dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and for all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory use in deriving the topologies of secondary structure elements for proteins with many secondary structures and a complex skeleton.

  3. Olmesartan medoxomil-based treatment algorithm for essential hypertension [Tratamento da hipertensão arterial com olmesartana medoxomila em escalonamento]

    Directory of Open Access Journals (Sweden)

    Marco Antônio Mota Gomes

    2008-09-01

    Full Text Available BACKGROUND: National and international guidelines emphasize the importance of effective treatment of arterial hypertension. Even so, control rates are low and the recommended targets are seldom reached, indicating that better treatment strategies need to be planned and implemented. OBJECTIVE: To evaluate the efficacy of a dose-escalation treatment regimen based on olmesartan medoxomil. METHODS: This is an open, national, multicenter, prospective study of 144 patients with stage 1 or 2 primary hypertension, either treatment-naïve or after a washout period of two to three weeks for those under ineffective treatment. Olmesartan medoxomil was evaluated in a four-phase treatment algorithm: (i) monotherapy (20 mg); (ii-iii) combined with hydrochlorothiazide (20/12.5 mg and 40/25 mg); and (iv) with amlodipine besylate added (40/25 mg + 5 mg). RESULTS: At the end of the dose-escalation treatment, 86% of the subjects reached the blood pressure goal; the proportion of systolic responses (reduction > 20 mmHg) was 87.5% and of diastolic responses (reduction > 10 mmHg) 92.4%. CONCLUSION: The study was based on a treatment scheme similar to the therapeutic approach of daily clinical practice and showed that olmesartan medoxomil, as monotherapy or combined with hydrochlorothiazide and amlodipine, was effective in reaching the target for stage 1 and 2 hypertensive patients.

  4. Extended Algorithm for Simulation of Light Transport in Single Crystal Scintillation Detectors for S(T)EM

    Czech Academy of Sciences Publication Activity Database

    Schauer, Petr

    2007-01-01

    Roč. 29, č. 6 (2007), s. 249-253 ISSN 0161-0457 R&D Projects: GA ČR GA102/04/2144 Institutional research plan: CEZ:AV0Z20650511 Keywords : Monte Carlo simulation * photon transport * scintillation detector * single crystal scintillator * lightguides * signal processing * SEM * S(T)EM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.324, year: 2007

  5. A Dynamic Programming Algorithm for Finding the Optimal Placement of a Secondary Structure Topology in Cryo-EM Data.

    Science.gov (United States)

    Biswas, Abhishek; Ranjan, Desh; Zubair, Mohammad; He, Jing

    2015-09-01

    The determination of secondary structure topology is a critical step in deriving atomic structures from protein density maps obtained by the electron cryomicroscopy technique. This step often relies on matching the secondary structure traces detected in the protein density map to the secondary structure segments predicted from the amino acid sequence. Due to inaccuracies in both sources of information, a pool of possible secondary structure positions needs to be sampled. One way to approach the problem is to first derive a small number of possible topologies using existing matching algorithms, and then find the optimal placement for each possible topology. We present a Θ(Nq^2 h) dynamic programming method to find the optimal placement for a secondary structure topology. We show that our algorithm requires significantly less computational time than the brute-force method, which is on the order of Θ(q^N h).

  6. An FDTD algorithm for simulation of EM waves propagation in laser with static and dynamic gain models

    KAUST Repository

    Al-Jabr, Ahmad Ali

    2013-01-01

    This paper presents methods of simulating gain media in the finite difference time-domain (FDTD) algorithm utilizing a generalized polarization formulation. The gain can be static or dynamic. For static gain, Lorentzian and non-Lorentzian models are presented and tested. For the dynamic gain, rate equations for two-level and four-level models are incorporated in the FDTD scheme. The simulation results conform with the expected behavior of wave amplification and dynamic population inversion.
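For reference, here is a bare-bones 1D free-space FDTD leapfrog loop, the skeleton such gain formulations extend (the generalized polarization and rate-equation terms from the paper would be added to the E-field update; grid size, source position and pulse width are arbitrary choices):

```python
import numpy as np

def fdtd_1d(steps=300, n=400, src=100):
    """1D FDTD (Yee leapfrog) in free space with normalized units and the
    'magic' time step c*dt = dx: E and H live on staggered grids and are
    updated alternately; a soft Gaussian source injects a pulse."""
    Ez = np.zeros(n)
    Hy = np.zeros(n)
    for t in range(steps):
        # H update from the spatial difference of E.
        Hy[:-1] += Ez[1:] - Ez[:-1]
        # E update from the spatial difference of H.
        Ez[1:] += Hy[1:] - Hy[:-1]
        # Soft Gaussian source at one grid point.
        Ez[src] += np.exp(-((t - 30) ** 2) / 100.0)
    return Ez
```

The untouched edge cells act as perfect electric conductors, so the left-going pulse reflects while the right-going pulse propagates freely.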

  7. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    Science.gov (United States)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-12-01

    This study proposes a new procedure for the optimal design of shell-and-tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger that makes effective use of the allowable pressure drop (the cost of the pump). The optimization algorithm determines the optimal values of both the geometric design parameters and the maximum allowable pressure drop by minimizing a total cost function. A computer code was developed for the optimal design of shell-and-tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm, and results are compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell-and-tube heat exchangers. In particular, in the examined cases, reductions in total cost of up to 30%, 29%, and 56.15% compared with the original design, and of up to 18%, 5.5%, and 7.4% compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when a compact, high-performance unit of moderate volume and cost is needed.
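The electromagnetism-like mechanism treats candidate designs as charged particles: each point's charge derives from its objective value, better points attract, worse points repel, and points move along the resulting force. A generic minimization sketch on a toy objective (not the heat-exchanger cost model, which would supply `f` and the design bounds; the step sizes and local-search radius are arbitrary):

```python
import math
import random

def em_like_minimize(f, bounds, n_pts=20, iters=300, seed=0):
    """Electromagnetism-like mechanism sketch: charge-weighted
    attraction/repulsion moves plus a small local search, with greedy
    acceptance of improving moves."""
    rng = random.Random(seed)
    dim = len(bounds)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pts)]

    def clip(p):
        return [min(max(x, lo), hi) for x, (lo, hi) in zip(p, bounds)]

    for _ in range(iters):
        vals = [f(p) for p in pts]
        best_v = min(vals)
        span = (max(vals) - best_v) or 1.0
        q = [math.exp(-dim * (v - best_v) / span) for v in vals]  # charges
        new_pts = []
        for i, p in enumerate(pts):
            # Total force on point i from every other point.
            force = [0.0] * dim
            for j, other in enumerate(pts):
                if j == i:
                    continue
                diff = [other[d] - p[d] for d in range(dim)]
                dist2 = sum(x * x for x in diff) or 1e-12
                sign = 1.0 if vals[j] < vals[i] else -1.0  # attract/repel
                for d in range(dim):
                    force[d] += sign * q[i] * q[j] * diff[d] / dist2
            norm = math.sqrt(sum(x * x for x in force)) or 1.0
            moved = clip([p[d] + rng.random() * force[d] / norm
                          for d in range(dim)])
            # Local search: small random perturbation around the point.
            local = clip([p[d] + rng.uniform(-0.2, 0.2) for d in range(dim)])
            new_pts.append(min((p, moved, local), key=f))   # greedy keep
        pts = new_pts
    return min(pts, key=f)
```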

  8. Evaluation of a novel algorithm for primary mass casualty triage by paramedics in a physician manned EMS system: a dummy based trial

    Science.gov (United States)

    2014-01-01

    Background The Amberg-Schwandorf Algorithm for Primary Triage (ASAV) is a novel primary triage concept specifically for physician-manned emergency medical services (EMS) systems. In this study, we determined the diagnostic reliability and the time requirements of ASAV triage. Methods Seven hundred eighty triage runs performed by 76 trained EMS providers of varying professional qualification were included in the study. Patients were simulated using human dummies with written vital-signs sheets. Triage results were compared to a standard solution, which was developed in a modified Delphi procedure. Test performance parameters (e.g. sensitivity, specificity, likelihood ratios (LR), under-triage, and over-triage) were calculated. Time measurements comprised the complete triage and tagging process and included the time span for walking to the subsequent patient. Results were compared to those published for mSTaRT. Additionally, a subgroup analysis was performed for employment status (career/volunteer), team qualification, and previous triage training. Results For red patients, ASAV sensitivity was 87%, specificity 91%, positive LR 9.7, negative LR 0.139, over-triage 6%, and under-triage 10%. There were no significant differences relative to mSTaRT. Per patient, ASAV triage required a mean of 35.4 sec (75th percentile 46 sec, 90th percentile 58 sec). Volunteers needed slightly more time to perform triage than EMS professionals. Previous mSTaRT training of the provider reduced under-triage significantly. There were significant differences in time requirements for triage depending on the expected triage category. Conclusions The ASAV is a specific concept for primary triage in physician-governed EMS systems. It may detect red patients reliably. The test performance criteria are comparable to those of mSTaRT, whereas ASAV triage might be accomplished slightly faster. From the data, there was no evidence for a clinically significant reliability difference between typical

  9. Evaluation of a novel algorithm for primary mass casualty triage by paramedics in a physician manned EMS system: a dummy based trial.

    Science.gov (United States)

    Wolf, Philipp; Bigalke, Marc; Graf, Bernhard M; Birkholz, Torsten; Dittmar, Michael S

    2014-08-28

    The Amberg-Schwandorf Algorithm for Primary Triage (ASAV) is a novel primary triage concept specifically for physician-manned emergency medical services (EMS) systems. In this study, we determined the diagnostic reliability and the time requirements of ASAV triage. Seven hundred eighty triage runs performed by 76 trained EMS providers of varying professional qualification were included in the study. Patients were simulated using human dummies with written vital-signs sheets. Triage results were compared to a standard solution, which was developed in a modified Delphi procedure. Test performance parameters (e.g. sensitivity, specificity, likelihood ratios (LR), under-triage, and over-triage) were calculated. Time measurements comprised the complete triage and tagging process and included the time span for walking to the subsequent patient. Results were compared to those published for mSTaRT. Additionally, a subgroup analysis was performed for employment status (career/volunteer), team qualification, and previous triage training. For red patients, ASAV sensitivity was 87%, specificity 91%, positive LR 9.7, negative LR 0.139, over-triage 6%, and under-triage 10%. There were no significant differences relative to mSTaRT. Per patient, ASAV triage required a mean of 35.4 sec (75th percentile 46 sec, 90th percentile 58 sec). Volunteers needed slightly more time to perform triage than EMS professionals. Previous mSTaRT training of the provider reduced under-triage significantly. There were significant differences in time requirements for triage depending on the expected triage category. The ASAV is a specific concept for primary triage in physician-governed EMS systems. It may detect red patients reliably. The test performance criteria are comparable to those of mSTaRT, whereas ASAV triage might be accomplished slightly faster. From the data, there was no evidence for a clinically significant reliability difference between typical staffing of mobile intensive care units

  10. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  11. The hypercube queuing model integrated with a genetic algorithm to analyze emergency medical systems on highways [Modelo hipercubo integrado a um algoritmo genético para análise de sistemas médicos emergenciais em rodovias]

    Directory of Open Access Journals (Sweden)

    Ana Paula Iannoni

    2006-04-01

    Full Text Available The hypercube model, well known in the literature on server-to-customer localization problems, is based on spatially distributed queuing theory and Markovian approximations. The model can be modified to analyze Emergency Medical Systems (EMSs) on highways, taking into account the particularities of these systems' dispatch policies. In this study, we combined the hypercube model with a genetic algorithm to optimize the configuration and operation of EMSs on highways. The approach is effective in supporting planning and operation decisions, such as determining the ideal size of the area each ambulance should cover so as to minimize both the average response time to users and ambulance workload imbalances, as well as generating a Pareto-efficient boundary between these measures. The computational results of this approach were analyzed using real data from the Anjos do Asfalto EMS (which covers the Presidente Dutra highway).

  12. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning; Otimizacao aplicada ao planejamento de politicas de testes em sistemas nucleares por enxame de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Newton Norat

    2006-12-15

    This work shows a new approach to availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved with the proposed technique: the first is a hypothetical electromechanical configuration, and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO was able to obtain results comparable to, or even slightly better than, those obtained by GA, while being simpler and converging faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
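The PSO machinery referred to above is the canonical velocity-position update with personal and global bests. A compact sketch on a toy objective (in the surveillance-test application, `f` would be replaced by the probabilistic safety/availability model; the inertia and acceleration coefficients are common textbook values):

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle is pulled toward its personal best
    and the swarm's global best by randomly weighted accelerations."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_v = [f(x) for x in X]
    g = pbest[min(range(n_particles), key=lambda i: pbest_v[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                # Position update, clamped to the search bounds.
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]),
                              bounds[d][1])
            v = f(X[i])
            if v < pbest_v[i]:                  # update personal best
                pbest[i], pbest_v[i] = X[i][:], v
                if v < f(g):                    # update global best
                    g = X[i][:]
    return g
```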

  13. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  14. Implementation in graphic form of an observability algorithm in energy network using sparse vectors; Implementacao, em ambiente grafico, de um algoritmo de observabilidade em redes de energia utilizando vetores esparsos

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Claudio Eduardo Scriptori de

    1996-02-01

    In Electrical Energy System Operating Centers, understanding the difficulties related to power system behavior has become ever more important. State estimation is essential for adequate system operation; however, before performing state estimation one needs to know whether the system is observable, otherwise the estimation will not be possible. The main objective of this work is to develop software that displays the whole network, when it is observable, or the observable islands of the network otherwise. As theoretical background, the theories and algorithm based on triangular factorization of the gain matrix, as well as the factorization-path concepts developed by Bretas et al., were used. Their algorithm was adapted to the Windows graphical environment so that the numerical results of the network observability analysis are shown on screen in graphical form, rather than only numerically as in approaches based solely on factorization of the gain matrix. The Borland C++ compiler for Windows, version 4.0, was used to implement the algorithm because of the facilities it offers for source generation. The tests on networks with 6, 14, and 30 buses lead to: (1) simplification of the observability analysis, using sparse vectors and triangular factorization of the gain matrix; (2) similar behavior across the three tested systems, with strong indications that the developed routine works well for any system, especially those with more buses and lines; (3) an alternative way of presenting numerical results in graphical form using the program developed here. (author)
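The observability test itself reduces to checking whether the gain matrix built from the measurement Jacobian has full rank; the work above does this by counting zero pivots during triangular factorization with sparse vectors. A tiny numerical stand-in (matrix rank replaces the pivot count, and the Jacobians in the usage example are invented, not a real network):

```python
import numpy as np

def observable(H, tol=1e-8):
    """Numerical observability test: the state is observable iff the
    gain matrix G = H^T H built from the measurement Jacobian H has
    full rank, i.e. no (near-)zero pivots would appear in its
    triangular factorization."""
    G = H.T @ H
    # Rank via SVD stands in for counting zero factorization pivots.
    return np.linalg.matrix_rank(G, tol=tol) == G.shape[1]
```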

  15. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
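
Two of the fundamental algorithms the book covers can be sketched compactly (shown here in Python, although the book itself implements its algorithms in C++):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes up to and including n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):   # cross off multiples of p
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]

def gcd(a, b):
    """Euclidean algorithm: greatest common divisor by repeated remainder."""
    while b:
        a, b = b, a % b
    return a

print(sieve(30))       # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(gcd(252, 198))   # 18
```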

  16. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  17. Safety passive system in designing reduced scale of a PWR by genetic algorithm; Sistema de seguranca passiva em escala reduzida de um PWR projetado por algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Joao J. da; Alvim, Antonio Carlos M. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: jcunha@con.ufrj.br; alvim@con.ufrj.br; Lapa, Celso Marcelo Franklin [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Reatores. Programa de Pos-graduacao]. E-mail: lapa@ien.gov.br

    2005-07-01

    This paper presents the concept of 'Designing by Genetic Algorithms (DbyGA)' applied to a new reduced-scale system problem. The design problem of a passive safety thermal-hydraulic system, considering dimensional and operational constraints, has been solved. Taking into account the passive safety characteristics of the latest generation of nuclear reactors, a PWR core under natural circulation is used to demonstrate the applicability of the methodology. The results revealed that some solutions (reduced-scale systems DbyGA) are capable of reproducing, both accurately and simultaneously, many of the physical phenomena that occur at real scale and operating conditions. However, in the case study of non-trivial flow pattern simulation, the results showed some deficiencies in the DbyGA approach. These aspects reveal important methodological possibilities for improving DbyGA performance. (author)

  18. Development of parallel GPU based algorithms for problems in nuclear area; Desenvolvimento de algoritmos paralelos baseados em GPU para solucao de problemas na area nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Adino Americo Heimlich

    2009-07-01

    Graphics Processing Units (GPU) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application was extended to other fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation using the Monte Carlo method, and solution of the heat equation in a two-dimensional domain by the finite difference method. To achieve this, we developed parallel algorithms for both GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU on a computer with two quad-core processors, without loss of precision. (author)
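
As a rough illustration of the CPU baseline for the second problem, one explicit finite-difference step of the 2D heat equation might look like the sketch below. This is not the author's code; the grid size, coefficients, and Dirichlet boundary are arbitrary illustrative choices:

```python
import numpy as np

def heat_step(u, alpha, dt, dx):
    """One explicit finite-difference step of du/dt = alpha*(u_xx + u_yy)
    with fixed (Dirichlet) boundaries. Stable when alpha*dt/dx^2 <= 1/4."""
    un = u.copy()
    r = alpha * dt / dx ** 2
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + r * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1])
    return un

# Toy domain: a hot spot in the middle diffuses outward.
u = np.zeros((64, 64))
u[32, 32] = 100.0
for _ in range(100):
    u = heat_step(u, alpha=1.0, dt=0.1, dx=1.0)   # r = 0.1, stable
print(round(float(u.sum()), 6))   # heat is conserved away from the boundary
```

A GPU version would perform the same stencil update, with each thread computing one grid point.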

  19. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail, but it can also serve as a general introduction to the modeling system. The first part presents a brief but comprehensive explanation of what EMS is and does, and how it does it. The second part examines the basic pricing algorithms currently implemented in EMS: each algorithm's function is analyzed, and a detailed derivation of the actual mathematical expressions used to implement it is presented. EMS is an evolving modeling system; improvements to existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided, and areas currently under study and development are considered briefly.

  20. An approach based on genetic algorithms with coding in real for the solution of a DC OPF to hydrothermal systems; Uma abordagem baseada em algoritmos geneticos com codificacao em real para a solucao de um FPO DC para sistemas hidrotermicos

    Energy Technology Data Exchange (ETDEWEB)

    Barbosa, Diego R.; Silva, Alessandro L. da; Luciano, Edson Jose Rezende; Nepomuceno, Leonardo [Universidade Estadual Paulista (UNESP), Bauru, SP (Brazil). Dept. de Engenharia Eletrica], Emails: diego_eng.eletricista@hotmail.com, alessandrolopessilva@uol.com.br, edson.joserl@uol.com.br, leo@feb.unesp.br

    2009-07-01

    Problems of DC Optimal Power Flow (OPF) have been solved by various conventional optimization methods. When the modeling of the DC OPF involves discontinuous or non-differentiable functions, solution methods based on conventional optimization are often not applicable because of the difficulty in calculating gradient vectors at the points of discontinuity/non-differentiability of these functions. This paper proposes a method for solving the DC OPF based on Genetic Algorithms (GA) with real coding. The proposed GA has specific genetic operators to improve the quality and feasibility of the solution. The results are analyzed for an IEEE test system, and its solutions are compared, when possible, with those obtained by a primal-dual logarithmic barrier interior point method. The results highlight the robustness of the method and the feasibility of obtaining solutions for real systems.

  1. Modelling and reflexions coefficient inversion of fractured media by using genetic algorithm; Modelagem e inversao de coeficientes de reflexao em meios fraturados usando algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Tinen, Julio Setsuo

    1998-12-01

    A method for the exact modeling and inversion of multi-azimuthal qP-wave reflection coefficients at an interface separating two anisotropic media, with at least one of its planes of symmetry parallel to the interface (i.e., monoclinic or higher symmetries), is presented. To illustrate the procedure, we compute qP-wave reflection coefficients at an interface separating an isotropic medium (representing a seal rock) from an anisotropic medium (representing a reservoir rock with vertically aligned fractures). Forward modeling of the qP reflection coefficients for all possible azimuths and angles of incidence suggests that amplitude versus offset (AVO) effects, combined with amplitude versus azimuth (AVA) effects, can be indicative of fracture density and orientation. In particular, the difference in the offset of the critical angle arrivals for different azimuths is proportional to the fracture density: the higher the fracture density, the larger the difference. A global optimization technique (genetic algorithm) was used to invert wide-angle (up to 45 degrees of incidence) AVO synthetic data for three azimuths of the data acquisition plane. This configuration was found to be the minimum number of acquisition planes, and the minimum far-offset distance needed to invert AVO/AVA data corresponds to forty-five degrees of incidence. The model space consists of the mass density and five elastic parameters of a transversely isotropic medium with a horizontal symmetry axis, which represents the fractured reservoir rock. The parameters of the fractured rock were computed using real data from an oil reservoir. There is no prior information on the values of the model space parameters, except for reasonable values of wave velocities in crustal rocks and the constraints of elastic stability of solid media. Mild anisotropy is also assumed, i.e., shear waves are slower than compressional waves for any direction of propagation, and neither anomalous polarizations nor triplications occur. After inversion of

  2. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that

  3. EM Clustering Analysis of Diabetes Patients Basic Diagnosis Index

    OpenAIRE

    Wu, Cai; Steinbauer, Jeffrey R.; Kuo, Grace M.

    2005-01-01

    Cluster analysis can group similar instances into the same group. Partitioning clustering assigns classes to samples without knowing the classes in advance. The most common algorithms are K-means and Expectation Maximization (EM). The EM clustering algorithm can find the number of distributions generating the data and build “mixture models”. It identifies groups that are either overlapping or of varying sizes and shapes. In this project, by using EM in Machine Learning Algorithm in JAVA (WEKA) syste...
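
The EM procedure behind such mixture models alternates a responsibility computation (E-step) with parameter re-estimation (M-step). A minimal 1D Gaussian-mixture sketch is shown below; it is not tied to WEKA, and the quantile-based initialization is an arbitrary choice made for determinism:

```python
import numpy as np

def em_gmm_1d(x, k, iters=100):
    """Plain EM for a 1D Gaussian mixture with k components."""
    mu = np.quantile(x, (np.arange(k) + 1.0) / (k + 1))   # spread-out init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        d2 = (x[:, None] - mu) ** 2
        p = w * np.exp(-0.5 * d2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from weighted data
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
w, mu, var = em_gmm_1d(x, k=2)
print(np.sort(mu))
```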

  4. Bayesian Estimation of Multidimensional Item Response Models. A Comparison of Analytic and Simulation Algorithms

    Science.gov (United States)

    Martin-Fernandez, Manuel; Revuelta, Javier

    2017-01-01

    This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…

  5. A new hybrid imperialist competitive algorithm on data clustering

    Indian Academy of Sciences (India)

    In this paper, we propose a novel algorithm that is based on combining two clustering algorithms: K-means and the Modified Imperialist Competitive Algorithm. It is named hybrid K-MICA. In addition, we use a method called modified expectation maximization (EM) to determine the number of clusters. The experimental results ...

  6. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  7. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...

  8. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  9. Genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grefenstette, J.J.

    1994-12-31

    Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
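
The evolve-by-competition-and-controlled-variation loop described above can be sketched in a few lines. This is a generic GA on the OneMax toy problem; all operator choices, rates, and sizes here are illustrative, not drawn from the source:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=60,
                      p_cx=0.9, p_mut=0.02, seed=42):
    """Minimal generational GA: a population of bit-string candidates
    evolves through tournament selection (competition), one-point
    crossover, and bit-flip mutation (controlled variation)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return max(a, b, key=fitness)
        new_pop = []
        while len(new_pop) < pop_size:
            child = tournament()[:]
            if rng.random() < p_cx:               # one-point crossover
                mate = tournament()
                cut = rng.randrange(1, n_bits)
                child[cut:] = mate[cut:]
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax toy problem: fitness is simply the number of 1-bits; the
# optimum is the all-ones string.
best = genetic_algorithm(fitness=sum, n_bits=32)
print(sum(best))
```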

  10. Deterministic quantum annealing expectation-maximization algorithm

    Science.gov (United States)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.

  11. More on EM for ML Factor Analysis.

    Science.gov (United States)

    Rubin, Donald B.; Thayer, Dorothy T.

    1983-01-01

    The authors respond to a criticism of their earlier article concerning the use of the EM algorithm in maximum likelihood factor analysis. Also included are the comments made by the reviewers of this article. (JKS)

  12. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  13. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.

  14. Unsupervised Classification of SAR Images using Hierarchical Agglomeration and EM

    NARCIS (Netherlands)

    K. Kayabol (Koray); V. Krylov; J. Zerubia; E. Salerno; A.E. Cetin; O. Salvetti

    2012-01-01

    We implement an unsupervised classification algorithm for high-resolution Synthetic Aperture Radar (SAR) images. The algorithm is founded on Classification Expectation-Maximization (CEM). To get rid of two drawbacks of EM-type algorithms, namely the initialization and the

  15. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV), and with that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on these numerical experiments, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)
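
For reference, the MAP-EM family compared against here builds on the classic ML-EM multiplicative update. The sketch below shows that baseline update on a hypothetical tiny system matrix; it is not the authors' PAPA or EM-TV code:

```python
import numpy as np

def mlem(A, y, iters=500):
    """Classic ML-EM multiplicative update for emission tomography:
    x <- x * A^T(y / (A x)) / (A^T 1).  Iterates stay nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ x, 1e-12)        # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens
    return x

# Hypothetical tiny system: 4 detector bins viewing 3 pixels.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
y = A @ x_true                                 # noise-free counts
print(np.round(mlem(A, y), 3))                 # approaches x_true
```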

  16. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.

  17. Computer program for allocation of generators in isolated systems of direct current using genetic algorithm; Programa computacional para alocacao de geradores em sistemas isolados de corrente continua utilizando algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Gewehr, Diego N.; Vargas, Ricardo B.; Melo, Eduardo D. de; Paschoareli Junior, Dionizio [Universidade Estadual Paulista (DEE/UNESP), Ilha Solteira, SP (Brazil). Dept. de Engenharia Eletrica. Grupo de Pesquisa em Fontes Alternativas e Aproveitamento de Energia

    2008-07-01

    This paper presents a methodology, based on a genetic algorithm, for locating electric power sources in isolated direct-current micro grids. In this work, photovoltaic panels are considered, although the methodology can be extended to any kind of DC source. A computational tool is developed using the Matlab simulator to obtain the DC system configuration that reduces the number of panels and the costs, and improves the system performance. (author)

  18. Use of genetic algorithms and virtual reality for the determination of the control cabin position in drilling rigs; O uso de algoritmos geneticos e realidade virtual na determinacao da posicao da cabine de controle em sondas de perfuracao

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Robson da Cunha [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Cunha, Gerson Gomes [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Civil

    2000-07-01

    This paper presents a proposal for a system for the simulation, verification and optimized positioning of the driller's cabin in drilling rigs. Assumptions and studies on drilling operations, equipment specifications, the characteristics of a standard worker (anthropometric measures) and the application of virtual reality techniques are presented. During the design phase, the operator's field of view is verified by simulating the drilling operations for an 'optimum' position suggested by the system. Tests were performed in driller's cabins of existing platforms, identifying positions where the operator does not have a complete view of the operations. In certain cases, it was necessary for the operator to leave the cabin to perform some verification of the operations, reducing the efficiency and functionality of the existing system and allowing accidents to occur. Based on genetic algorithms, as well as on techniques of computational geometry, an algorithm was developed that suggests the best position of the cabin on the drill floor, taking into account quantitative and qualitative analyses. Through virtual reality, the field of view of the operator inside the cabin is simulated, allowing verification of the interference between the operator's field of view and other elements existing on the platform, and determining whether the position initially suggested by the positioning algorithm is adequate, aiming to improve safety, productivity and system efficiency, and to reduce costs. (author)

  19. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography

  20. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....

  1. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing, and specialized architectures for numerical computations, are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and a specialized computer organization for raster

  2. Sequential and Adaptive Learning Algorithms for M-Estimation

    Directory of Open Access Journals (Sweden)

    Guang Deng

    2008-05-01

    The M-estimate of a linear observation model has many important engineering applications, such as identifying a linear system under non-Gaussian noise. Batch algorithms based on the EM algorithm or the iteratively reweighted least squares algorithm have been widely adopted. In recent years, several sequential algorithms have been proposed. In this paper, we propose a family of sequential algorithms based on the Bayesian formulation of the problem. The basic idea is that in each step we use a Gaussian approximation for the posterior and a quadratic approximation for the log-likelihood function. The maximum a posteriori (MAP) estimation leads naturally to algorithms similar to the recursive least squares (RLS) algorithm. We discuss the quality of the estimate, issues related to the initialization and estimation of parameters, and the robustness of the proposed algorithm. We then develop LMS-type algorithms by replacing the covariance matrix with a scaled identity matrix, under the constraint that the determinant of the covariance matrix is preserved. We propose two LMS-type algorithms that are effective and low-cost replacements for RLS-type algorithms working under Gaussian and impulsive noise, respectively. Numerical examples show that the performance of the proposed algorithms is very competitive with that of other recently published algorithms.
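
The RLS-type recursion that the MAP estimation "leads naturally to" has the following standard textbook form. The sketch below uses synthetic data and is not the authors' Bayesian variant; the forgetting factor and initialization are illustrative choices:

```python
import numpy as np

def rls(X, y, lam=0.99, delta=100.0):
    """Standard recursive least squares with forgetting factor lam:
    sequentially update the weights w and the inverse correlation
    matrix P for the linear model y ~ x . w."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)          # large initial P = weak prior
    for x, d in zip(X, y):
        Px = P @ x
        k = Px / (lam + x @ Px)    # gain vector
        w = w + k * (d - x @ w)    # correct with the a priori error
        P = (P - np.outer(k, Px)) / lam
    return w

# Synthetic identification problem with a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=500)
print(np.round(rls(X, y), 2))      # close to [1.0, -2.0, 0.5]
```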

  3. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  4. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

. The key idea of the authors' method is to generate the huge system matrix only once, and store it using sparse matrix techniques. From the sparse matrix one can perform the matrix-vector products very fast, which implies a major acceleration of the reconstruction algorithms. Here, the authors demonstrate that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method
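
The store-once/reuse idea can be illustrated with a hand-rolled CSR (compressed sparse row) matrix-vector product. This is a minimal sketch, not the authors' package; real implementations use optimized CSR kernels:

```python
import numpy as np

def to_csr(dense):
    """Store a sparse system matrix once in CSR form: nonzero values,
    their column indices, and row pointers into those arrays."""
    indptr = [0]
    indices, data = [], []
    for row in dense:
        nz = np.nonzero(row)[0]
        indices.extend(nz)
        data.extend(row[nz])
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A x touching only stored nonzeros: O(nnz) per product, so
    each ART/EM-style iteration avoids the full O(rows*cols) cost."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = np.array([[4.0, 0.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 3.0, 0.0]])
data, idx, ptr = to_csr(A)
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, idx, ptr, x))   # same result as A @ x
```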

  5. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  6. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  7. Ant colony algorithm for analysis of gene interaction in high-dimensional association data Algoritmo colônia de formigas para análise de interação gênica em dados de associação de alta dimensão

    Directory of Open Access Journals (Sweden)

    Romdhane Rekaya

    2009-07-01

    In recent years there has been much focus on the use of single nucleotide polymorphism (SNP) fine genome mapping to identify causative mutations for traits of interest; however, many studies focus only on the marginal effects of markers, ignoring potential gene interactions. Simulation studies have shown that this approach may not be powerful enough to detect important loci when gene interactions are present. While several studies have examined potential gene interactions, they tend to focus on a small number of SNP markers. Given the prohibitive computational cost of modeling interactions in studies involving a large number of SNPs, methods need to be developed that can account for potential gene interactions in a computationally efficient manner. This study adopts a machine learning approach by adapting the ant colony optimization algorithm (ACA), coupled with logistic regression on haplotypes and genotypes, for association studies involving large numbers of SNP markers. The proposed method is compared to haplotype analysis, implemented using a sliding window (SW/H), and single-locus genotype association (RG). Each algorithm was evaluated using a binary trait simulated using an epistatic model and HapMap ENCODE genotype data. Results show that the ACA outperformed SW/H and RG under all simulation scenarios, yielding substantial increases in power to detect genomic regions associated with the simulated trait.

  8. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
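    The structure of the reduction can be sketched compactly. Below is a textbook-style LLL sketch (delta = 3/4, exact rational arithmetic, naive Gram-Schmidt recomputation rather than efficient incremental updates) — an illustration of the algorithm, not the verified implementation the paper describes.

```python
from fractions import Fraction

# Compact textbook sketch of LLL basis reduction (delta = 3/4), using exact
# rationals and naive Gram-Schmidt recomputation -- simple, not fast.

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Return the GS vectors B* and coefficients mu, as exact Fractions."""
    n = len(B)
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(b) for b in B]
    n = len(B)
    k = 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)        # keep mu exact (slow, simple)
        if dot(Bs[k], Bs[k]) >= \
                (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]     # swap and step back
            k = max(k - 1, 1)
    return B

reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

On this small example basis the reduction returns visibly shorter, nearly orthogonal integer vectors spanning the same lattice.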

  9. PEG Enhancement for EM1 and EM2+ Missions

    Science.gov (United States)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG

  10. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  11. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
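    One classic algorithm from this setting, odd-even transposition sort for a linear array of processors, can be sketched sequentially; the compare-exchanges within a phase touch disjoint pairs, which is exactly what makes them runnable in parallel.

```python
# Sketch of odd-even transposition sort, the standard sorting method for a
# linear array of processors, simulated here sequentially.  Within a phase,
# the compared pairs are disjoint, so a parallel machine does them at once.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):                  # n phases suffice for n items
        start = phase % 2                   # alternate even/odd pair offsets
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

result = odd_even_transposition_sort([5, 2, 9, 1, 7, 3])
```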

  12. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and links between them. The user enters the list through the web interface. From these data the algorithm calculates the PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
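    A minimal sketch of the computation described above — pages and links in, one PageRank value per page out, iterating until the values stop changing. The function, damping value and tolerance are illustrative assumptions, not the thesis' own implementation.

```python
# Hypothetical minimal PageRank sketch: input is a dict mapping each page to
# the pages it links to; iteration stops when values change by < tol.

def pagerank(links, damping=0.85, tol=1e-9, max_iter=1000):
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        # rank mass of dangling pages (no out-links) is spread uniformly
        dangling = sum(rank[p] for p in pages if not links.get(p))
        new = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links.get(q, []))
            new[p] = (1 - damping) / n + damping * (incoming + dangling / n)
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The values always sum to one, so they can be read as a probability distribution over pages.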

  13. Digital Arithmetic: Division Algorithms

    DEFF Research Database (Denmark)

    Montuschi, Paolo; Nannarelli, Alberto

    2017-01-01

    implement it in hardware so as not to compromise the overall computation performance. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires...

  14. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
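    An illustrative sketch of the idea (not the paper's exact formulation): an LMS-type weight update in which the input vector is replaced by a three-level quantized version q(x) in {-1, 0, +1} with threshold t. The system-identification setup and all parameter values are demo assumptions.

```python
import numpy as np

# LMS-type adaptive filter whose update uses a three-level quantized input
# q(x) in {-1, 0, +1} (threshold clipping), as the abstract describes.
# Threshold t, step size mu and the test setup are demo assumptions.

def three_level(x, t):
    return np.where(x > t, 1.0, np.where(x < -t, -1.0, 0.0))

def mclms_identify(x, d, n_taps=4, mu=0.01, t=0.5):
    """Adapt n_taps weights so that w . x_window tracks d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = d[n] - w @ window                   # a-priori error
        w += mu * e * three_level(window, t)    # quantized-input update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_w = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, true_w)[:len(x)]  # output of the unknown FIR system
w_hat = mclms_identify(x, d)
```

For white input the quantized-input update still converges in the mean to the Wiener solution, which is the convergence property the abstract refers to.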

  15. Reference Gene Selection in the Desert Plant Eremosparton songoricum

    Directory of Open Access Journals (Sweden)

    Dao-Yuan Zhang

    2012-06-01

    Full Text Available Eremosparton songoricum (Litv.) Vass. (E. songoricum) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various E. songoricum plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. EsEF and Esα-TUB were sufficient for various stress conditions, EsEF and EsACT were suitable for samples of differing germination stages, and EsGAPDH and EsUBQ were most stable across multiple adult tissue samples. The Es18S gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress related transcription factor EsDREB2 verified the utility of E. songoricum reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in E. songoricum, and these data will facilitate future work on gene expression in this species.

  16. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  17. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
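    A classical special case contained in such general forgetting schemes is recursive least squares with a scalar exponential forgetting factor (uniform in time and space, unlike the paper's selective, non-uniform scheme). A minimal sketch, with demo data:

```python
import numpy as np

# Recursive least squares with exponential forgetting factor lam: a
# classical special case of general forgetting schemes.  Old observations
# are discounted geometrically, so the estimate can track slow changes.

def rls_forgetting(phis, ys, lam=0.98, delta=100.0):
    """Track theta in y = phi . theta from a stream of (phi, y) pairs."""
    n = phis.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                   # large initial covariance
    for phi, y in zip(phis, ys):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)       # gain vector
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, Pphi)) / lam   # discount old information
    return theta

rng = np.random.default_rng(1)
phis = rng.standard_normal((400, 2))
true_theta = np.array([2.0, -1.0])
ys = phis @ true_theta                      # noiseless observations
theta_hat = rls_forgetting(phis, ys)
```

With lam = 1 this reduces to ordinary recursive least squares; lam < 1 keeps the covariance P from vanishing, which is what gives the filter its tracking ability.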

  18. Implementing Quantum Search Algorithm with Metamaterials.

    Science.gov (United States)

    Zhang, Weixuan; Cheng, Kaiyang; Wu, Chao; Wang, Yi; Li, Hongqiang; Zhang, Xiangdong

    2018-01-01

    Metamaterials, artificially structured electromagnetic (EM) materials, have enabled the realization of many unconventional EM properties not found in nature, such as negative refractive index, magnetic response, invisibility cloaking, and so on. Based on these man-made materials with novel EM properties, various devices are designed and realized. However, quantum analog devices based on metamaterials have not been achieved so far. Here, metamaterials are designed and printed to perform the quantum search algorithm. The structures, comprising an array of 2D subwavelength air holes with different radii perforated on the dielectric layer, are fabricated using a 3D-printing technique. When an incident wave enters the designed metamaterials, the profile of the beam wavefront is processed iteratively as it propagates through the metamaterial periodically. After ≈√N roundtrips, precisely the same as the efficiency of the quantum search algorithm, searched items will be found with the incident wave all focusing on the marked positions. Such a metamaterial-based quantum searching simulator may lead to remarkable achievements in wave-based signal processors. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
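    The quantum search (Grover) iteration that the metamaterial emulates can be simulated classically on a statevector: an oracle phase flip on the marked item followed by inversion about the mean, repeated about (π/4)·√N times. A sketch with arbitrary demo values for N and the marked index:

```python
import numpy as np

# Classical statevector sketch of the Grover iteration: oracle phase flip,
# then inversion about the mean; after ~(pi/4)*sqrt(N) rounds the amplitude
# focuses on the marked position.  N and the marked index are demo values.

def grover_search(n_items, marked, n_rounds):
    amp = np.full(n_items, 1 / np.sqrt(n_items))  # uniform superposition
    for _ in range(n_rounds):
        amp[marked] *= -1.0                       # oracle phase flip
        amp = 2 * amp.mean() - amp                # inversion about the mean
    return amp

N = 64
rounds = int(round(np.pi / 4 * np.sqrt(N)))       # 6 rounds for N = 64
amp = grover_search(N, marked=13, n_rounds=rounds)
success_prob = float(amp[13] ** 2)
```

Both steps are unitary, so the squared amplitudes keep summing to one while the probability piles up on the marked item.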

  19. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  20. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  1. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  2. Synthesis, Crystal Structure and Luminescent Property of a Cd(II) Complex with N-Benzenesulphonyl-L-leucine

    Directory of Open Access Journals (Sweden)

    Xishi Tai

    2012-09-01

    Full Text Available A new trinuclear Cd(II) complex [Cd3(L)6(2,2′-bipyridine)3] [L = N-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single crystal diffraction analysis. The results show that the complex belongs to the orthorhombic system, space group P212121, with a = 16.877(3) Å, b = 22.875(5) Å, c = 29.495(6) Å, α = β = γ = 90°, V = 11387(4) Å3, Z = 4, Dc = 1.416 Mg·m−3, μ = 0.737 mm−1, F(000) = 4992, and final R1 = 0.0390, wR2 = 0.0989. The complex comprises two seven-coordinate Cd(II) atoms, with a N2O5 distorted pentagonal bipyramidal coordination environment, and a six-coordinate Cd(II) atom, with a N2O4 distorted octahedral coordination environment. The molecules form a one-dimensional chain structure through the interaction of bridging carboxylato groups, hydrogen bonds and π-π interactions of 2,2′-bipyridine. The luminescent properties of the Cd(II) complex and N-benzenesulphonyl-L-leucine in the solid state and in CH3OH solution have also been investigated.

  3. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...

  4. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  5. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. Computing all-pairs distances: a good algorithm with respect to both space and time, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.

  6. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  7. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  8. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 Issue 8 August 1997 pp 6-17. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017 ...

  9. Introduction to Algorithms -14 ...

    Indian Academy of Sciences (India)

    As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs and the formal notations used to write programs are called programming languages. The concept of a programming language has been around ...

  10. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013 is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  11. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  12. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  13. IDENTIFICACIÓN EFICIENTE DE ERRORES EN ESTIMACIÓN DE ESTADO USANDO UN ALGORITMO GENÉTICO ESPECIALIZADO IDENTIFICAÇÃO EFICAZ DOS ERROS EM ESTIMATIVA DE ESTADO USANDO UM ALGORITMO GENÉTICO ESPECIALIZADO EFFICIENT IDENTIFICATION OF ERRORS IN STATE ESTIMATION THROUGH A SPECIALIZED GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Hugo Andrés Ruiz

    2012-06-01

    Full Text Available In this paper a method to solve the state estimation problem in electric systems using combinatorial optimization is presented. Its objective is the study of measurements with difficult-to-detect errors, which affect the performance and quality of the results when a classic state estimator is used. Due to the mathematical complexity, sensitivity indicators are deduced from the theory of leverage points and used in the Chu-Beasley optimization algorithm in order to reduce the computational effort and improve the quality of the results. The proposed method is validated on a 30-node IEEE system.

  14. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
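    The multiplicative EM (MLEM) update underlying all of these reconstructions can be sketched generically; here a toy random system matrix stands in for the ray-traced fan-beam projector, and all sizes are demo assumptions.

```python
import numpy as np

# Generic sketch of the MLEM update these reconstructions build on:
#   x <- x * A^T( y / (A x) ) / (A^T 1)
# which iteratively maximizes the Poisson likelihood of projections y.
# A is a toy random projector, not a ray-traced fan-beam system matrix.

def mlem(A, y, n_iter=500):
    m, n = A.shape
    x = np.ones(n)                       # flat initial image
    sens = A.T @ np.ones(m)              # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens        # multiplicative EM update
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(12, 4))  # toy projector
x_true = np.array([1.0, 3.0, 0.5, 2.0])
y = A @ x_true                           # noiseless projections
x_rec = mlem(A, y)
```

The OS-EM variant mentioned above applies this same update to ordered subsets of the projections per pass, and MAP-EM OSL adds a prior term to the denominator.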

  15. Molecular Cloning and Functional Analysis of Three FLOWERING LOCUS T (FT) Homologous Genes from Chinese Cymbidium

    Directory of Open Access Journals (Sweden)

    Jaime A. Teixeira da Silva

    2012-09-01

    Full Text Available The FLOWERING LOCUS T (FT) gene plays crucial roles in regulating the transition from the vegetative to reproductive phase. To understand the molecular mechanism of reproduction, three homologous FT genes were isolated and characterized from Cymbidium sinense “Qi Jian Bai Mo”, Cymbidium goeringii and Cymbidium ensifolium “Jin Si Ma Wei”. The three genes contained 618-bp nucleotides with a 531-bp open reading frame (ORF) encoding 176 amino acids (AAs). Alignment of the AA sequences revealed that CsFT, CgFT and CeFT contain a conserved domain, which is characteristic of the PEBP-RKIP superfamily, and which shares high identity with FT of other plants in GenBank: 94% with OnFT from Oncidium Gower Ramsey, 79% with Hd3a from Oryza sativa, and 74% with FT from Arabidopsis thaliana. qRT-PCR analysis showed a diurnal expression pattern of CsFT, CgFT and CeFT following both long day (LD, 16-h light/8-h dark) and short day (SD, 8-h light/16-h dark) treatment. While the transcripts of both CsFT and CeFT under LD were significantly higher than under SD, those of CgFT were higher under SD. Ectopic expression of CgFT in transgenic Arabidopsis plants resulted in early flowering compared to wild-type plants and significant up-regulation of APETALA1 (AP1) expression. Our data indicate that CgFT is a putative phosphatidylethanolamine-binding protein gene in Cymbidium that may regulate the vegetative to reproductive transition in flowers, similar to its Arabidopsis ortholog.

  16. Programação da produção em sistemas flow shop utilizando um método heurístico híbrido algoritmo genético-simulated annealing Production scheduling in flow shop systems by using a hybrid genetic algorithm-simulated annealing heuristic

    Directory of Open Access Journals (Sweden)

    Walther Rogério Buzzo

    2000-12-01

    Full Text Available This paper deals with the Permutation Flow Shop Scheduling problem. Many heuristic methods have been proposed for this scheduling problem. A class of such heuristics finds a good solution by improving initial sequences for the jobs through search procedures on the solution space, such as Genetic Algorithm (GA) and Simulated Annealing (SA). A promising approach for the problem is the formulation of hybrid metaheuristics combining GA and SA techniques, so that the consequent procedure is more effective than either pure GA or SA methods. In this paper we present a hybrid Genetic Algorithm-Simulated Annealing heuristic for the minimal makespan flow shop sequencing problem. In order to evaluate the effectiveness of the hybridization we compare the hybrid heuristic with both pure GA and SA heuristics. Results from computational experience are presented.
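    The SA half of such a hybrid can be sketched as annealing over job permutations with swap moves; the toy instance, cooling schedule and move type below are demo assumptions, not the paper's settings.

```python
import math
import random

# Simulated annealing over job permutations for the permutation flow shop
# makespan -- a sketch of the SA component of such a hybrid heuristic.

def makespan(seq, p, n_machines):
    t = [0.0] * n_machines                 # completion time per machine
    for j in seq:
        t[0] += p[j][0]
        for m in range(1, n_machines):
            t[m] = max(t[m], t[m - 1]) + p[j][m]
    return t[-1]

def sa_flowshop(p, n_machines, temp=50.0, cooling=0.995, steps=5000, seed=3):
    rng = random.Random(seed)
    seq = list(range(len(p)))
    cur = best = makespan(seq, p, n_machines)
    best_seq = seq[:]
    for _ in range(steps):
        i, k = rng.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]      # propose a swap
        cand = makespan(seq, p, n_machines)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand                       # accept (always if not worse)
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[k] = seq[k], seq[i]  # reject: undo the swap
        temp *= cooling                      # geometric cooling
    return best_seq, best

# p[j][m]: processing time of job j on machine m (toy data)
p = [[3, 2, 4], [1, 4, 2], [5, 1, 3], [2, 3, 1], [4, 2, 2], [1, 1, 5]]
best_seq, best_cost = sa_flowshop(p, n_machines=3)
```

In the hybrid scheme the paper studies, a GA would supply and recombine promising sequences while an SA step like this refines them.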

  17. α-Glucosidase Inhibitory Constituents from Acanthopanax senticosus Harms Leaves

    Directory of Open Access Journals (Sweden)

    Hai-Xue Kuang

    2012-05-01

    Full Text Available A new triterpene glycoside, 3-O-[α-L-rhamnopyranosyl(1→2)]-[β-D-glucuronopyranosyl-6-O-methyl ester]-olean-12-ene-28-olic acid (1), and a new indole alkaloid, 5-methoxy-2-oxoindolin-3-acetic acid methyl ester (5), were isolated from the leaves of Acanthopanax senticosus Harms along with six known compounds. The structures of the new compounds were determined by means of 2D-NMR experiments and chemical methods. All the isolated compounds were evaluated for their glycosidase inhibition activities and compound 6 showed significant α-glucosidase inhibition activity.

  18. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring in 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  19. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
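    A compact sketch of the standard firefly algorithm that such a modified variant starts from. The sphere function is a toy objective, and alpha/beta0/gamma are common textbook defaults, not the paper's settings.

```python
import numpy as np

# Standard firefly algorithm sketch: each firefly moves toward every
# brighter one, with distance-decaying attractiveness plus a random step.
# Sphere objective and all parameter values are demo assumptions.

def firefly(obj, dim=2, n=15, n_gen=100, alpha=0.2, beta0=1.0, gamma=1.0,
            seed=4):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-2.0, 2.0, size=(n, dim))
    light = np.array([obj(q) for q in pos])    # lower objective = brighter
    for _ in range(n_gen):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:        # move firefly i toward j
                    r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                    pos[i] += beta * (pos[j] - pos[i]) \
                        + alpha * (rng.random(dim) - 0.5)
                    light[i] = obj(pos[i])
        alpha *= 0.97                          # anneal the random step
    k = int(np.argmin(light))
    return pos[k], light[k]

best_x, best_f = firefly(lambda v: float(np.sum(v ** 2)))
```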

  20. War-Algorithm Accountability

    OpenAIRE

    Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.

    2016-01-01

    In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...

  1. Resultados da implantação de um algoritmo para terapia nutricional enteral em crianças e adolescentes com câncer Outcomes of the implementation of an enteral nutrition algorithm in children and adolescents with cancer

    Directory of Open Access Journals (Sweden)

    Adriana Garófolo

    2010-10-01

    [...] store-bought supplementation reduced the nutritional deficit, mainly in mildly malnourished patients. The results suggest that the store-bought supplement given by feeding tube favored nutritional recovery, especially with more prolonged use. Objective: This study aimed to describe the algorithm and the global results after its implementation. Methods: This was a randomized clinical trial of malnourished cancer patients. Follow-up followed an algorithm, and patients with mild malnutrition were randomized to receive store-bought or homemade oral supplementation. The patients were reassessed after three, eight and twelve weeks. Depending on how the group supplemented with store-bought supplements responded, the supplementation was either continued orally, by tube feeding, or discontinued. The group receiving homemade supplementation either continued on it if the response was positive or received store-bought oral supplementation if the response was negative. The severely malnourished patients either received store-bought supplementation by feeding tube or orally, or it was discontinued if an adequate nutritional status was reached. The patients' responses to supplementation were determined by weight-for-height Z-scores, body mass indices, triceps skinfold thicknesses and circumferences. Results: One hundred and seventeen of 141 patients completed the first three weeks; 58 were severely malnourished and 59 were mildly malnourished. The nutritional status of 41% of the severely malnourished patients and 97% of the mildly malnourished patients receiving store-bought supplement orally improved. The nutritional status of 77% of the mildly malnourished patients receiving homemade supplement orally also improved. Of the 117 patients, 42 had to be tube-fed; of these, 23 accepted and 19 refused tube feeding and continued taking store-bought supplement orally. Consumption of store-bought supplement was higher in tube-fed patients than in orally-fed patients. Consumption also increased as orally

  2. Cloud model bat algorithm.

    Science.gov (United States)

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.
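
For reference, the baseline Bat Algorithm (BA) that CBA remodels tunes a frequency for each bat, updates its velocity toward the current best, and mixes in a local random walk gated by the pulse rate, with loudness gating acceptance. A hedged Python sketch under illustrative parameters (the cloud-model and Lévy-flight components of CBA are not reproduced here):

```python
import random

def bat_minimize(f, dim, n=20, iters=100, fmin=0.0, fmax=2.0,
                 loudness=0.9, pulse_rate=0.5, seed=3):
    """Minimize f over [-5, 5]^dim with the basic Bat Algorithm (BA)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    fit = [f(x) for x in X]
    g = min(range(n), key=lambda i: fit[i])
    best, best_f = X[g][:], fit[g]
    for _ in range(iters):
        for i in range(n):
            # frequency tuning controls the velocity update (echolocation)
            freq = fmin + (fmax - fmin) * rng.random()
            V[i] = [v + (x - b) * freq for v, x, b in zip(V[i], X[i], best)]
            cand = [x + v for x, v in zip(X[i], V[i])]
            if rng.random() > pulse_rate:
                # local random walk around the current best solution
                cand = [b + 0.1 * rng.gauss(0.0, 1.0) for b in best]
            fc = f(cand)
            if fc < fit[i] and rng.random() < loudness:
                X[i], fit[i] = cand, fc
            if fc < best_f:
                best, best_f = cand[:], fc
    return best, best_f

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = bat_minimize(sphere, dim=2)
```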

  3. Cloud Model Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-01-01

    Full Text Available Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.

  4. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  5. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  6. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that, while not completely unlabeled, carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
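
The classical unsupervised EM iteration that these variants extend can be illustrated for a two-component exponential survival mixture: the E-step computes class responsibilities for each survival time, and the M-step re-estimates mixing proportions and rates from those soft labels. A hedged Python sketch (uncensored data, no partial labels, and a contiguous-chunk initialization are all simplifying assumptions, not the paper's method):

```python
import math, random

def em_exp_mixture(times, k=2, iters=50):
    """EM for a k-component mixture of exponential survival distributions
    (all observations uncensored, no class labels)."""
    n = len(times)
    srt = sorted(times)
    chunk = n // k
    # init: estimate each rate from one contiguous chunk of the sorted data
    lam = [chunk / sum(srt[j * chunk:(j + 1) * chunk]) for j in range(k)]
    pi = [1.0 / k] * k
    ll_hist = []
    for _ in range(iters):
        # E-step: responsibilities r_ij proportional to pi_j*lam_j*exp(-lam_j*t_i)
        resp, ll = [], 0.0
        for t in times:
            dens = [pi[j] * lam[j] * math.exp(-lam[j] * t) for j in range(k)]
            s = sum(dens)
            ll += math.log(s)
            resp.append([d / s for d in dens])
        ll_hist.append(ll)
        # M-step: weighted maximum-likelihood updates
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / n
            lam[j] = nj / sum(r[j] * t for r, t in zip(resp, times))
    return pi, lam, ll_hist

rng = random.Random(0)
data = ([rng.expovariate(0.2) for _ in range(300)] +
        [rng.expovariate(2.0) for _ in range(300)])
pi, lam, ll = em_exp_mixture(data)
```

The recorded log-likelihood is non-decreasing across iterations, the standard guarantee of EM.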

  7. Static Analysis Numerical Algorithms

    Science.gov (United States)

    2016-04-01

    Static Analysis of Numerical Algorithms. Kestrel Technology, LLC. Final technical report, April 2016 (contract FA8750-14-C-..., dates covered Nov 2013 – Nov 2015); approved for public release, distribution unlimited. The effort worked with Honeywell Aerospace Advanced Technology to combine model-based development of complex avionics control software with static analysis of the

  8. Improved Chaff Solution Algorithm

    Science.gov (United States)

    2009-03-01

    Under the Technology Demonstration Program (TDP) on the integration of sensors and shipboard weapon systems (SISWS), an algorithm was developed to automatically determine

  9. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, the method of feasible direc

  10. Image Segmentation Algorithms Overview

    OpenAIRE

    Yuheng, Song; Hao, Yan

    2017-01-01

    The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. Current image segmentation techniques include region-based segmentation, edge-detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNNs, etc. This paper analyzes and summarizes these image segmentation algorithms and compares the advantages and disadvantages of the different algorithms. Finally, we make a predi...

  11. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
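
The contrast-to-noise ratio (CNR) used to compare the reconstructions above is straightforward to compute from region-of-interest pixel values. A small Python sketch with made-up toy numbers (the study's exact ROI definitions may differ):

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: (mean signal - mean background) / background std
    (population standard deviation of the background ROI)."""
    mu_s = statistics.fmean(signal_roi)
    mu_b = statistics.fmean(background_roi)
    sd_b = statistics.pstdev(background_roi)
    return (mu_s - mu_b) / sd_b

# toy pixel values for a mass ROI vs. a homogeneous background ROI
mass = [120, 118, 122, 121, 119]
bg = [100, 102, 98, 101, 99]
print(round(cnr(mass, bg), 2))  # → 14.14
```

A noisier background (larger sd_b) or flatter contrast (smaller mean difference) lowers the CNR, which is why the FBP reconstruction's fluctuating background hurts it on this metric.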

  12. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  13. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for a positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in many aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning offers system stability, small error, and low cost; its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, a higher-accuracy network-based location method is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
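
The LANDMARC scheme mentioned above locates a tag by comparing its RSSI readings with those of reference tags at known positions, then averaging the positions of the k nearest reference tags with weights proportional to 1/E². A minimal Python sketch; the reader count, positions, and RSSI values are invented for illustration:

```python
def landmarc(tag_rssi, refs, k=3):
    """LANDMARC positioning: average the positions of the k reference tags
    nearest in signal space, weighted by 1 / E^2.
    refs: list of ((x, y), rssi_vector) for reference tags at known spots."""
    dists = []
    for pos, rssi in refs:
        # Euclidean distance E between RSSI vectors (signal-space distance)
        e = sum((a - b) ** 2 for a, b in zip(tag_rssi, rssi)) ** 0.5
        dists.append((e, pos))
    dists.sort(key=lambda d: d[0])
    nearest = dists[:k]
    w = [1.0 / (e * e + 1e-12) for e, _ in nearest]  # closer => larger weight
    s = sum(w)
    x = sum(wi * pos[0] for wi, (_, pos) in zip(w, nearest)) / s
    y = sum(wi * pos[1] for wi, (_, pos) in zip(w, nearest)) / s
    return x, y

# hypothetical readings at three readers; the tag sits near reference (0, 0)
refs = [((0, 0), [-40, -60, -70]),
        ((0, 1), [-55, -50, -65]),
        ((1, 0), [-60, -68, -45]),
        ((1, 1), [-70, -55, -50])]
est = landmarc([-42, -58, -68], refs)
```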

  14. HIGEDA: a hierarchical gene-set genetics based algorithm for finding subtle motifs in biological sequences.

    Science.gov (United States)

    Le, Thanh; Altman, Tom; Gardiner, Katheleen

    2010-02-01

    Identification of motifs in biological sequences is a challenging problem because such motifs are often short, degenerate, and may contain gaps. Most algorithms that have been developed for motif-finding use the expectation-maximization (EM) algorithm iteratively. Although EM algorithms can converge quickly, they depend strongly on initialization parameters and can converge to local sub-optimal solutions. In addition, they cannot generate gapped motifs. The effectiveness of EM algorithms in motif finding can be improved by incorporating methods that choose different sets of initial parameters to enable escape from local optima, and that allow gapped alignments within motif models. We have developed HIGEDA, an algorithm that uses the hierarchical gene-set genetic algorithm (HGA) with EM to initiate and search for the best parameters for the motif model. In addition, HIGEDA can identify gapped motifs using a position weight matrix and dynamic programming to generate an optimal gapped alignment of the motif model with sequences from the dataset. We show that HIGEDA outperforms MEME and other motif-finding algorithms on both DNA and protein sequences. Source code and test datasets are available for download at http://ouray.cudenver.edu/~tnle/, implemented in C++ and supported on Linux and MS Windows.
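
The position-weight-matrix machinery that HIGEDA's EM component relies on can be sketched briefly: build a log-odds PWM from aligned sites, then slide it along a sequence to score candidate motif starts (the gapped-alignment and genetic-algorithm layers of HIGEDA are not reproduced here). A small Python illustration with hypothetical sites and a uniform background model:

```python
import math

def pwm_from_sites(sites, pseudo=0.5):
    """Build a position weight matrix (log-odds vs. a uniform background)
    from aligned, gap-free motif occurrences."""
    w = len(sites[0])
    pwm = []
    for pos in range(w):
        counts = {b: pseudo for b in "ACGT"}
        for s in sites:
            counts[s[pos]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log((counts[b] / total) / 0.25) for b in "ACGT"})
    return pwm

def best_site(seq, pwm):
    """Slide the PWM along seq; return (best score, offset), i.e. the most
    likely motif start under a one-occurrence-per-sequence model."""
    w = len(pwm)
    scored = [(sum(pwm[i][seq[o + i]] for i in range(w)), o)
              for o in range(len(seq) - w + 1)]
    return max(scored)

pwm = pwm_from_sites(["TATAAT", "TATGAT", "TACAAT"])
score, offset = best_site("GGCTATAATGC", pwm)
```

Here the top-scoring window is the consensus-like occurrence "TATAAT" starting at offset 3.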

  15. Multilevel Analysis of Structural Equation Models via the EM Algorithm.

    Science.gov (United States)

    Jo, See-Heyon

    The question of how to analyze unbalanced hierarchical data generated from structural equation models has been a common problem for researchers and analysts. Among difficulties plaguing statistical modeling are estimation bias due to measurement error and the estimation of the effects of the individual's hierarchical social milieu. This paper…

  16. Nonparametric Item Response Function Estimates with the EM Algorithm.

    Science.gov (United States)

    Rossi, Natasha; Wang, Xiaohui; Ramsay, James O.

    2002-01-01

    Combined several developments in statistics and item response theory to develop a procedure for analysis of dichotomously scored test data. This version of nonparametric item response analysis, as illustrated through simulation and with data from other studies, marginalizes the role of the ability parameter theta. (SLD)

  17. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  18. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory.

  19. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  20. Neonatal Phosphate Nutrition Alters in Vivo and in Vitro Satellite Cell Activity in Pigs

    Directory of Open Access Journals (Sweden)

    Chad H. Stahl

    2012-05-01

    Full Text Available Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity, however the role of neonatal PO4 nutrition on satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4 adequate (0.9% total P), supra-adequate (1.2% total P) in PO4 requirement or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples collected every 6 days. At day 12, pigs were orally dosed with BrdU and 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) sera PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency as compared to the PO4 adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4 deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.

  1. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  2. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
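
The paper's exact ratio definitions are not reproduced here, but the idea behind the quote volatility ratio (rapid oscillation of the best bid/ask over a short window) can be illustrated with a toy proxy: the fraction of consecutive quote changes that reverse direction. A hedged Python sketch with invented quote series:

```python
def quote_volatility_ratio(quotes):
    """Toy proxy for quote volatility: the fraction of consecutive quote
    changes that reverse direction (a flickering best bid/ask scores high)."""
    diffs = [b - a for a, b in zip(quotes, quotes[1:]) if b != a]
    if len(diffs) < 2:
        return 0.0
    flips = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return flips / (len(diffs) - 1)

# flickering quotes oscillate every tick; a steady drift never reverses
flicker = [10.00, 10.01, 10.00, 10.01, 10.00, 10.01]
drift = [10.00, 10.01, 10.02, 10.03, 10.04, 10.05]
flicker_ratio = quote_volatility_ratio(flicker)
drift_ratio = quote_volatility_ratio(drift)
```

The oscillating series scores 1.0 and the monotone drift scores 0.0, the kind of separation such a ratio is meant to expose.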

  3. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  4. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  5. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to solving this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm based on graph and machine-learning approaches is proposed, following the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine learning algorithms alone, an independent solution cannot be built due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was experimentally tested. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy and productivity of the system.

  6. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
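
The MCL process alternates two steps on a column-stochastic matrix: expansion (matrix powering, which spreads flow along longer paths) and inflation (entrywise powering plus column renormalisation, which sharpens strong flows); clusters are read off from the columns' support at convergence. A small Python sketch on an assumed example graph of two triangles joined by one edge:

```python
def mcl(adj, expansion=2, inflation=2.0, iters=30):
    """Markov Cluster algorithm: alternate expansion (matrix powering) with
    inflation (entrywise powering plus column renormalisation)."""
    n = len(adj)

    def normalise(M):
        # make every column sum to 1 (column-stochastic)
        for j in range(n):
            s = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= s
        return M

    # add self-loops, then normalise
    M = normalise([[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                   for i in range(n)])
    for _ in range(iters):
        # expansion: M := M^expansion
        P = M
        for _ in range(expansion - 1):
            P = [[sum(P[i][k] * M[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
        # inflation: strengthen strong flows, weaken weak ones
        M = normalise([[v ** inflation for v in row] for row in P])
    # columns sharing the same support belong to the same cluster
    groups = {}
    for j in range(n):
        key = frozenset(i for i in range(n) if M[i][j] > 1e-6)
        groups.setdefault(key, set()).add(j)
    return [frozenset(g) for g in groups.values()]

# two triangles (0,1,2) and (3,4,5) joined by the single edge 2-3
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
clusters = mcl(A)
```

With the default inflation of 2.0 the weak bridge is cut and the two triangles emerge as separate clusters.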

  7. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  8. Constituents from Vigna vexillata and Their Anti-Inflammatory Activity

    Directory of Open Access Journals (Sweden)

    Guo-Feng Chen

    2012-08-01

    Full Text Available The seeds of the Vigna genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of V. vexillata demonstrated significant anti-inflammatory bioactivity. Therefore, the present research aimed to purify and identify the anti-inflammatory principles of V. vexillata. One new sterol (1) and two new isoflavones (2, 3) are reported from natural sources for the first time, and their chemical structures were determined by spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.

  9. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
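
One of the parallel patterns listed above, the prefix scan, can be sketched in its Hillis-Steele formulation: roughly log2(n) rounds in which element i is combined with element i - d for a doubling stride d, and all updates within a round are independent of one another. A Python sketch that simulates the rounds sequentially:

```python
def hillis_steele_scan(xs):
    """Inclusive prefix scan via the Hillis-Steele parallel pattern: in each
    round with stride d, every element i >= d adds element i - d; the updates
    within a round are independent, so a parallel machine can do them at once."""
    a = list(xs)
    d = 1
    while d < len(a):
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(len(a))]
        d *= 2
    return a

print(hillis_steele_scan([1, 2, 3, 4]))  # → [1, 3, 6, 10]
```

Building the whole new list per round mirrors the double-buffering a real parallel implementation uses to avoid read/write races.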

  10. Wireless communications algorithmic techniques

    CERN Document Server

    Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A

    2013-01-01

    This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in MAC layer) of wireless communications systems. It focuses on single user systems, so ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated.Comprehensive wireless specific guide to algorithmic techniquesProvides a detailed analysis of channel equalization and channel coding for wi

  11. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
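The partial-feedback, long-term-effect setting described above is concretely visible in tabular Q-learning. The sketch below (a toy example under assumed parameters, not from the book) learns on a small chain MDP where only reaching the goal state yields reward, yet earlier actions matter through the discounted bootstrap target.

```python
import random

def q_learning_chain(n_states=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning on a tiny chain MDP: actions move left/right, and
    # only partial feedback (reward 1 on reaching the goal) guides learning.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            target = r + (gamma * max(Q[s2]) if s2 != goal else 0.0)
            Q[s][a] += alpha * (target - Q[s][a])  # temporal-difference update
            s = s2
    return Q

Q = q_learning_chain()
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(3)]
print(policy)  # [1, 1, 1]: always move right toward the goal
```

The discount factor propagates the delayed reward backward, so even the start state learns that "right" is best despite never receiving immediate reward there.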

  12. Adding large EM stack support

    KAUST Repository

    Holst, Glendon

    2016-12-01

    Serial section electron microscopy (SSEM) image stacks generated using high throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce an analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time-consuming and labour-intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by the ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that can fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.
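The blockwise strategy described here, processing a volume in chunks that fit within RAM, reduces to iterating over block index ranges. A minimal sketch of that iteration (hypothetical, not ilastik's actual implementation):

```python
from itertools import product

def iter_blocks(shape, block):
    # Yield (start, stop) ranges per axis covering an N-D volume in blocks;
    # this is the core idea behind chunked ("blockwise") processing that
    # keeps each working set small enough to fit in main memory.
    def axis_ranges(n, b):
        return [(i, min(i + b, n)) for i in range(0, n, b)]
    yield from product(*(axis_ranges(n, b) for n, b in zip(shape, block)))

blocks = list(iter_blocks((5, 4), (2, 3)))
print(len(blocks))  # 6: three row bands times two column bands
print(blocks[0])    # ((0, 2), (0, 3))
```

A real carving pipeline would load each block, segment it, and stitch results across block boundaries (often with overlapping halo regions), but the traversal pattern is the same.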

  13. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  14. From Story to Algorithm.

    Science.gov (United States)

    Ball, Stanley

    1986-01-01

    Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)

  15. The Design of Algorithms.

    Science.gov (United States)

    Ferguson, David L.; Henderson, Peter B.

    1987-01-01

    Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…

  16. Improved Approximation Algorithm for

    NARCIS (Netherlands)

    Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz

    2014-01-01

    We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of

  17. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
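The replica-exchange method reviewed above can be sketched in miniature. The toy below (my own illustration, with an assumed double-well potential and two temperatures, not taken from the article) runs one Metropolis walker per temperature and periodically attempts to swap configurations so the low-temperature replica can escape local minima via the hot one.

```python
import math
import random

def replica_exchange(steps=2000, temps=(0.1, 1.0), seed=1):
    # Minimal replica-exchange (parallel tempering) sketch: one Metropolis
    # walker per temperature on the double-well U(x) = (x^2 - 1)^2, with
    # periodic swap attempts so the cold replica can cross the barrier.
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    xs = [1.0, 1.0]
    for step in range(steps):
        for i, T in enumerate(temps):
            prop = xs[i] + rng.uniform(-0.5, 0.5)
            # Metropolis acceptance at temperature T
            if rng.random() < math.exp(min(0.0, -(U(prop) - U(xs[i])) / T)):
                xs[i] = prop
        if step % 10 == 0:
            # swap acceptance: min(1, exp((beta0 - beta1) * (E0 - E1)))
            d = (1.0 / temps[0] - 1.0 / temps[1]) * (U(xs[0]) - U(xs[1]))
            if rng.random() < math.exp(min(0.0, d)):
                xs[0], xs[1] = xs[1], xs[0]
    return xs

final = replica_exchange()
print(len(final))  # one configuration per replica
```

Production simulations use many replicas with a temperature ladder tuned so neighbouring acceptance ratios stay reasonable, and feed the resulting trajectories into the histogram-reweighting analyses mentioned above.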

  18. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  19. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are
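Kolmogorov complexity itself is uncomputable, but a standard illustration of the idea is that any lossless compressor gives a computable upper bound on it. The snippet below (my own example, not from the chapter) contrasts a highly regular string with an irregular one of the same length.

```python
import random
import zlib

def complexity_bound(data: bytes) -> int:
    # K(x) is uncomputable, but the length of any lossless compression of x
    # is an upper bound on it (up to the constant cost of the decompressor).
    return len(zlib.compress(data, 9))

regular = b"ab" * 500                                  # short description exists
rng = random.Random(0)
irregular = bytes(rng.randrange(256) for _ in range(1000))  # no obvious pattern

low, high = complexity_bound(regular), complexity_bound(irregular)
print(low < 100 < high)  # True: same length, very different complexity
```

This compression-as-complexity-proxy is also the intuition behind practical tools like the normalized compression distance.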

  20. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  1. Introduction to Algorithms

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 9. Introduction to Algorithms Turtle Graphics. R K Shyamasundar. Series Article Volume 1 ... Author Affiliations. R K Shyamasundar1. Computer Science Group Tata Institute of Fundamental Research Homi Bhabha Road Mumbai 400 005, India.

  2. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...

  3. Algorithms for SCC Decomposition

    NARCIS (Netherlands)

    J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)

    2008-01-01

    We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in

  4. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  5. Fast autodidactic adaptive equalization algorithms

    Science.gov (United States)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm, and one with improved convergence properties thanks to the minimization of a block criterion. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm that retains the advantages of both initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances bring autodidactic equalization close to practical use in mobile radio systems.
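The Godard (constant-modulus) family mentioned here adapts an equalizer without any training sequence, using only the known modulus of the transmitted symbols. A deliberately tiny sketch under assumed conditions (single-tap equalizer, noiseless real channel with unknown gain, symbols in {-1, +1}):

```python
def cma_single_tap(gain=2.0, mu=0.01, iters=500):
    # Constant-modulus (Godard-style) blind update on a hypothetical toy
    # channel: received x = gain * s with symbols s in {-1, +1}. Without a
    # training sequence, w should still converge to 1/gain, since the cost
    # J = E[(|y|^2 - R2)^2] with R2 = 1 is minimized when |y| = 1.
    w = 0.1
    for i in range(iters):
        s = 1.0 if i % 2 == 0 else -1.0
        x = gain * s
        y = w * x                        # equalizer output
        w -= mu * y * (y * y - 1.0) * x  # stochastic gradient of the CM cost
    return w

w = cma_single_tap()
print(round(w * 2.0, 3))  # 1.0: the channel gain is inverted blindly
```

Real equalizers use complex multi-tap filters and must also resolve the phase/sign ambiguity inherent to blind criteria, which is where decision-directed refinements come in.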

  6. A MEDLINE categorization algorithm

    Directory of Open Access Journals (Sweden)

    Gehanno Jean-Francois

    2006-02-01

    Full Text Available Abstract Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site and can run on any MEDLINE file in batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources
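The inference step described in the Methods section, following curated links from MeSH terms to metaterms and ranking the results by frequency, can be sketched as follows. The link table and articles below are invented for illustration; they are not CISMeF's actual tables.

```python
from collections import Counter

# Hypothetical links from MeSH terms to metaterms (medical specialties);
# in the real MCA these links are manually selected by medical librarians.
MESH_TO_METATERMS = {
    "Information Storage and Retrieval": ["information science", "medical informatics"],
    "Medical Records Systems, Computerized": ["medical informatics"],
    "Hospital Administration": ["organization and administration"],
}

def categorize(articles):
    # Infer metaterms from each article's MeSH indexing, then rank the
    # specialties for the whole file in decreasing order of frequency.
    counts = Counter()
    for mesh_terms in articles:
        for term in mesh_terms:
            counts.update(MESH_TO_METATERMS.get(term, []))
    return [metaterm for metaterm, _ in counts.most_common()]

articles = [
    ["Information Storage and Retrieval"],
    ["Medical Records Systems, Computerized", "Information Storage and Retrieval"],
    ["Hospital Administration"],
]
ranking = categorize(articles)
print(ranking[0])  # medical informatics (most frequent metaterm)
```

The essential design point is that all domain knowledge lives in the manually curated link table; the algorithm itself is a simple weighted aggregation over it.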

  7. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
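The "parameterize, then search over the parameters" idea behind the ETP can be shown in one dimension. This is a toy stand-in with hypothetical numbers, not the flight algorithm: we grid-search the burn duration of a bang-off (accelerate, then coast) trajectory for the shortest burn that clears the predicted collision point.

```python
def plan_avoidance(t_c=10.0, d_safe=5.0, a_max=1.0, dt=0.1):
    # Toy 1-D stand-in for the ETP parameter search: find the shortest burn
    # duration t_b such that accelerating at a_max for t_b and then coasting
    # puts the host at least d_safe from the collision point at time t_c.
    n = int(round(t_c / dt))
    for k in range(1, n + 1):
        t_b = k * dt
        # displacement after burning for t_b, then coasting until t_c
        pos = 0.5 * a_max * t_b ** 2 + a_max * t_b * (t_c - t_b)
        if pos >= d_safe:
            return t_b  # minimal-fuel parameter on this grid
    return None  # collision not avoidable within acceleration limits

t_burn = plan_avoidance()
print(round(t_burn, 1))  # 0.6 seconds of thrust suffices
```

Because the search is over a small, precomputable parameter space, results like this can be tabulated offline and looked up in real time, exactly the design choice the record describes.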

  8. A MEDLINE categorization algorithm

    Science.gov (United States)

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms

  9. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
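The hybrid structure described here, genetic operators for global exploration plus a local search applied to offspring, can be sketched on a toy problem. Everything below is an assumed illustration (one-max fitness, Lamarckian-style local improvement), not the geometric-matching hybrid from the presentation.

```python
import random

def hybrid_ga(n_bits=20, pop_size=10, generations=15, seed=3):
    # Hybrid GA sketch: selection, one-point crossover and mutation, plus a
    # greedy bit-flip local search on every offspring (Lamarckian style),
    # maximizing the toy "one-max" fitness (count of 1 bits).
    rng = random.Random(seed)
    fitness = sum

    def local_search(ind):
        # one greedy pass of bit-flip hill climbing (the "local" part)
        for i in range(n_bits):
            flipped = ind[:i] + [1 - ind[i]] + ind[i + 1:]
            if fitness(flipped) > fitness(ind):
                ind = flipped
        return ind

    pop = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [max(pop, key=fitness)]          # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)          # pick two parents
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:             # point mutation
                j = rng.randrange(n_bits)
                child[j] = 1 - child[j]
            nxt.append(local_search(child))    # the hybrid step
        pop = nxt
    return max(pop, key=fitness)

best = hybrid_ga()
print(sum(best))  # 20: the optimum
```

On one-max the local search alone would suffice; the point of the sketch is only the division of labour, since on rugged landscapes the GA escapes basins that trap the hill climber.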

  10. Dermatoses in chronic renal patients on dialysis therapy

    Directory of Open Access Journals (Sweden)

    Luis Alberto Batista Peres

    2014-03-01

    Full Text Available Objective: Skin and mucosal disorders are common in patients on long-term hemodialysis. Dialysis prolongs life expectancy, giving these abnormalities time to manifest. The objectives of this study were to evaluate the prevalence of dermatological problems in patients with chronic kidney disease (CKD) on hemodialysis. Methods: One hundred and forty-five patients with chronic kidney disease on hemodialysis were studied. All patients were fully examined for skin, hair, mucosal and nail changes by a single examiner, and laboratory test data were collected. The data were stored in a Microsoft Excel database and analyzed with descriptive statistics. Continuous variables were compared with Student's t-test and categorical variables with the chi-squared test or Fisher's exact test, as appropriate. Results: The study included 145 patients, with a mean age of 53.6 ± 14.7 years, predominantly male (64.1%) and Caucasian (90.0%). The mean time on dialysis was 43.3 ± 42.3 months. The main underlying diseases were: arterial hypertension in 33.8%, diabetes mellitus in 29.6% and chronic glomerulonephritis in 13.1%. The main dermatological manifestations observed were: xerosis in 109 (75.2%), ecchymosis in 87 (60.0%), pruritus in 78 (53.8%) and lentigo in 33 (22.8%) patients. Conclusion: Our study showed the presence of more than one dermatosis per patient. Skin changes are frequent in dialysis patients. Further studies are needed to better characterize and manage these dermatoses.

  11. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  12. Algorithms for Global Positioning

    DEFF Research Database (Denmark)

    Borre, Kai; Strang, Gilbert

    The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...

  13. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  14. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with DAL (Data Access Library) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and partially reimplemented in Java. The goal of the projec...

  15. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
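The prototypical partitional method is Lloyd's k-means algorithm, which alternates assignment and centroid updates. A minimal sketch on made-up 2-D data (my own illustration, not from the book):

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    # Lloyd's algorithm, the canonical partitional clustering method:
    # alternate nearest-centroid assignment and centroid recomputation.
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster emptied out
                centroids[i] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centroids, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # [3, 3]: the two well-separated groups are recovered
```

The consensus, constrained, and large-scale variants the book covers all build on this same assign/update skeleton, changing the objective or the data access pattern.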

  16. Fatigue Evaluation Algorithms: Review

    DEFF Research Database (Denmark)

    Passipoularidis, Vaggelis; Brøndsted, Povl

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck ... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects ...

  17. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
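The "weak rules of thumb" idea can be made concrete with AdaBoost over decision stumps, the classic instantiation. The following is a compact sketch on invented 1-D data, not code from the book:

```python
import math

def adaboost_stumps(xs, ys, rounds=3):
    # AdaBoost with decision stumps on 1-D data: each round reweights the
    # examples so the next weak "rule of thumb" focuses on current errors.
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    cuts = [x + 0.5 for x in sorted(set(xs))]
    for _ in range(rounds):
        best = None
        for thr in cuts:
            for pol in (1, -1):
                preds = [pol if x < thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, thr, pol))
        # exponential reweighting: misclassified points gain weight
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]

    def predict(x):
        score = sum(a * (p if x < t else -p) for a, t, p in ensemble)
        return 1 if score >= 0 else -1
    return predict

xs = [0, 1, 2, 3, 4, 5]
ys = [1, 1, -1, -1, 1, 1]   # no single stump can fit this labeling
predict = adaboost_stumps(xs, ys)
fits = [predict(x) for x in xs] == ys
print(fits)  # True: three combined stumps classify perfectly
```

No individual stump gets fewer than two points wrong here, yet the weighted vote of three is exact, which is the whole premise of boosting.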

  18. Likelihood Inflating Sampling Algorithm

    OpenAIRE

    Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.

    2016-01-01

    Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...
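The inflation idea, raising a subset's likelihood to the power k so one machine's subposterior mimics the full-data posterior, can be checked analytically in a conjugate model. The example below is my own illustration (normal mean with a normal prior, hypothetical parameters), not from the paper:

```python
import random

def normal_posterior(sum_x, n_eff, tau2=100.0, sigma2=1.0):
    # Conjugate update for the mean of N(theta, sigma2) under a N(0, tau2)
    # prior: returns posterior mean and precision.
    prec = 1.0 / tau2 + n_eff / sigma2
    return (sum_x / sigma2) / prec, prec

rng = random.Random(0)
data = [2.0 + rng.gauss(0.0, 1.0) for _ in range(1000)]
k = 4
subset = data[::k]  # one of k disjoint splits

full_mean, full_prec = normal_posterior(sum(data), len(data))
# LISA idea: target the subset likelihood raised to the power k, i.e. an
# effective sample size of k * len(subset) with a correspondingly scaled
# sufficient statistic; here conjugacy lets us compare posteriors exactly.
lisa_mean, lisa_prec = normal_posterior(k * sum(subset), k * len(subset))

print(full_prec == lisa_prec)  # True: the posterior precisions match
```

The precisions agree exactly and the means differ only by subset sampling noise, which is why the independently run, inflated-likelihood chains can be combined into a good approximation of the full posterior.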

  19. Constrained Minimization Algorithms

    Science.gov (United States)

    Lantéri, H.; Theys, C.; Richard, C.

    2013-03-01

    In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to a noise-free version of the data; the measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the (here linear) model of the data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated, and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, rigorously, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6, we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
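The multiplicative form alluded to above arises from splitting the gradient into positive and negative parts, so that the update is a ratio and nonnegativity is preserved automatically. A small ISRA-style sketch of this idea for nonnegative least squares (an illustration of the general mechanism, not the paper's exact SGM):

```python
def multiplicative_nnls(H, y, iters=200):
    # ISRA-style multiplicative update for min ||y - Hx||^2 subject to
    # x >= 0, assuming H and y are nonnegative. The gradient H^T(Hx - y)
    # splits into H^T y (negative part) and H^T H x (positive part), and
    # the ratio update x <- x * (H^T y) / (H^T H x) keeps x nonnegative.
    m, n = len(H), len(H[0])
    x = [1.0] * n
    num = [sum(H[i][j] * y[i] for i in range(m)) for j in range(n)]  # H^T y
    for _ in range(iters):
        Hx = [sum(H[i][j] * x[j] for j in range(n)) for i in range(m)]
        den = [sum(H[i][j] * Hx[i] for i in range(m)) for j in range(n)]
        x = [x[j] * num[j] / den[j] for j in range(n)]
    return x

H = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
y = [1.0, 4.0, 3.0]
x = multiplicative_nnls(H, y)
print([round(v, 3) for v in x])  # [1.0, 2.0], the exact nonnegative solution
```

Because each component is multiplied by a positive ratio, no projection step is needed; the constraint is built into the iteration itself.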

  20. ALGORITHM OF OBJECT RECOGNITION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available The second important problem to be resolved by the algorithm and its software, which together comprise the automatic design of a complex closed-circuit television system, is object recognition in the image transmitted by the video camera. Since the imaging of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved, and they can rarely be generalized. The majority of these methods are non-linear; therefore, there is a need to increase the computing power and the complexity of algorithms to be able to process the image. This paper covers the research of visual object recognition and the implementation of the algorithm in the form of a software application that operates in real-time mode.

  1. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  2. NEUTRON ALGORITHM VERIFICATION TESTING

    Energy Technology Data Exchange (ETDEWEB)

    COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-07-19

    Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
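
    The linear calibration relationship described above can be reproduced with an ordinary least-squares fit. The sketch below is a minimal illustration of fitting and inverting such a calibration curve; the data points are purely hypothetical stand-ins, not values from the report:

```python
import numpy as np

# Hypothetical calibration points: (235U mass in g, totals-corrected reals rate).
# These numbers are illustrative only; the report's actual data are not reproduced here.
mass = np.array([100.0, 250.0, 500.0, 750.0, 1000.0])
reals = np.array([12.1, 30.4, 60.2, 90.8, 120.5])

# Fit the linear calibration curve: reals = slope * mass + intercept.
slope, intercept = np.polyfit(mass, reals, 1)

def assay_mass(measured_reals):
    """Invert the calibration line to estimate 235U mass from a reals rate."""
    return (measured_reals - intercept) / slope
```

    Because the BNL approach yields a linear relationship, a calibration like this needs only a few points; a second-order polynomial (as sometimes required by the standard approach) would need more.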

  3. The Hip Restoration Algorithm

    Science.gov (United States)

    Stubbs, Allston Julius; Atilla, Halis Atil

    2016-01-01

    Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of symptoms is more difficult than in the shoulder or knee. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain may be various. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734

  4. An efficient algorithm for function optimization: modified stem cells algorithm

    Science.gov (United States)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  5. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  6. Discrete algorithmic mathematics

    CERN Document Server

    Maurer, Stephen B

    2005-01-01

    The exposition is self-contained, complemented by diverse exercises and also accompanied by an introduction to mathematical reasoning … this book is an excellent textbook for a one-semester undergraduate course and it includes a lot of additional material to choose from.-EMS, March 2006In a textbook, it is necessary to select carefully the statements and difficulty of the problems … in this textbook, this is fully achieved … This review considers this book an excellent one.-The Mathematical Gazette, March 2006

  7. Iterative Algorithms for Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Yao Yonghong

    2008-01-01

    Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to some fixed point of .

  8. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  9. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  10. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  11. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  12. Recent results on howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    Howard’s algorithm is a fifty-year old generally applicable algorithm for sequential decision making in face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...

  13. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  14. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the Selfish Gene Theory, the biologist Richard Dawkins's interpretation of Darwinian ideas, published in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  15. EM International. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-01

    It is the intent of EM International to describe the Office of Environmental Restoration and Waste Management's (EM's) various roles and responsibilities within the international community. Cooperative agreements and programs, descriptions of projects and technologies, and synopses of visits to international sites are all highlighted in this semiannual journal. The focus on EM programs in this issue is on international collaboration in vitrification projects. Technology highlights cover: in situ sealing for contaminated sites; and remote sensors for toxic pollutants. The section on profiles of countries includes: Arctic contamination by the former Soviet Union, and EM activities with Germany--cooperative arrangements.

  16. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....

  17. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed......Filtering every global constraint of a CSP to arc consistency at every search step can be costly and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances

  18. Mitochondrial cytochrome <em>B</em> sequence divergence among Spanish, Alpine and Abruzzo chamois (genus <em>Rupicapra</em>)

    Directory of Open Access Journals (Sweden)

    Nadia Mucci

    1998-12-01

    Full Text Available Abstract We have studied genetic divergence and phylogenetic relationships of Alpine, Spanish and Abruzzo chamois (genus <em>Rupicapra</em>) by sequencing a region of 330 nucleotides within the mitochondrial DNA cytochrome <em>b</em> gene (mtDNA cyt <em>b</em>). These sequences were aligned with additional homologous sequences of Caprinae: Japanese serow, Chinese goral, Canadian mountain goat, Mishmi takin, muskox, Sardinian mouflon and domestic goat. Results suggest that, using representatives of the Bovini as outgroups, the Caprinae constitute a monophyletic clade. However, inferred phylogenetic relationships among and within tribes of Caprinae were poorly defined and did not reflect current evolutionary and taxonomical views. In fact, the Asian Rupicaprini goral and serow constituted a strongly supported clade, which included the muskox, while the takin grouped with <em>Ovis</em>. Therefore, the monophyly of Ovibovini was not supported by cyt <em>b</em> sequences. Species of <em>Rupicapra</em> joined a strongly supported monophyletic clade, which was distantly related to the Asian rupicaprins and <em>Oreamnos</em>. Therefore, the monophyly of the Rupicaprini was not supported by these cyt <em>b</em> sequences. There were sister species relationships within <em>Rupicapra</em>, Spanish and Alpine chamois, and the Abruzzo chamois (<em>Rupicapra pyrenaica ornata</em>) was strictly related to the Spanish chamois (<em>Rupicapra pyrenaica parva</em>), as previously suggested by allozyme data and biogeographic reconstructions.

  19. New Trifluoromethyl Triazolopyrimidines as Anti-<em>Plasmodium</em> <em>falciparum</em> Agents

    Directory of Open Access Journals (Sweden)

    Núbia Boechat

    2012-07-01

    Full Text Available According to the World Health Organization, half of the World’s population, approximately 3.3 billion people, is at risk for developing malaria. Nearly 700,000 deaths each year are associated with the disease. Control of the disease in humans still relies on chemotherapy. Drug resistance is a limiting factor, and the search for new drugs is important. We have designed and synthesized new 2-(trifluoromethyl)[1,2,4]triazolo[1,5-<em>a</em>]pyrimidine derivatives based on bioisosteric replacement of functional groups on the anti-malarial compounds mefloquine and amodiaquine. This approach enabled us to investigate the impact of: (i) ring bioisosteric replacement; (ii) a CF3 group substituted at the 2-position of the [1,2,4]triazolo[1,5-<em>a</em>]pyrimidine scaffold and (iii) a range of amines as substituents at the 7-position of the heterocyclic ring; on <em>in vitro</em> activity against <em>Plasmodium falciparum</em>. According to docking simulations, the synthesized compounds are able to interact with <em>P. falciparum</em> dihydroorotate dehydrogenase (<em>Pf</em>DHODH) through strong hydrogen bonds. The presence of a trifluoromethyl group at the 2-position of the [1,2,4]triazolo[1,5-<em>a</em>]pyrimidine ring led to increased drug activity. Thirteen compounds were found to be active, with IC50 values ranging from 0.023 to 20 µM in the anti-HRP2 and hypoxanthine assays. The selectivity index (SI) of the most active derivatives 5, 8, 11 and 16 was found to vary from 1,003 to 18,478.

  20. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, we need to make a mathematical model for the degraded multiframe adaptive optics images. The point spread function model, varying with time, is deduced based on the phase error. The AO images are denoised using the image power spectral density and support constrain...

  1. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  2. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(nlogn). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
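
    For contrast with the alpha-shape method, the naive closing it improves upon can be sketched as a moving maximum (dilation) followed by a moving minimum (erosion). This sketch uses a flat structuring element of assumed width `w` rather than the paper's rolling alpha ball, so it only illustrates the dilation/erosion combination the abstract mentions:

```python
import numpy as np

def closing_1d(z, w):
    """Naive 1D morphological closing of a profile z with a flat window of width w.

    Dilation (moving max) followed by erosion (moving min); O(n*w) work,
    which is the kind of cost the alpha-shape algorithm avoids.
    """
    n, k = len(z), w // 2
    dil = np.array([z[max(0, i - k):min(n, i + k + 1)].max() for i in range(n)])
    ero = np.array([dil[max(0, i - k):min(n, i + k + 1)].min() for i in range(n)])
    return ero
```

    Closing is extensive, so the returned envelope never dips below the input profile, which is what makes it useful as an upper contact envelope.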

  3. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography is a method to secure a file by writing hidden code to cover the original file; people not involved in the cryptography cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting the file using the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, by using the TEA algorithm to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
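
    The symmetric half of the scheme, the Tiny Encryption Algorithm, operates on a 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words). A minimal sketch of the standard 32-cycle TEA round function (not the paper's full file-encryption pipeline):

```python
def tea_encrypt(v, k):
    """Encrypt one 64-bit block v = (v0, v1) with 128-bit key k = (k0, k1, k2, k3)."""
    v0, v1 = v
    delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & mask
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & mask
    return v0, v1

def tea_decrypt(v, k):
    """Run the 32 cycles in reverse to recover the plaintext block."""
    v0, v1 = v
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * 32) & mask
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & mask
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & mask
        s = (s - delta) & mask
    return v0, v1
```

    Each 8-byte plaintext block yields an 8-byte ciphertext block, i.e. sixteen hexadecimal characters, consistent with the size growth reported in the abstract.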

  4. Contour Error Map Algorithm

    Science.gov (United States)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
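
    The binarization and grid-comparison steps described above can be sketched as follows. The 0-180 degree onshore sector is an assumed convention for illustration only, not the convention used by the CEM software, and the disagreement map is just the pointwise comparison that precedes CEM's correlation scoring:

```python
import numpy as np

def binarize_wind(direction_deg, onshore_lo=0.0, onshore_hi=180.0):
    """Map gridded wind directions to the binary onshore/offshore field.

    The onshore sector [onshore_lo, onshore_hi) is a hypothetical convention;
    1 = onshore wind, 0 = offshore wind, matching the D/d encoding in the text.
    """
    direction_deg = np.asarray(direction_deg)
    return ((direction_deg >= onshore_lo) & (direction_deg < onshore_hi)).astype(np.uint8)

def disagreement(D, d):
    """Pointwise forecast/observation disagreement on the grid (1 = mismatch)."""
    return (np.asarray(D) != np.asarray(d)).astype(np.uint8)
```

    In use, `D(i,j;n)` would come from binarizing the forecast field and `d(i,j;n)` from the gridded observations at each 5-minute index n.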

  5. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
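
    Because Kolmogorov quantities are incomputable, the paper approximates them with a real compressor. A closely related, well-established compression approximation is the normalized compression distance (NCD); the zlib-based sketch below is offered as an analogy to, not a reimplementation of, the paper's cross-complexity and relative-complexity measures:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length of s, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small for similar strings, near 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    As with the divergence derived in the paper, this yields a parameter-free dissimilarity applicable to any pair of strings, which is what enables applications such as authorship attribution.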

  6. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  7. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
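
    The strict-priority rule described above (a goal is never pre-empted by a lower-priority goal) can be sketched as a greedy pass over goals in priority order. The single shared capacity below is a deliberate simplification of the VML system's multiple resources, and the goal names are hypothetical:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection.

    goals: list of (priority, name, resource_demand); higher priority wins.
    A lower-priority goal is admitted only if it fits in the remaining capacity,
    so it can never displace an already-selected higher-priority goal.
    """
    chosen, used = [], 0
    for prio, name, demand in sorted(goals, key=lambda g: -g[0]):
        if used + demand <= capacity:
            chosen.append(name)
            used += demand
    return chosen
```

    Because the pass is a single ordered sweep, goals can be added or updated late and the selection recomputed cheaply, which is the "just-in-time" property the abstract emphasizes.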

  8. Comparison with reconstruction algorithms in magnetic induction tomography.

    Science.gov (United States)

    Han, Min; Cheng, Xiaolin; Xue, Yuyan

    2016-05-01

    Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we make an effort to improve the quality of image reconstruction through analysis of MIT imaging, covering both the forward problem and the image reconstruction. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by subdividing the field and using an appropriate interpolation function, so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a method of modifying the iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced, and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of measured voltages is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the lesion more flexible and adaptive to the patients' real conditions, which provides a theoretical reference for the development of the MIT technique in clinical applications.
    The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and the improved iterative NR method and EM algorithm can enhance the image
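
    One regularized Newton-Raphson update of the kind discussed above can be sketched as follows. For simplicity this uses an L2 (Tikhonov) penalty as a stand-in for the paper's weighting matrix and L1-norm regularization; `J` is the sensitivity (Jacobian) matrix mapping conductivity changes to coil-voltage changes, and `lam` is an assumed regularization weight:

```python
import numpy as np

def nr_update(J, dv, sigma, lam=1e-2):
    """One regularized Newton-Raphson step for the conductivity image.

    Solves min ||J d_sigma - dv||^2 + lam ||d_sigma||^2 (an L2 stand-in for
    the paper's L1 regularization), then applies the update to sigma.
    """
    A = J.T @ J + lam * np.eye(J.shape[1])
    d_sigma = np.linalg.solve(A, J.T @ dv)
    return sigma + d_sigma
```

    The `lam * I` term is what stabilizes the step when `J.T @ J` is ill-conditioned, which is the situation the abstract describes: far fewer measured voltages than unknown conductivity pixels.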

  9. STAGE-BY-STAGE ALGORITHM FOR SIMULATION OF LIQUID PHASE EXTRACTION CASCADES APPLYING THE EQUILIBRIUM MODEL

    Directory of Open Access Journals (Sweden)

    César A Sánchez

    2009-12-01

    Full Text Available We present a stage-by-stage solution method for the set of mass balance, equilibrium relation, composition summation and enthalpy (MESH) equations that represent the equilibrium model for a countercurrent arrangement of liquid-phase extraction stages. The theoretical foundation lies in thermodynamics: liquid-liquid equilibrium, the isothermal flash and the adiabatic flash. The algorithm goes beyond the scope of the graphical and isothermal methods typical in the study of extraction processes and is applicable to additional, very common situations: heat transfer in the stages, adiabatic stages, and different temperatures for the feed and solvent streams. The algorithm is illustrated with three examples: the first two in isothermal operation with three components (water, acetic acid and butyl acetate) and ten stages, and a third, more elaborate one involving heat transfer with four components (water, acetic acid, butanol and butyl acetate) and fifteen stages.

  10. Reactive Power Planning In Electrical Systems Using The Benders Decomposition Technique And Branch And Bound Algorithm [Planejamento De Fontes Reativas Em Sistemas De Energia Elétrica Utilizando A Técnica De Decomposição De Benders E O Algoritmo De Branch-and-bound]

    OpenAIRE

    Mantovani J.R.S.; Scucuglia J.W.; Romero R.; Garcia A.V.

    2001-01-01

    This paper presents the Benders decomposition technique and Branch and Bound algorithm used in the reactive power planning in electric energy systems. The Benders decomposition separates the planning problem into two subproblems: an investment subproblem (master) and the operation subproblem (slave), which are solved alternately. The operation subproblem is solved using a successive linear programming (SLP) algorithm while the investment subproblem, which is an integer linear programming (ILP...

  11. A speedup technique for (l, d)-motif finding algorithms

    Directory of Open Access Journals (Sweden)

    Dinh Hieu

    2011-03-01

    Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies the motifs and an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. 
These experimental results show that our speedup technique is indeed very
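    A naive exact PMS solver makes the problem statement concrete (an exhaustive sketch whose running time is exponential in l, which is exactly why speedup techniques matter; this is not the paper's technique):

```python
from itertools import product

def pms(seqs, l, d):
    """Exhaustive (l, d)-motif search: report every l-mer over {A,C,G,T}
    that occurs in each sequence within Hamming distance d."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    def occurs(motif, s):
        # Does any l-mer of s lie within Hamming distance d of motif?
        return any(hamming(motif, s[i:i + l]) <= d
                   for i in range(len(s) - l + 1))
    return [''.join(c) for c in product("ACGT", repeat=l)
            if all(occurs(c, s) for s in seqs)]
```

    The 4^l candidate enumeration dominates the cost, so practical exact algorithms prune this space aggressively.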

  12. Genotoxicity of <em>Euphorbia hirta</em>: An <em>Allium cepa</em> Assay

    Directory of Open Access Journals (Sweden)

    Kwan Yuet Ping

    2012-06-01

    Full Text Available The potential genotoxic effects of methanolic extracts of <em>Euphorbia hirta</em>, which is commonly used in traditional medicine to treat a variety of diseased conditions including asthma, coughs, diarrhea and dysentery, were investigated using the <em>Allium cepa</em> assay. Extracts of 125, 250, 500 and 1,000 µg/mL were tested on root meristems of <em>A. cepa</em>. Ethyl methanesulfonate was used as positive control and distilled water was used as negative control. The results showed that the mitotic index decreased as the concentration of the <em>E. hirta</em> extract increased. A dose-dependent increase of chromosome aberrations was also observed. Abnormalities scored were stickiness, c-mitosis, bridges and vagrant chromosomes. Micronucleated cells were also observed at interphase. The results of this study confirmed that the methanol extracts of <em>E. hirta</em> exerted significant genotoxic and mitodepressive effects at 1,000 µg/mL.

  13. Applications of algorithmic differentiation to phase retrieval algorithms.

    Science.gov (United States)

    Jurling, Alden S; Fienup, James R

    2014-07-01

    In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.
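    As an illustration of the kind of gradient reverse-mode differentiation delivers mechanically, the gradient of a simple Fourier-magnitude error metric can also be derived by hand (a sketch using the Wirtinger dE/dx̄ convention; the metric and names are illustrative, not the paper's code):

```python
import numpy as np

def phase_retrieval_grad(x, m):
    """Gradient (Wirtinger dE/dx-bar) of E(x) = sum((|FFT(x)| - m)**2).

    Reverse-mode chain rule: form the adjoint of the magnitude error at
    X = FFT(x), then pull it back through the FFT (whose adjoint is
    N * ifft)."""
    X = np.fft.fft(x)
    G = (np.abs(X) - m) * X / np.abs(X)   # adjoint of E w.r.t. conj(X)
    return len(x) * np.fft.ifft(G)        # propagate back through the FFT
```

    A finite-difference check along the real part of one component confirms the hand derivation, which is precisely the drudgery algorithmic differentiation removes.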

  14. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  15. Similarity-regulation of OS-EM for accelerated SPECT reconstruction

    Science.gov (United States)

    Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.

    2016-06-01

    Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
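    The plain OS-EM update that these regulated variants build on can be sketched as follows (a toy dense-matrix sketch without the count- or similarity-based subset regulation; names and defaults are illustrative, not from the paper):

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iters=10):
    """Plain ordered-subsets EM for emission data y ~ Poisson(A @ x).

    A: (n_proj, n_vox) system matrix; y: (n_proj,) measured counts.
    Each pass over all subsets costs about one ML-EM iteration but
    applies n_subsets multiplicative updates, hence the speedup."""
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)                                # flat initial estimate
    subsets = [np.arange(s, n_proj, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:                           # one update per subset
            As = A[idx]
            proj = np.maximum(As @ x, 1e-12)          # forward projection
            back = As.T @ (y[idx] / proj)             # backprojected ratio
            sens = np.maximum(As.sum(axis=0), 1e-12)  # subset sensitivity
            x = x * back / sens                       # multiplicative EM update
    return x
```

    CR-OS-EM and SR-OS-EM keep this update but choose the effective number of subsets per voxel rather than globally.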


  17. Algorithms and their others: Algorithmic culture in context

    Directory of Open Access Journals (Sweden)

    Paul Dourish

    2016-08-01

    Full Text Available Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.

  18. Fighting Censorship with Algorithms

    Science.gov (United States)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  19. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  20. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become...... the standard in ED's worldwide. However, triage models are also time consuming, supported by limited evidence and could potentially be of more harm than benefit. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and evaluate if this new model...... is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  1. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithm, clustering routing algorithms among them, are analyzed, along with the advantages, disadvantages and applicability of each.

  2. Genetic Algorithms in Noisy Environments

    OpenAIRE

    THEN, T. W.; CHONG, EDWIN K. P.

    1993-01-01

    Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...

  3. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  4. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  5. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data-driven and self-adaptive: it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  6. Fuzzy HRRN CPU Scheduling Algorithm

    OpenAIRE

    Bashir Alam; R. Biswas; M. Alam

    2011-01-01

    There are several scheduling algorithms such as FCFS, SRTN, RR, and priority scheduling. Scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use fuzzy techniques in scheduling the jobs. In this paper we have proposed a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy inference system (FIS) technique.
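    The crisp response ratio that basic HRRN ranks jobs by can be sketched as follows (a baseline sketch of plain HRRN; the paper's fuzzy inference layer is not reproduced here):

```python
def hrrn_pick(jobs, now):
    """Select the next job under (crisp) Highest Response Ratio Next.

    jobs: list of (name, arrival_time, service_time) for ready jobs."""
    def response_ratio(job):
        _, arrival, service = job
        waiting = now - arrival
        # (waiting + service) / service favors long waiters and short jobs
        return (waiting + service) / service
    return max(jobs, key=response_ratio)
```

    A fuzzy variant replaces the crisp waiting and service times with fuzzy values and ranks jobs by the output of an inference system instead of this ratio.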

  7. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  8. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  9. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtably a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the wellknown distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
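    The distance-to-average-point measure mentioned above has a standard form: the mean distance of individuals from the population centroid, normalized by population size and the length of the search-space diagonal. A sketch under that assumption:

```python
import numpy as np

def diversity(population, diag_len):
    """Distance-to-average-point diversity: mean Euclidean distance of
    individuals from the population centroid, normalized by the length
    of the search-space diagonal (the usual form of the DGEA measure)."""
    pop = np.asarray(population, dtype=float)
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1)) / diag_len
```

    DGEA switches to exploration when this value falls below a low threshold and back to exploitation when it rises above a high one.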

  10. Emergency Medical Service (EMS) Stations

    Data.gov (United States)

    Kansas Data Access and Support Center — EMS Locations in Kansas. The EMS stations dataset consists of any location where emergency medical services (EMS) personnel are stationed or based out of, or where...

  11. Influence of Imputation and EM Methods on Factor Analysis When Item Nonresponse in Questionnaire Data Is Nonignorable.

    Science.gov (United States)

    Bernaards, Coen A.; Sijtsma, Klaas

    2000-01-01

    Using simulation, studied the influence of each of 12 imputation methods and 2 methods using the EM algorithm on the results of maximum likelihood factor analysis as compared with results from the complete data factor analysis (no missing scores). Discusses why EM methods recovered complete data factor loadings better than imputation methods. (SLD)

  12. Backtrack Orbit Search Algorithm

    Science.gov (United States)

    Knowles, K.; Swick, R.

    2002-12-01

    A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm, an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.

  13. Diagnostic algorithm for syncope.

    Science.gov (United States)

    Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon

    2014-09-01

    Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Toward an Algorithmic Pedagogy

    Directory of Open Access Journals (Sweden)

    Holly Willis

    2007-01-01

    Full Text Available The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push farther: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today’s high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only ways of being but ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically and to what ends?

  15. EM cluster analysis for categorical data

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2006-01-01

    Roč. 44, č. 4109 (2006), s. 640-648 ISSN 0302-9743. [Joint IAPR International Workshops SSPR 2006 and SPR 2006. Hong Kong , 17.08.2006-19.08.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : cluster analysis * categorical data * EM algorithm Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.402, year: 2005

  16. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...

  17. Echo Cancellation I: Algorithms Simulation

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2000-04-01

    Full Text Available An echo cancellation system used in mobile communications is analyzed. The convergence behavior and misadjustment of several LMS algorithms are compared; misadjustment here means errors in filter-weight estimation. The resulting echo suppression for the discussed algorithms is evaluated with simulated as well as real speech signals. An optimal echo cancellation configuration is suggested.
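    The core LMS update underlying such cancellers can be sketched as follows (a minimal sample-by-sample sketch; the tap count and step size mu are illustrative, not taken from the article):

```python
import numpy as np

def lms_echo_cancel(far_end, mic, n_taps=8, mu=0.05):
    """Adaptive LMS echo canceller sketch.

    far_end: reference (loudspeaker) signal; mic: microphone signal
    containing the echo.  Returns the error signal (echo-suppressed
    output) and the final filter weights."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)                 # most recent far-end samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        e = mic[n] - w @ buf               # subtract estimated echo
        w = w + mu * e * buf               # LMS weight update
        out[n] = e
    return out, w
```

    The misadjustment the abstract refers to is the excess of the steady-state error power of such a filter over the minimum achievable error power.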

  18. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using a randomizing and a sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ(z)(¼) as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  19. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  20. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM) under SUSE 9.2 and 10.1.
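    The dynamic-programming core of Needleman-Wunsch global alignment can be sketched as follows (scoring parameters are illustrative defaults, not the values used in the paper):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    # F[i][j] = best score for aligning a[:i] against b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                     # align a[:i] against gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap                     # align b[:j] against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,    # match / substitution
                          F[i - 1][j] + gap,      # gap in b
                          F[i][j - 1] + gap)      # gap in a
    return F[n][m]
```

    A traceback over the same table recovers the alignment itself; the score alone suffices for comparing sequences.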

  1. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  2. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtably a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorit...

  3. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem

  4. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  5. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  6. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  7. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  8. On exact algorithms for treewidth

    NARCIS (Netherlands)

    Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.

    2006-01-01

    We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a

  9. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning frame work. This frame work can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  10. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
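    The alpha-trimmed mean filter named as the backbone of the first algorithm can be sketched as follows; the window size, trim count, and toy image are illustrative choices, not the paper's settings:

```python
import numpy as np

def alpha_trimmed_mean(img, k=3, trim=1):
    """Slide a k x k window over img; in each window drop the `trim`
    smallest and `trim` largest values and average the remainder."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = np.sort(padded[r:r + k, c:c + k].ravel())
            out[r, c] = window[trim:window.size - trim].mean()
    return out

# A single impulse ("salt" pixel) is rejected much as a median would do.
noisy = np.array([[10., 10., 10.], [10., 255., 10.], [10., 10., 10.]])
print(alpha_trimmed_mean(noisy)[1, 1])  # 10.0
```

    With trim=0 the filter degenerates to a plain mean; as trim grows it approaches the median, which is the usual trade-off between noise suppression and detail preservation.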

  11. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
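    One member of the "shift-and-add" family covered in Part II is CORDIC; below is a minimal rotation-mode sketch for sine and cosine, in double precision rather than the fixed-point arithmetic a hardware implementation would use:

```python
import math

def cordic_sincos(theta, n=40):
    """Rotation-mode CORDIC: rotate (1, 0) toward angle theta using
    only additions and multiplications by powers of two."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # total gain compensation
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        # The shift-and-add step: 2**-i would be a bit shift in hardware.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K  # (sin(theta), cos(theta))

s, c = cordic_sincos(math.pi / 6)
print(s, c)  # close to 0.5 and 0.8660...
```

    Convergence holds for |theta| up to about 1.74 rad; larger arguments need the range-reduction techniques the book discusses.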

  12. Angiostrongylus vasorum in red foxes (Vulpes vulpes) and badgers (Meles meles) from Central and Northern Italy

    Directory of Open Access Journals (Sweden)

    Marta Magi

    2010-06-01

    Full Text Available Abstract During 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) were collected in different areas of Central and Northern Italy (Piedmont, Liguria and Tuscany) and examined for Angiostrongylus vasorum infection. The prevalence of the infection was significantly different in the areas considered, with the highest values in the district of Imperia (80%, Liguria) and in Montezemolo (70%, southern Piedmont); the prevalence in Tuscany was 7%. One badger collected in the area of Imperia turned out to be infected, representing the first report of the parasite in this species in Italy. Further studies are needed to evaluate the role played by fox populations as reservoirs of infection and the probability of its spreading to domestic dogs.

    doi:10.4404/hystrix-20.2-4442

  13. International EMS Systems

    DEFF Research Database (Denmark)

    Langhelle, Audun; Lossius, Hans Morten; Silfvast, Tom

    2004-01-01

    Emergency medicine service (EMS) systems in the five Nordic countries have more similarities than differences. One similarity is the involvement of anaesthesiologists as pre-hospital physicians and their strong participation for all critically ill and injured patients in-hospital. Discrepancies do exist, however, especially within the ground and air ambulance service, and the EMS systems face several challenges. Main problems and challenges emphasized by the authors are: (1) Denmark: the dispatch centres are presently not under medical control and are without a national criteria based system; access to on-line medical advice of a physician is not available; (2) Finland: the autonomy of the individual municipalities and their responsibility to cover for primary and specialised health care, as well as the EMS, and the lack of supporting or demanding legislation regarding the EMS; (3) Iceland is the only country that has emergency medicine (EM) as a recognised speciality, but there is a need for more fully trained specialists in EM; (4) Norway: the ordinary ground ambulance is pointed out as the weakest link in the EM chain, and a health reform demands extensive co-operation between the new health enterprises to re-establish a nation-wide air ambulance service; (5) Sweden: to create evidence-based medicine standards for treatment in emergency medicine, a better integration of all parts of the chain of survival, a formalised education in EM and a nation-wide physician-staffed helicopter EMS (HEMS) cover.

  14. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. Its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two interpolation methods, Lagrange interpolation and Hermite interpolation, applied to the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated from the interpolated electromagnetic parameters is broadly consistent with that obtained through experiment. - Highlights: • We use interpolation algorithms to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • RL calculated from interpolation is consistent with RL from experiment
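    The two interpolation schemes compared in the record can be sketched in pure Python on a synthetic function (the paper's measured permittivity data are not reproduced here):

```python
import math

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def hermite_interp(x0, x1, y0, y1, d0, d1, x):
    """Two-point cubic Hermite interpolation on [x0, x1]; unlike
    Lagrange, it also matches the derivatives at the endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1

# Synthetic samples of f(x) = exp(-x); note f'(x) = -f(x).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(-v) for v in xs]
xq = 0.7
err_l = abs(lagrange_interp(xs, ys, xq) - math.exp(-xq))
err_h = abs(hermite_interp(0.5, 1.0, ys[1], ys[2], -ys[1], -ys[2], xq) - math.exp(-xq))
print(err_l, err_h)
```

    Hermite interpolation consumes derivative information that Lagrange interpolation ignores, which is the qualitative reason for the accuracy advantage the record reports.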

  15. A comparison of two estimation algorithms for Samejima's continuous IRT model.

    Science.gov (United States)

    Zopluoglu, Cengiz

    2013-03-01

    This study compares two algorithms, as implemented in two different software packages, that have appeared in the literature for estimating item parameters of Samejima's continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided, and CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed before CRM can be recommended as standard practice in the CBM context.
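    For context, the EM iteration itself can be illustrated on the textbook two-component Gaussian mixture problem; this is a generic sketch, not the CRM estimator of either software package:

```python
import math, random

random.seed(0)
# Synthetic data from two unit-variance Gaussians centred at 0 and 4.
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(4, 1) for _ in range(300)]

mu = [-1.0, 1.0]     # initial means
mix = [0.5, 0.5]     # mixing weights
for _ in range(50):
    # E step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        w = [mix[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in (0, 1)]
        s = w[0] + w[1]
        resp.append([w[0] / s, w[1] / s])
    # M step: re-estimate means and weights from the responsibilities.
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        mix[k] = nk / len(data)

print([round(m, 2) for m in mu])  # means near 0 and 4
```

    Every EM variant mentioned in these records, including the simplified one evaluated here, follows this same E-step/M-step alternation; only the complete-data model changes.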

  16. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  17. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Sources of energy changeover occurred at 50% of the battery state-of-charge level in heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for a light electric vehicle (LEV), i.e., a scooter. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), the EMS and power controller, the DC machine and the vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.

  18. Compound algorithm for restoration of heavy turbulence-degraded image for space target

    Science.gov (United States)

    Wang, Liang-liang; Wang, Ru-jie; Li, Ming; Kang, Zi-qian; Xu, Xiao-qin; Gao, Xin

    2012-11-01

    Restoration of atmospheric-turbulence-degraded images is a pressing problem in astronomical space technology. The point spread function of turbulence is unknown, time-varying, and hard to describe with mathematical models; moreover, various noises (such as sensor noise) are introduced during imaging. Images of space targets are therefore edge-blurred and heavily noised, making it difficult for any single restoration algorithm to meet the restoration requirements. Focusing on the heavily noised, turbulence-degraded images of space targets acquired by ground-based optical telescopes, this paper discusses the adjustment and reformation of the algorithm structures and the selection of parameters obtained by combining a nonlinear filtering algorithm based on the spatial characteristics of the noise, a regularization-based restoration algorithm for heavily turbulence-degraded images of space targets, and an EM restoration algorithm grounded in statistical theory. To test the validity of the approach, a series of restoration experiments were performed on heavily noised, turbulence-degraded images of space targets. The experimental results show that the new compound algorithm achieves noise suppression and detail preservation simultaneously, and is effective and practical. Furthermore, the definition measures and relative definition measures show that the new compound algorithm is better than the traditional algorithms.
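    The statistics-based EM restoration component can be illustrated by the classic Richardson-Lucy iteration, the EM algorithm for deconvolution under Poisson noise; this 1-D sketch is illustrative and is not the paper's compound pipeline:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """Richardson-Lucy deconvolution: each pass multiplies the estimate
    by the back-projected ratio of observed to re-blurred data."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        reblurred = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est

psf = np.array([0.25, 0.5, 0.25])       # assumed known blur kernel
sharp = np.zeros(21)
sharp[10] = 10.0                        # a point source
blurred = np.convolve(sharp, psf, mode='same')
restored = richardson_lucy(blurred, psf)
print(restored.argmax(), restored[10])  # energy re-concentrates at index 10
```

    In the turbulence setting the PSF is unknown, which is exactly why the paper combines such a statistical step with regularization and noise-adaptive filtering.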

  19. Bayesian missing data problems EM, data augmentation and noniterative computation

    CERN Document Server

    Tan, Ming T; Ng, Kai Wang

    2009-01-01

    Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation presents solutions to missing data problems through explicit or noniterative sampling calculation of Bayesian posteriors. The methods are based on the inverse Bayes formulae discovered by one of the authors in 1995. Applying the Bayesian approach to important real-world problems, the authors focus on exact numerical solutions, a conditional sampling approach via data augmentation, and a noniterative sampling approach via EM-type algorithms. After introducing the missing data problems, Bayesian approach, and poste

  20. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  1. The Forward-Reverse Algorithm for Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2015-01-07

    In this work, we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which we solve a set of deterministic optimization problems where the SRNs are replaced by the classical ODE rates; then, during the second phase, the Monte Carlo version of the EM algorithm is applied starting from the output of the previous phase. Starting from a set of over-dispersed seeds, the output of our two-phase method is a cluster of maximum likelihood estimates obtained by using convergence assessment techniques from the theory of Markov chain Monte Carlo.

  2. An Affinity Propagation-Based DNA Motif Discovery Algorithm

    Directory of Open Access Journals (Sweden)

    Chunxiao Sun

    2015-01-01

    Full Text Available The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, playing an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Identifying weak motifs and reducing the effect of local optima remain important but challenging tasks for motif discovery. To address these tasks, we propose a new algorithm, APMotif, which first applies Affinity Propagation (AP) clustering to DNA sequences to produce informative, high-quality candidate motifs, and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results on both simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of prediction accuracy.
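    Affinity Propagation itself can be sketched with the standard responsibility/availability message updates; the toy 1-D data, damping, and preference choices below are illustrative, and real motif discovery would cluster l-mers under a sequence similarity instead:

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.9):
    """Exchange responsibility (R) and availability (A) messages over a
    similarity matrix S until exemplars emerge."""
    n = S.shape[0]
    R = np.zeros((n, n)); A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: evidence that k should be i's exemplar.
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: accumulated support for k being an exemplar.
        Rp = np.maximum(R, 0); np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0); np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)   # exemplar chosen for each point

pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])   # two obvious clusters
S = -np.abs(pts[:, None] - pts[None, :]) ** 2    # negative squared distance
np.fill_diagonal(S, np.median(S))                # preference = median
labels = affinity_propagation(S)
print(labels)
```

    Unlike k-means, the number of clusters is not fixed in advance but controlled by the preference values, which is part of what makes AP attractive for generating candidate motifs.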

  3. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During last decade, the nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...

  4. Complex networks an algorithmic perspective

    CERN Document Server

    Erciyes, Kayhan

    2014-01-01

    Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r

  5. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
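    The search process the report illustrates can be sketched with a minimal GA on the OneMax toy problem (the operators and rates below are illustrative choices):

```python
import random

random.seed(1)
L, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)          # OneMax: maximise the number of 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        # Selection: tournament of size 3, run twice.
        p1 = max(random.sample(pop, 3), key=fitness)
        p2 = max(random.sample(pop, 3), key=fitness)
        # Crossover: one-point.
        cut = random.randrange(1, L)
        child = p1[:cut] + p2[cut:]
        # Mutation: independent bit flips with probability 1%.
        child = [b ^ (random.random() < 0.01) for b in child]
        nxt.append(child)
    pop = nxt

print(fitness(max(pop, key=fitness)))
```

    Schemata, the building blocks the report introduces, are exactly the short, high-fitness bit patterns that selection and crossover propagate through such a population.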

  6. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  7. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory and the principles of quantum computation are reviewed, together with the possibility of building on this basis a device unique in computational power and operating principle: the quantum computer. The main quantum logic blocks and schemes for implementing quantum computations are presented, as well as some of the effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described

  8. Algorithms Design Techniques and Analysis

    CERN Document Server

    Alsuwaiyel, M H

    1999-01-01

    Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem on its own using ad hoc techniques or follow those techniques that have produced efficient solutions to similar problems. This requires the understanding of various algorithm design techniques, how and when to use them to formulate solutions and the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm desi

  9. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  10. O ator em jogo

    OpenAIRE

    Daves Otani

    2005-01-01

    Abstract: O Ator em Jogo (The Actor at Play) is a personal reflection on aspects of the creative process in two theatre productions staged by the group Boa Companhia: PRIMUS (adapted from Franz Kafka's short story "A Report to an Academy") and MISTER K. E OS ARTISTAS DA FOME (adapted from Kafka's short story "A Hunger Artist"). Drawing on the practical experience of capoeira, which served as a creative matrix in both productions, each in its own specific way, I reflect on the basis of a diary that recounts experie...

  13. Expression of Selected Ginkgo biloba Heat Shock Protein Genes After Cold Treatment Could Be Induced by Other Abiotic Stress

    Directory of Open Access Journals (Sweden)

    Feng Xu

    2012-05-01

    Full Text Available Heat shock proteins (HSPs) play various stress-protective roles in plants. In this study, three HSP genes were isolated from a suppression subtractive hybridization (SSH) cDNA library of Ginkgo biloba leaves treated with cold stress. Based on molecular weight, the three genes were designated GbHSP16.8, GbHSP17 and GbHSP70. The full-length sequences of the three genes were predicted to encode polypeptide chains of 149 amino acids (aa), 152 aa, and 657 aa, with corresponding predicted molecular weights of 16.67 kDa, 17.39 kDa, and 71.81 kDa, respectively. The three genes exhibited distinctive expression patterns in different organs and developmental stages. GbHSP16.8 and GbHSP70 showed high expression levels in leaves and low levels in gynoecia, while GbHSP17 showed higher transcription in stamens and lower levels in fruit. This result indicates that GbHSP16.8 and GbHSP70 may play important roles in Ginkgo leaf development and photosynthesis, and GbHSP17 may play a positive role in pollen maturation. All three GbHSPs were up-regulated under cold stress, whereas extreme heat stress caused up-regulation only of GbHSP70; UV-B treatment resulted in up-regulation of GbHSP16.8 and GbHSP17; wounding treatment resulted in up-regulation of GbHSP16.8 and GbHSP70; and abscisic acid (ABA) treatment caused up-regulation primarily of GbHSP70.

  14. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l, d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string, and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared-memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in
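    The d-neighborhood idea can be sketched for the substitution-only (Hamming) case; the paper's algorithm additionally handles insertions and deletions and uses a compact wildcard representation:

```python
from itertools import combinations, product

def hamming_neighbors(lmer, d, alphabet="ACGT"):
    """All strings at Hamming distance exactly d from lmer, matching the
    paper's observation that distance-exactly-d candidates suffice."""
    out = set()
    for positions in combinations(range(len(lmer)), d):
        for repl in product(alphabet, repeat=d):
            if any(lmer[p] == c for p, c in zip(positions, repl)):
                continue   # a matching letter would reduce the distance
            cand = list(lmer)
            for p, c in zip(positions, repl):
                cand[p] = c
            out.add("".join(cand))
    return out

neigh = hamming_neighbors("ACGT", 1)
print(len(neigh))  # 4 positions x 3 alternative letters = 12
```

    Intersecting such neighborhoods across all input strings yields the common motifs; the paper's contribution is doing this without the combinatorial blow-up of materialising every candidate repeatedly.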

  15. Optimum energy re-establishment in distribution systems: a comparison between the search performance using fuzzy heuristics and genetic algorithms; Restabelecimento otimo de energia em sistemas de distribuicao: uma comparacao entre o desempenho de busca com heuristica fuzzy e algoritmos geneticos

    Energy Technology Data Exchange (ETDEWEB)

    Delbem, Alexandre C.B.; Bretas, Newton G. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Dept. de Engenharia Eletrica; Carvalho, Andre C.P.L.F. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Dept. de Ciencias de Computacao e Estatistica

    1996-11-01

    A search approach using fuzzy heuristics and a neural network parameter was developed for service restoration of a distribution system. The goal was to restore energy to an un-faulted zone after a fault had been identified and isolated. The restoration plan must be carried out in a very short period. However, the combinatorial nature of the problem constrained the application of automatic energy restoration planners. To overcome this problem, a heuristic search approach using fuzzy heuristics was proposed. In addition, a genetic algorithm approach was developed to achieve the optimal energy restoration plan. The effectiveness of these approaches was tested on a simplified distribution system based on the complex distribution system of Sao Carlos city, Sao Paulo State, southeast Brazil. The genetic algorithm provided better performance than the fuzzy heuristic search on this problem. 11 refs., 10 figs.

  16. Adaptive Maneuvering Target Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Chunling Wu

    2014-07-01

    Full Text Available Based on the current statistical model, a new adaptive maneuvering-target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the merits of high tracking precision that the current statistical (CS) model and the strong tracking filter (STF) have in tracking maneuvering targets, with two modifications. First, because the STF achieves its excellent performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, the new algorithm modifies the prediction error covariance matrix and the fading factor to improve tracking precision in both the maneuvering and the non-maneuvering segments. Second, the estimation error covariance matrix is calculated in the Joseph form, which is numerically more stable and robust. Monte Carlo simulations show that the CS-MSTF algorithm performs better than CS-STF and estimates efficiently.
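    The Joseph-form covariance update mentioned in the abstract can be sketched as a generic Kalman measurement update (the matrices here are illustrative):

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form covariance update: stays symmetric and positive
    semidefinite in floating point even with a suboptimal gain K."""
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

P = np.diag([4.0, 1.0])                 # prior covariance
H = np.array([[1.0, 0.0]])              # we observe the first state only
R = np.array([[0.5]])                   # measurement noise
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)          # optimal Kalman gain
P_new = joseph_update(P, K, H, R)
print(P_new)
```

    With the optimal gain this equals the short form (I - KH)P, but the short form can lose symmetry through round-off, which is the robustness point the abstract makes.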

  17. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
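    A recursive least-squares update of the kind the brief describes can be sketched as follows; the brief's exact recursions for stepping up the model order are in the original report, so the per-sample form below is a generic stand-in:

```python
import numpy as np

def rls_fit(X, y, lam=1e6):
    """Recursive least squares: fold in one sample at a time instead of
    re-solving the normal equations from scratch."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = lam * np.eye(n)                      # large initial covariance
    for x, t in zip(X, y):
        k = P @ x / (1.0 + x @ P @ x)        # gain for this sample
        theta = theta + k * (t - x @ theta)  # correct by prediction error
        P = P - np.outer(k, x @ P)           # shrink the covariance
    return theta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])
y = X @ np.array([2.0, -3.0])                # exact linear data, no noise
print(rls_fit(X, y).round(3))
```

    Because each update reuses the previous inverse via P, nothing is recomputed when data (or, in the brief's setting, model terms) arrive incrementally.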

  18. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of an eLearning-platform modular application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain, algorithmics. Logical diagrams are a graphic representation of an algorithm using different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for an algorithm and then automatically generate the C code and test it.

  19. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  20. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  1. Efficient Algorithms for Subgraph Listing

    Directory of Open Access Journals (Sweden)

    Niklas Zechner

    2014-05-01

    Full Text Available Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Gąsieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.
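
    For contrast with the output-sensitive algorithm described above, a naive cubic-time triangle lister is easy to state (an illustrative baseline only, not the algorithm of the paper):

```python
from itertools import combinations

def list_triangles(adj):
    """List all triangles in an undirected graph given as an adjacency dict.
    Simple O(n^3) baseline: test every vertex triple."""
    triangles = []
    nodes = sorted(adj)
    for u, v, w in combinations(nodes, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            triangles.append((u, v, w))
    return triangles

# 4-cycle 0-1-2-3 plus chord 0-2 -> exactly two triangles
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
tris = list_triangles(adj)
```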

  2. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  3. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty, Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean-field approach known as "deterministic annealing", and is reminiscent of the "deterministic Boltzmann machine". The algorithm is less time consuming than its simulated-annealing alternative. We apply the theory to several architectures and compare their performances.

  4. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we treat the problem as a non-linear inverse problem. To avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration, and to improve the overall results we add sparsity constraints.
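
    A minimal sketch of a (linear) Landweber iteration conveys the derivative-light flavor of the approach; the paper's Landweber-Kaczmarz method is a nonlinear, cyclic variant with sparsity constraints, which this toy example omits:

```python
import numpy as np

def landweber(A, y, steps=2000, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k)
    for the linear inverse problem A x = y."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + omega * A.T @ (y - A @ x)
    return x

# overdetermined toy system with an exact solution
A = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])
x_true = np.array([1.0, -2.0])
x_hat = landweber(A, A @ x_true)
```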

  5. When the greedy algorithm fails

    OpenAIRE

    Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders

    2004-01-01

    We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting s...

  6. A* Algorithm for Graphics Processors

    OpenAIRE

    Inam, Rafia; Cederman, Daniel; Tsigas, Philippas

    2010-01-01

    Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...

  7. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
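
    The core idea, approximating a function by straight segments while respecting an error tolerance, can be sketched greedily (the hardware-imposed restrictions discussed in the paper are not modeled here):

```python
import numpy as np

def piecewise_linear_segments(f, x, tol):
    """Greedy approximation of f by straight segments: extend each segment
    as far as the chord stays within tol of f at the sampled points."""
    y = f(x)
    segments = []
    start = 0
    while start < len(x) - 1:
        end = start + 1
        while end + 1 < len(x):
            # chord from x[start] to x[end+1], checked at all covered samples
            xs, xe = x[start], x[end + 1]
            chord = y[start] + (y[end + 1] - y[start]) * (x[start:end + 2] - xs) / (xe - xs)
            if np.max(np.abs(chord - y[start:end + 2])) <= tol:
                end += 1
            else:
                break
        segments.append((x[start], x[end]))
        start = end
    return segments

x = np.linspace(0.0, 1.0, 101)
segs = piecewise_linear_segments(np.square, x, tol=0.01)
```

    For f(x) = x² a chord over width h deviates by at most h²/8, so a 0.01 tolerance yields segments roughly 0.28 wide.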

  8. 3D parallel inversion of time-domain airborne EM data

    Science.gov (United States)

    Liu, Yun-He; Yin, Chang-Chun; Ren, Xiu-Yan; Qiu, Chang-Kai

    2016-12-01

    To improve the inversion accuracy of time-domain airborne electromagnetic data, we propose a parallel 3D inversion algorithm for airborne EM data based on the direct Gauss-Newton optimization. Forward modeling is performed in the frequency domain based on the scattered secondary electrical field. Then, the inverse Fourier transform and convolution of the transmitting waveform are used to calculate the EM responses and the sensitivity matrix in the time domain for arbitrary transmitting waves. To optimize the computational time and memory requirements, we use the EM "footprint" concept to reduce the model size and obtain the sparse sensitivity matrix. To improve the 3D inversion, we use the OpenMP library and parallel computing. We test the proposed 3D parallel inversion code using two synthetic datasets and a field dataset. The time-domain airborne EM inversion results suggest that the proposed algorithm is effective, efficient, and practical.
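
    The Gauss-Newton optimization underlying the inversion can be illustrated on a toy nonlinear least-squares problem (the paper's version adds footprint-based sparsity and parallelism, which this sketch omits):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton iteration for nonlinear least squares:
    solve (J^T J) dx = -J^T r at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# toy model y = a * exp(b * t) fitted to exact synthetic data
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
params = gauss_newton(residual, jacobian, x0=[1.0, -1.0])
```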

  9. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  10. Rotational Invariant Dimensionality Reduction Algorithms.

    Science.gov (United States)

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2017-11-01

    A common intrinsic limitation of the traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the norm as the metric. In this paper, a series of methods based on the -norm are proposed for linear dimensionality reduction. Since the -norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous norm based subspace learning algorithms.

  11. Artificial Flora (AF) Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Long Cheng

    2018-02-01

    Full Text Available Inspired by the process of migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm. This algorithm can be used to solve complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let its offspring find the most suitable environment. The stochastic process is easy to replicate and the spreading space is vast; therefore, it is suitable for use in intelligent optimization algorithms. First, the algorithm randomly generates the original plants, including their positions and propagation distances. Then, the position and propagation distance of each original plant are substituted as parameters into the propagation function to generate offspring plants. Finally, the optimal offspring are selected as new original plants through the selection function, and the previous original plants become former plants. The iteration continues until the optimal solution is found. In this paper, six classical evaluation functions are used as benchmark functions. The simulation results show that the proposed algorithm has high accuracy and stability compared with classical particle swarm optimization and the artificial bee colony algorithm.
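
    A minimal sketch of the flora metaphor: plants spread seeds within a propagation distance, the fittest offspring become the new original plants, and the search radius shrinks over time. The parameter choices and decay rule below are illustrative, not the paper's exact update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def artificial_flora(f, bounds, n_plants=5, n_seeds=10, iters=100):
    """Illustrative flora-style minimizer: seeds scatter around each plant
    within the current propagation distance; the best seeds survive."""
    lo, hi = bounds
    dim = lo.size
    plants = rng.uniform(lo, hi, size=(n_plants, dim))
    spread = (hi - lo) / 2.0                      # initial propagation distance
    best, best_val = None, np.inf
    for _ in range(iters):
        seeds = np.concatenate([
            np.clip(p + rng.normal(0.0, spread, size=(n_seeds, dim)), lo, hi)
            for p in plants
        ])
        vals = np.array([f(s) for s in seeds])
        order = np.argsort(vals)
        plants = seeds[order[:n_plants]]          # survivors become new originals
        if vals[order[0]] < best_val:
            best, best_val = seeds[order[0]], vals[order[0]]
        spread = spread * 0.9                     # shrink the propagation distance
    return best, best_val

sphere = lambda x: float(np.sum(x ** 2))
best, best_val = artificial_flora(sphere, (np.full(2, -5.0), np.full(2, 5.0)))
```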

  12. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed...

  13. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  14. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and its faster convergence increases the value of the method.
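
    A simplified one-dimensional sketch of the golden sine idea follows: agents move with a sine-weighted step relative to the current best, and golden-section coefficients x1 and x2 narrow the region scanned around it. The update is paraphrased from the description above, not taken verbatim from the paper, so coefficient details may differ.

```python
import math
import random

random.seed(1)

def golden_sine(f, lo, hi, n_agents=20, iters=200):
    """Illustrative 1-D golden-sine-style minimizer with an elitist best."""
    tau = (math.sqrt(5.0) - 1.0) / 2.0            # golden ratio conjugate
    x1 = -math.pi + (1.0 - tau) * 2.0 * math.pi   # golden-section coefficients
    x2 = -math.pi + tau * 2.0 * math.pi
    agents = [random.uniform(lo, hi) for _ in range(n_agents)]
    best = min(agents, key=f)
    for _ in range(iters):
        for i, x in enumerate(agents):
            r1 = random.uniform(0.0, 2.0 * math.pi)
            r2 = random.uniform(0.0, math.pi)
            # sine-weighted move toward the golden-section-scaled best
            new = x * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(x1 * best - x2 * x)
            agents[i] = min(max(new, lo), hi)     # clamp to the search interval
        cand = min(agents, key=f)
        if f(cand) < f(best):                     # keep the elitist best
            best = cand
    return best

best = golden_sine(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```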

  15. In Vitro and in Vivo Antitumor Effect of Trachylobane-360, a Diterpene from Xylopia langsdorffiana

    Directory of Open Access Journals (Sweden)

    João Carlos Lima Rodrigues Pita

    2012-08-01

    Full Text Available Trachylobane-360 (ent-7α-acetoxytrachyloban-18-oic acid) was isolated from Xylopia langsdorffiana. Studies have shown that it has weak cytotoxic activity against tumor and non-tumor cells. This study investigated the in vitro and in vivo antitumor effects of trachylobane-360, as well as its cytotoxicity in mouse erythrocytes. In order to evaluate the in vivo toxicological aspects related to trachylobane-360 administration, hematological, biochemical and histopathological analyses of the treated animals were performed. The compound exhibited a concentration-dependent effect in inducing hemolysis, with an HC50 of 273.6 µM, and a moderate in vitro concentration-dependent inhibitory effect on the proliferation of sarcoma 180 cells, with IC50 values of 150.8 µM and 150.4 µM, evaluated by the trypan blue exclusion test and the MTT reduction assay, respectively. The in vivo inhibition rates of sarcoma 180 tumor development were 45.60, 71.99 and 80.06% at doses of 12.5 and 25 mg/kg of trachylobane-360 and 25 mg/kg of 5-FU, respectively. Biochemical parameters were not altered. Leukopenia was observed after 5-FU treatment, but this effect was not seen with trachylobane-360 treatment. The histopathological analysis of liver and kidney showed that both organs were mildly affected by trachylobane-360 treatment. Trachylobane-360 showed no immunosuppressive effect. In conclusion, these data reinforce the anticancer potential of this natural diterpene.

  16. Chemical Composition and Insecticidal Activity Against Sitophilus zeamais of the Essential Oils Derived from Artemisia giraldii and Artemisia subdigitata

    Directory of Open Access Journals (Sweden)

    Zhi-Long Liu

    2012-06-01

    Full Text Available The aim of this research was to determine the chemical composition and insecticidal activity of the essential oils derived from the flowering aerial parts of Artemisia giraldii Pamp. and A. subdigitata Mattf. (Family: Asteraceae) against the maize weevil (Sitophilus zeamais Motsch.). Essential oils of the aerial parts of A. giraldii and A. subdigitata were obtained by hydrodistillation and investigated by GC and GC-MS. A total of 48 and 33 components of the essential oils of A. giraldii and A. subdigitata were identified, respectively. The principal compounds in A. giraldii essential oil were β-pinene (13.18%), iso-elemicin (10.08%), germacrene D (5.68%), 4-terpineol (5.43%) and (Z)-β-ocimene (5.06%). 1,8-Cineole (12.26%) and α-curcumene (10.77%) were the two main components of the essential oil of A. subdigitata, followed by β-pinene (7.38%), borneol (6.23%) and eugenol (5.87%). The essential oils of A. giraldii and A. subdigitata possessed fumigant toxicity against the maize weevils, with LC50 values of 6.29 and 17.01 mg/L air, respectively. The two essential oils also exhibited contact toxicity against S. zeamais adults, with LD50 values of 40.51 and 76.34 µg/adult, respectively. The results indicate that the two essential oils show potential in terms of fumigant and contact toxicity against grain storage insects.

  17. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  18. Mathematical algorithms for approximate reasoning

    Science.gov (United States)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
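
    The effect of the different dependency conditions can be made concrete through the disjunction rules they induce (a small illustrative sketch, not code from the paper):

```python
def or_independent(p_a, p_b):
    """Disjunction of statistically independent assertions:
    P(A or B) = P(A) + P(B) - P(A)P(B)."""
    return p_a + p_b - p_a * p_b

def or_exclusive(p_a, p_b):
    """Disjunction of mutually exclusive assertions:
    P(A or B) = P(A) + P(B)."""
    return p_a + p_b

def or_fuzzy(p_a, p_b):
    """Disjunction under maximum overlap of the assertions (fuzzy logic):
    P(A or B) = max(P(A), P(B))."""
    return max(p_a, p_b)

# the same two assertions combine differently under each dependency condition
results = (or_independent(0.3, 0.4), or_exclusive(0.3, 0.4), or_fuzzy(0.3, 0.4))
```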

  19. Purification, Characterization and Antioxidant Activities in Vitro and in Vivo of the Polysaccharides from Boletus edulis Bull

    Directory of Open Access Journals (Sweden)

    Yijun Fan

    2012-07-01

    Full Text Available A water-soluble polysaccharide (BEBP) was extracted from Boletus edulis Bull using hot water extraction followed by ethanol precipitation. The polysaccharide BEBP was further purified by chromatography on a DEAE-cellulose column, giving three major polysaccharide fractions termed BEBP-1, BEBP-2 and BEBP-3. In the next experiment, the average molecular weight (Mw), IR spectra and monosaccharide composition of the three polysaccharide fractions were determined. The evaluation of antioxidant activities both in vitro and in vivo suggested that BEBP-3 had good potential antioxidant activity, and should be explored as a novel potential antioxidant.

  20. Methyl 2-Benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate

    Directory of Open Access Journals (Sweden)

    Alami Anouar

    2012-09-01

    Full Text Available The heterocyclic carboxylic α-aminoester methyl 2-benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate is obtained by O-alkylation of N-benzoylated methyl α-azido glycinate with 1H-benzimidazol-1-ylmethanol.

  1. On the Presence of Sorex antinorii, Neomys anomalus (Insectivora, Soricidae) and Talpa caeca (Insectivora, Talpidae) in Umbria

    Directory of Open Access Journals (Sweden)

    A.M. Paci

    2003-10-01

    Full Text Available The aim of this contribution is to provide an update on the presence of the Valais shrew Sorex antinorii, Miller's water shrew Neomys anomalus and the blind mole Talpa caeca in Umbria, where these species have been confirmed for some years. To this end, the collected specimens and the known literature were reviewed. Valais shrew: recently raised to species rank by Brünner et al. (2002), otherwise considered a subspecies of the common shrew (S. araneus antinorii). One of three incomplete skulls (mandibles and upper incisors missing) is preserved, at present prudently referred to Sorex cfr. antinorii, originating from the northern Umbria-Marche Apennines (surroundings of Scalocchio - PG, 590 m a.s.l.) and identified on the basis of the red pigmentation of the hypocones of M1 and M2. Miller's water shrew: three skulls (Breda in Paci and Romano op. cit.) and one whole specimen (Paci, unpublished) were found a few kilometers apart between the municipalities of Assisi and Valfabbrica, in mid-hill environments bordering the Monte Subasio Regional Park (Perugia). In the province of Terni the species is reported by Isotti (op. cit.) for the surroundings of Orvieto. Blind mole: a female and a male are known, collected in the municipality of Pietralunga (PG), respectively in a Pinus nigra conifer plantation (630 m a.s.l.) and near a mixed hill woodland dominated by Quercus cerris (640 m a.s.l.). Recently a third individual was found in the municipality of Sigillo (PG), within the Monte Cucco Regional Park, on the edge of a beech wood at 1100 m a.s.l. In both cases the range of the species proved to be parapatric with that of Talpa europaea.

  2. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has significant advantage in terms of speed over the classical computation. It is evident from the early invented quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation, the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Quantum database search of Grover achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin called the GRK algorithm, are also discussed.
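
    Grover's iteration (an oracle phase flip followed by inversion about the mean) is easy to simulate on a statevector, which also confirms the quadratic speed-up numbers: about (π/4)√N iterations suffice to find one marked item among N.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Statevector sketch of Grover's algorithm for a single marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    iterations = int(round(np.pi / 4.0 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1.0                     # oracle: flip marked amplitude
        state = 2.0 * state.mean() - state        # diffusion: inversion about the mean
    # measurement outcome probabilities are the squared amplitudes
    return np.argmax(state ** 2), np.max(state ** 2)

guess, prob = grover_search(6, marked=37)         # N = 64, ~6 iterations
```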

  3. 3D reconstruction of synapses with deep learning based on EM Images

    Science.gov (United States)

    Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei

    2017-03-01

    Recently, due to the rapid development of the electron microscope (EM) and its high resolution, stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and synapses can be analyzed in EM stacks, automated routines for the reconstruction of synapses from EM images can become a very useful tool for analyzing large volumes of brain tissue and providing the ability to understand the mechanisms of the brain. In this article, we propose a novel automated method for 3D reconstruction of synapses from Automated Tape-collecting Ultramicrotome Scanning Electron Microscopy (ATUM-SEM) with deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we utilize a deep learning method together with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method which takes context cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, (5) applying a plugin in FIJI to show the final 3D visualization of synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of our proposed method.

  4. Glycosylation of Vanillin and 8-Nordihydrocapsaicin by Cultured Eucalyptus perriniana Cells

    Directory of Open Access Journals (Sweden)

    Naoji Kubota

    2012-05-01

    Full Text Available Glycosylation of vanilloids such as vanillin and 8-nordihydrocapsaicin by cultured plant cells of Eucalyptus perriniana was studied. Vanillin was converted into vanillin 4-O-β-D-glucopyranoside, vanillyl alcohol, and 4-O-β-D-glucopyranosylvanillyl alcohol by E. perriniana cells. Incubation of cultured E. perriniana cells with 8-nordihydrocapsaicin gave 8-nordihydrocapsaicin 4-O-β-D-glucopyranoside and 8-nordihydrocapsaicin 4-O-β-D-gentiobioside.

  5. Algorithms, complexity, and the sciences.

    Science.gov (United States)

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
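
    The MWU rule itself is compact: each expert's weight is multiplied by an exponential of its negative loss each round, so persistently fitter experts (or alleles) come to dominate the distribution. A minimal sketch:

```python
import numpy as np

def mwu(loss_matrix, eta=0.1):
    """Multiplicative weights update over the rows (experts) of a loss
    matrix: each round, weights are multiplied by exp(-eta * loss),
    then normalized into a probability distribution."""
    n_experts, n_rounds = loss_matrix.shape
    w = np.ones(n_experts)
    for t in range(n_rounds):
        w *= np.exp(-eta * loss_matrix[:, t])
    return w / w.sum()

# expert 0 consistently suffers lower loss, so its weight should dominate
losses = np.array([[0.1] * 50, [0.9] * 50, [0.9] * 50])
p = mwu(losses)
```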

  6. SDR Input Power Estimation Algorithms

    Science.gov (United States)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm based on neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
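
    The linear straight-line estimator can be sketched as an ordinary least-squares fit of input power against AGC reading over the linear sub-range; the calibration numbers below are invented for illustration, not actual SCAN Testbed data.

```python
import numpy as np

# hypothetical calibration: digital AGC counts vs known SDR input power (dBm),
# assumed linear over this sub-range (illustrative numbers only)
agc_counts = np.array([1200.0, 1300.0, 1400.0, 1500.0, 1600.0])
power_dbm = np.array([-90.0, -85.0, -80.0, -75.0, -70.0])

# fit power = slope * counts + intercept by least squares
slope, intercept = np.polyfit(agc_counts, power_dbm, 1)

def estimate_power(counts):
    """Straight-line estimate of SDR input power from a digital AGC reading."""
    return slope * counts + intercept

est = estimate_power(1350.0)
```

    A temperature term, as used on the SCAN Testbed, would simply add a second regressor to the fit.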

  7. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  8. The Study of Address Tree Coding Based on the Maximum Matching Algorithm in Courier Business

    Science.gov (United States)

    Zhou, Shumin; Tang, Bin; Li, Wen

    As an important component of the EMS monitoring system, the address differs from the user name in that it carries great uncertainty, because there are many ways to represent it. Address standardization is therefore a difficult task, which address tree coding has been trying to resolve for many years. The zip code, its most widely used algorithm, can only subdivide an address down to a designated post office, not to the recipient's address, so accurate delivery still requires manual identification. This paper puts forward a new encoding algorithm for the address tree, the maximum matching algorithm, to solve this problem. The algorithm combines the characteristics of the address tree with best-matching theory, and brings in associated layers of tree nodes to improve matching efficiency. Taking the variability of addresses into account, the thesaurus of the address tree should be updated in a timely manner by adding new nodes automatically through intelligent tools.
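
    The layered address tree with maximum (longest-prefix) matching can be sketched as a trie over address components; the place names and codes below are invented for illustration.

```python
class AddressTrie:
    """Sketch of address-tree maximum matching: addresses are stored as
    layered nodes (e.g. province / city / district), and a query resolves
    to the deepest stored code whose path matches the most components."""

    def __init__(self):
        self.root = {}

    def insert(self, components, code):
        node = self.root
        for part in components:
            node = node.setdefault(part, {})
        node["_code"] = code          # code attached at the matched depth

    def longest_match(self, components):
        node, best = self.root, None
        for part in components:
            if part not in node:
                break                 # stop at the first unmatched component
            node = node[part]
            best = node.get("_code", best)
        return best

trie = AddressTrie()
trie.insert(["Jiangxi", "Nanchang", "Donghu"], "JX-NC-DH")
trie.insert(["Jiangxi", "Nanchang"], "JX-NC")
match = trie.longest_match(["Jiangxi", "Nanchang", "Donghu", "Mino Road"])
```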

  9. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirements of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
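The online GP-based change detection idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it predicts each new observation from a sliding window of past values and flags large standardized residuals. The dense `np.linalg.solve` used here is the O(t^3) step that the abstract's Toeplitz (Levinson-type) solver reduces to O(t^2).

```python
import numpy as np

def rbf_kernel(t1, t2, length=3.0, var=1.0):
    # Squared-exponential kernel on scalar time indices.
    return var * np.exp(-0.5 * ((t1 - t2) / length) ** 2)

def gp_predict(train_t, train_y, t_star, noise=1e-2):
    # Standard GP predictive mean/variance.  The dense solve below costs
    # O(t^3); on a uniform grid K is Toeplitz, so a Levinson-type solver
    # could bring this down to O(t^2), as the abstract describes.
    K = rbf_kernel(train_t[:, None], train_t[None, :]) + noise * np.eye(len(train_t))
    k_star = rbf_kernel(train_t, t_star)
    mean = k_star @ np.linalg.solve(K, train_y)
    var = rbf_kernel(t_star, t_star) + noise - k_star @ np.linalg.solve(K, k_star)
    return mean, var

def online_change_scores(y, window=20):
    # Predict each point from the previous `window` points; the
    # standardized residual serves as an online change score.
    scores = []
    for i in range(window, len(y)):
        t = np.arange(i - window, i, dtype=float)
        mean, var = gp_predict(t, y[i - window:i], float(i))
        scores.append(abs(y[i] - mean) / np.sqrt(var))
    return np.array(scores)

rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)
y = np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(60)
y[45:] += 2.0                     # inject an abrupt level shift
scores = online_change_scores(y)
```

On a uniformly sampled series the kernel matrix K depends only on |t1 - t2|, i.e., it is Toeplitz, which is exactly the structure the paper exploits.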

  10. Optimization of Filter by using Support Vector Regression Machine with Cuckoo Search Algorithm

    OpenAIRE

    İlarslan, M.; Demirel, S.; Torpi, H.; Keskin, A. K.; Çağlar, M. F.

    2014-01-01

    Herein, a new methodology using a 3D Electromagnetic (EM) simulator-based Support Vector Regression Machine (SVRM) models of base elements is presented for band-pass filter (BPF) design. SVRM models of elements, which are as fast as analytical equations and as accurate as a 3D EM simulator, are employed in a simple and efficient Cuckoo Search Algorithm (CSA) to optimize an ultra-wideband (UWB) microstrip BPF. CSA performance is verified by comparing it with other Meta-Heuristics such as Genet...

  11. Study of the <em>in Vitro</em> Antiplasmodial, Antileishmanial and Antitrypanosomal Activities of Medicinal Plants from Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Nawal M. Al-Musayeib

    2012-09-01

    Full Text Available The present study investigated the <em>in vitro</em> antiprotozoal activity of sixteen selected medicinal plants. Plant materials were extracted with methanol and screened <em>in vitro</em> against erythrocytic schizonts of <em>Plasmodium falciparum</em>, intracellular amastigotes of <em>Leishmania infantum</em> and <em>Trypanosoma cruzi</em>, and free trypomastigotes of <em>T. brucei</em>. Cytotoxic activity was determined against MRC-5 cells to assess selectivity. The criterion for activity was an IC50 < 10 µg/mL. Antiplasmodial activity was found in the extracts of <em>Prosopis juliflora</em> and <em>Punica granatum</em>. Antileishmanial activity against <em>L. infantum</em> was demonstrated in <em>Caralluma sinaica</em> and <em>Periploca aphylla</em>. Amastigotes of <em>T. cruzi</em> were affected by the methanol extract of <em>Albizia lebbeck</em> pericarp, <em>Caralluma sinaica</em>, <em>Periploca aphylla</em> and <em>Prosopis juliflora</em>. Activity against <em>T. brucei</em> was obtained in <em>Prosopis juliflora</em>. Cytotoxicity (MRC-5 IC50 < 10 µg/mL) and hence non-specific activities were observed for <em>Conocarpus lancifolius</em>.

  12. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A timesharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job occupies the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during timesharing. The algorithm includes an optimal swap-out procedure for job replacement in memory. Sharing of system time in proportion to external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with the algorithm's implementation on the BESM-6 computer at JINR is discussed

  13. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  14. Algorithms and Public Service Media

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with the ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator...... and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a practical level, the introduction of the systems shifts power within the organisations and changes...... the regulatory conditions. In this chapter we analyse two cases - the EBU members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems pose to PSM organisations....

  15. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator Analytical solutions of quantum walks on important graphs like line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms Quantum walks on generic graphs, describing methods to calculate the limiting d...

  16. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such algorithms may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest in [35] proved NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and uniform probability distribution. Cox et al. in [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.

  17. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  18. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  19. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano....... However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower...... bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  20. Retratos em movimento.

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Oliveira Junior

    Full Text Available Abstract: The article addresses aspects of the relationship between cinema and the art of the portrait. We first seek an aesthetic definition of what a cinematographic portrait would be, always in tension with the formal criteria and stylistic patterns that historically constituted the pictorial portrait. We then relate this question to the importance given to the representation of the facial close-up in the first decades of cinema, when films were assigned an unprecedented role in the study of physiognomy and facial expression. Finally, we present examples of self-portraits in painting and cinema to show how self-representation throws into crisis the notions of subjectivity and identity on which the classical definition of the portrait rested.

  1. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  2. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
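The flow simulation this record alludes to alternates two matrix operations: expansion (a matrix power, spreading flow along edges) and inflation (an elementwise power followed by column renormalisation, strengthening strong flows). A compact numpy sketch of the MCL iteration, written from the published description rather than van Dongen's own code:

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    # Markov Cluster sketch: expansion spreads flow along edges;
    # inflation strengthens strong flows and starves weak ones.
    A = adj.astype(float) + np.eye(len(adj))   # add self-loops
    M = A / A.sum(axis=0)                      # column-stochastic matrix
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)
        M = M ** inflation
        M = M / M.sum(axis=0)
    # Attractor rows (rows that keep mass) span the clusters.
    return {frozenset(np.flatnonzero(row > 1e-6)) for row in M if row.max() > 1e-6}

# Two triangles joined by a single bridge edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
clusters_found = mcl(A)
```

On this toy graph the inflation step starves the flow across the bridge, so the two triangles come out as separate clusters.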

  3. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  4. Deterministic algorithms for multi-criteria TSP

    NARCIS (Netherlands)

    Manthey, Bodo; Ogihara, Mitsunori; Tarui, Jun

    2011-01-01

    We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms. First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  5. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  6. AN ALGORITHM FOR THE DESIGN ...

    African Journals Online (AJOL)

    eobe

    focuses on the development of an algorithm for designing an axial flow compressor for a power generation gas turbine, and attempts to bring to the public domain some parameters regarded as.

  7. Big Data Mining: Tools & Algorithms

    Directory of Open Access Journals (Sweden)

    Adeel Shiraz Hashmi

    2016-03-01

    Full Text Available We are now in Big Data era, and there is a growing demand for tools which can process and analyze it. Big data analytics deals with extracting valuable information from that complex data which can’t be handled by traditional data mining tools. This paper surveys the available tools which can handle large volumes of data as well as evolving data streams. The data mining tools and algorithms which can handle big data have also been summarized, and one of the tools has been used for mining of large datasets using distributed algorithms.

  8. CATEGORIES OF COMPUTER SYSTEMS ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. V. Poltavskiy

    2015-01-01

    Full Text Available Philosophy, as a frame of reference on the surrounding world and as the first science, is a fundamental basis - the "roots" (R. Descartes) - of all branches of scientific knowledge accumulated and applied in every field of human activity. The theory of algorithms, one of the fundamental branches of mathematics, is likewise grounded in gnoseological research into human cognition of a true picture of the world. From the positions of gnoseology and ontology, the fundamental branches of philosophy, modern innovative projects are inconceivable without the development of programs and algorithms.

  9. Industrial Applications of Evolutionary Algorithms

    CERN Document Server

    Sanchez, Ernesto; Tonda, Alberto

    2012-01-01

    This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers that are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the

  10. Wavelets theory, algorithms, and applications

    CERN Document Server

    Montefusco, Laura

    2014-01-01

    Wavelets: Theory, Algorithms, and Applications is the fifth volume in the highly respected series, WAVELET ANALYSIS AND ITS APPLICATIONS. This volume shows why wavelet analysis has become a tool of choice in fields ranging from image compression, to signal detection and analysis in electrical engineering and geophysics, to analysis of turbulent or intermittent processes. The 28 papers comprising this volume are organized into seven subject areas: multiresolution analysis, wavelet transforms, tools for time-frequency analysis, wavelets and fractals, numerical methods and algorithms, and applicat

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn, Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient and exact algorithms for solving problems, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students

  13. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    We here study some problems concerned with the computational analysis of finite partially ordered sets. We begin (in § 1) by showing that the matrix representation of a binary relationR may always be taken in triangular form ifR is a partial ordering. We consider (in § 2) the chain structure...... in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set withn elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi...
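The enumeration of maximal chains mentioned in § 2 can be sketched by first computing the covering relation from the order predicate and then extending chains from every minimal element by depth-first search. This is an illustrative reconstruction, not the authors' original procedure:

```python
def maximal_chains(elements, leq):
    # Enumerate all maximal chains of a finite poset.  `leq` is the
    # (reflexive) order predicate.  First build the covering relation,
    # then extend chains from every minimal element by DFS.
    covers = {x: [y for y in elements
                  if x != y and leq(x, y)
                  and not any(z not in (x, y) and leq(x, z) and leq(z, y)
                              for z in elements)]
              for x in elements}
    minimal = [x for x in elements
               if not any(y != x and leq(y, x) for y in elements)]
    chains = []

    def extend(chain):
        succs = covers[chain[-1]]
        if not succs:                # reached a maximal element
            chains.append(chain)
        for y in succs:
            extend(chain + [y])

    for m in minimal:
        extend([m])
    return chains

# Divisibility poset on the divisors of 12: three maximal chains.
chains = maximal_chains([1, 2, 3, 4, 6, 12], lambda a, b: b % a == 0)
```

The combinatorial question answered in the paper - how many maximal chains an n-element poset can have - is about the worst case of exactly this enumeration.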

  14. Deceptiveness and genetic algorithm dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Liepins, G.E. (Oak Ridge National Lab., TN (USA)); Vose, M.D. (Tennessee Univ., Knoxville, TN (USA))

    1990-01-01

    We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.

  15. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...... as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity....

  16. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...... as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity....

  17. Performance Evaluation of A* Algorithms

    OpenAIRE

    Martell, Victor; Sandberg, Aron

    2016-01-01

    Context. There has been a lot of progress made in the field of pathfinding. One of the most used algorithms is A*, which over the years has had many variations. A number of papers have been written about the variations of A* and in what way they specifically improve A*. However, few papers have been written comparing A* with several different variations of A*. Objectives. The objective of this thesis is to find out how Dijkstra's algorithm, IDA*, Theta* and HPA* compare against A* bas...

  18. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics (AO) images, we put forward a deconvolution algorithm, improved by the EM algorithm, that jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, a mathematical model is built for the degraded multiframe AO images. The function model is deduced for the point spread over time based on phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with regularization techniques. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our algorithm reduces the number of iterations by 14.3% and improves the estimation accuracy. The model distinguishes the PSF of the AO images and recovers the observed target images clearly.
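The EM view of image deconvolution that underlies such algorithms is easiest to see in the classic single-frame Richardson-Lucy iteration, shown here in 1-D as an illustrative stand-in for the authors' joint multiframe method:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=200):
    # Richardson-Lucy deconvolution: the EM iteration for deblurring
    # under Poisson noise.  1-D for brevity; the 2-D case is identical
    # with 2-D convolutions.
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a two-spike signal with a Gaussian PSF, then deconvolve it.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
g = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf = g / g.sum()
y = np.convolve(x, psf, mode="same")
rec = richardson_lucy(y, psf)
```

Each iteration multiplies the current estimate by a back-projected ratio of observed to predicted intensities, which is exactly an EM update; the paper's contribution is regularizing this kind of update and coupling it across frames with a shared optimization model.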

  19. An Efficient Uplink Scheduling Algorithm with Variable Grant-Interval for VoIP Service in BWA Systems

    Science.gov (United States)

    Oh, Sung-Min; Cho, Sunghyun; Kim, Jae-Hyun; Kwun, Jonghyung

    This letter proposes an efficient uplink scheduling algorithm for the voice over Internet protocol (VoIP) service with variable frame-duration according to the voice activity in IEEE 802.16e/m systems. The proposed algorithm dynamically changes the grant-interval to save the uplink bandwidth, and it uses the random access scheme when the voice activity changes from silent-period to talk-spurt. Numerical results show that the proposed algorithm can increase the VoIP capacity by 26 percent compared to the conventional extended real-time polling service (ertPS).

  20. Analysis and Improvement of Fireworks Algorithm

    OpenAIRE

    Xi-Guang Li; Shou-Fei Han; Chang-Qing Gong

    2017-01-01

    The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm pursues the goal of further boosting performance and achieving global optimization mainly through the following strategies. Firstly using the opp...

  1. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  2. Natural Products from Antarctic Colonial Ascidians of the Genera <em>Aplidium</em> and <em>Synoicum</em>: Variability and Defensive Role

    Directory of Open Access Journals (Sweden)

    Conxita Avila

    2012-08-01

    Full Text Available Ascidians have developed multiple defensive strategies mostly related to physical, nutritional or chemical properties of the tunic. One such strategy is chemical defense based on secondary metabolites. We analyzed a series of colonial Antarctic ascidians from deep-water collections belonging to the genera <em>Aplidium</em> and <em>Synoicum</em> to evaluate the incidence of organic deterrents and their variability. The ether fractions from 15 samples including specimens of the species <em>A. falklandicum</em>, <em>A. fuegiense</em>, <em>A. meridianum</em>, <em>A. millari</em> and <em>S. adareanum</em> were subjected to feeding assays towards two relevant sympatric predators: the starfish <em>Odontaster validus</em>, and the amphipod <em>Cheirimedon femoratus</em>. All samples revealed repellency. Nonetheless, some colonies concentrated defensive chemicals in internal body-regions rather than in the tunic. Four ascidian-derived meroterpenoids, rossinone B and the three derivatives 2,3-epoxy-rossinone B, 3-epi-rossinone B, 5,6-epoxy-rossinone B, and the indole alkaloids meridianins A–G, along with other minority meridianin compounds, were isolated from several samples. Some purified metabolites were tested in feeding assays exhibiting potent unpalatabilities, thus revealing their role in predation avoidance. Ascidian extracts and purified compound-fractions were further assessed in antibacterial tests against a marine Antarctic bacterium. Only the meridianins showed inhibition activity, demonstrating a multifunctional defensive role. According to their occurrence in nature and within our colonial specimens, the possible origin of both types of metabolites is discussed.

  3. Nietzsche em voga

    OpenAIRE

    Borromeu, Carlos

    2015-01-01

    Abstract: Text published in 1941 in the Catholic-oriented magazine A Ordem, in Rio de Janeiro. Its author considers that Nietzsche denied traditional morality, conceiving in its place another one, albeit immoral and brutal. He finally accuses the philosopher of being responsible for the war then under way in Europe. Abstract: Text published in 1941 in the Catholic orientation magazine, A Ordem, in Rio de Janeiro. The author believes that Nietzsche would have denied traditional morality, conceiving another in it...

  4. Primeiras frases em Libras

    OpenAIRE

    Comissão Editorial

    2017-01-01

    "Primeiras Frases em Libras" é um CD-ROM com interface interativa que tem por objetivo a iniciação na Língua Brasileira de Sinais - Libras. A partir de temas do cotidiano, permite à criança relacionar a imagem a uma estrutura frasal da Libras de forma lúdica, contribuindo para aquisição de conceitos e aspectos culturais. Para a utilização desse material é importante que sejam identificadas as diferenças regionais existentes em alguns sinais e que sejam adaptadas para a Libras local, tornando-...

  5. Identification and Determination of <em>Aconitum</em> Alkaloids in <em>Aconitum</em> Herbs and <em>Xiaohuoluo Pill</em> Using UPLC-ESI-MS

    Directory of Open Access Journals (Sweden)

    Li Yang

    2012-08-01

    Full Text Available A rapid, specific, and sensitive ultra-performance liquid chromatography-electrospray ionization-mass spectrometry (UPLC-ESI-MS) method to examine the chemical differences between <em>Aconitum</em> herbs and processed products has been developed and validated. Combined with chemometrics analysis by principal component analysis (PCA) and orthogonal projection to latent structures discriminant analysis, diester-diterpenoid and monoester-type alkaloids, especially the five alkaloids which contributed to the chemical distinction between <em>Aconitum</em> herbs and processed products, namely mesaconitine (MA), aconitine (AC), hypaconitine (HA), benzoylmesaconitine (BMA), and benzoylhypaconitine (BHA), were picked out. Further, these five alkaloids and benzoylaconitine (BAC) have been simultaneously determined in the <em>Xiaohuoluo pill</em>. Chromatographic separations were achieved on a C18 column and peaks were detected by mass spectrometry in positive ion mode and selected ion recording (SIR) mode. In quantitative analysis, the six alkaloids showed good regression (<em>r</em> > 0.9984) within the test ranges. The lower limits of quantification (LLOQs) for MA, AC, HA, BMA, BAC, and BHA were 1.41, 1.20, 1.92, 4.28, 1.99 and 2.02 ng·mL−1, respectively. Recoveries ranged from 99.7% to 101.7%. The validated method was applied successfully in the analysis of the six alkaloids from different samples, in which significant variations were revealed. Results indicated that the developed assay can be used as an appropriate quality control assay for <em>Xiaohuoluo pill</em> and other herbal preparations containing <em>Aconitum</em> roots.

  6. Some Practical Payments Clearance Algorithms

    Science.gov (United States)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of these transfers, known as payment clearance, can produce significant savings in the costs associated with making and handling them. The paper reviews some common practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic totals' distribution.
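Payment clearance of this kind can be sketched as computing each company's net position on the invoice graph and then settling only the net balances. The greedy matching below is an illustrative approach, not one of the algorithms proposed in the paper:

```python
from collections import defaultdict

def net_positions(invoices):
    """Net balance per company: positive = net receiver, negative = net payer."""
    balance = defaultdict(float)
    for debtor, creditor, amount in invoices:
        balance[debtor] -= amount
        balance[creditor] += amount
    return dict(balance)

def settle(invoices):
    """Greedy settlement: repeatedly match the largest payer with the largest receiver."""
    balance = net_positions(invoices)
    payers = sorted((b, c) for c, b in balance.items() if b < 0)
    receivers = sorted(((b, c) for c, b in balance.items() if b > 0), reverse=True)
    transfers = []
    i = j = 0
    while i < len(payers) and j < len(receivers):
        owed, payer = payers[i]
        due, receiver = receivers[j]
        amount = min(-owed, due)
        transfers.append((payer, receiver, amount))
        payers[i] = (owed + amount, payer)
        receivers[j] = (due - amount, receiver)
        if abs(payers[i][0]) < 1e-9:
            i += 1
        if abs(receivers[j][0]) < 1e-9:
            j += 1
    return transfers
```

On a 3-company cycle of invoices, settlement reduces three transfers to two.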

  7. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  8. Hill climbing algorithms and trivium

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2011-01-01

    This paper proposes a new method to solve certain classes of systems of multivariate equations over the binary field and its cryptanalytical applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations...

  9. Understanding Algorithms in Different Presentations

    Science.gov (United States)

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  10. Template Generation and Selection Algorithms

    NARCIS (Netherlands)

    Guo, Y.; Smit, Gerardus Johannes Maria; Broersma, Haitze J.; Heysters, P.M.; Badaway, W.; Ismail, Y.

    The availability of high-level design entry tooling is crucial for the viability of any reconfigurable SoC architecture. This paper presents a template generation method to extract functional equivalent structures, i.e. templates, from a control data flow graph. By inspecting the graph the algorithm

  11. Document Organization Using Kohonen's Algorithm.

    Science.gov (United States)

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  12. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335–1353; Mach. Learn. 66 (2007) 209–242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of the margin-condition parameter and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.
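A stripped-down version of the idea, using a piecewise-constant (undecorated) adaptive dyadic tree on [0, 1) with majority-vote leaves, can be sketched as follows. This is illustrative only; the paper's decorated trees fit higher-order polynomials in each cell:

```python
def grow_tree(points, lo=0.0, hi=1.0, depth=0, max_depth=6, min_pts=4):
    """points: list of (x, label) with x in [lo, hi); returns a nested dict tree.
    A cell is split at its midpoint unless it is pure, too deep, or too small."""
    labels = [y for _, y in points]
    majority = max(set(labels), key=labels.count) if labels else 0
    if len(set(labels)) <= 1 or depth == max_depth or len(points) < min_pts:
        return {"leaf": True, "label": majority}
    mid = (lo + hi) / 2.0
    left = [(x, y) for x, y in points if x < mid]
    right = [(x, y) for x, y in points if x >= mid]
    return {"leaf": False, "mid": mid,
            "left": grow_tree(left, lo, mid, depth + 1, max_depth, min_pts),
            "right": grow_tree(right, mid, hi, depth + 1, max_depth, min_pts)}

def classify(tree, x):
    """Walk down the partition tree and return the leaf's majority label."""
    while not tree["leaf"]:
        tree = tree["left"] if x < tree["mid"] else tree["right"]
    return tree["label"]
```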

  13. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    2012-11-15

    from electrons, muons and hadronic jets. These algorithms enable extended reach for the searches for MSSM Higgs, Z and other exotic particles. Keywords: CMS; tau; LHC; ECAL; HCAL. PACS No. 13.35.Dx. Introduction: Tau is the heaviest known lepton (Mτ = 1.78 GeV) which decays into lighter leptons.

  14. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip

    2017-06-23

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
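The pairwise randomized gossip primitive underlying such methods is simple: repeatedly pick a random edge and replace both endpoint values with their average, which preserves the global sum and drives all nodes to the mean. A minimal sketch, without the privacy-protection mechanisms of the paper:

```python
import random

def randomized_gossip(values, edges, steps=5000, seed=0):
    """Pairwise randomized gossip for average consensus: at each step a random
    edge (i, j) is chosen and both endpoints are set to their average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x
```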

  15. Associative Algorithms for Computational Creativity

    Science.gov (United States)

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  16. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  17. Algorithms and Public Service Media

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    the regulatory conditions. In this chapter we analyse two cases - the EBU-members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems poses to PSM organisations....

  18. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Monroe, Scott; Cai, Li

    2014-01-01

    In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…

  19. The TROPOMI surface UV algorithm

    Science.gov (United States)

    Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna

    2018-02-01

    The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.
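The weighted quantities and daily doses listed above amount to spectral weighting plus time integration. The following sketch uses hypothetical weights and grids, not the actual weighting tables of the TROPOMI algorithm:

```python
def weighted_irradiance(spectrum, weights):
    """Spectrally weighted irradiance: sum of E(lambda) * w(lambda) * d_lambda.
    `spectrum` and `weights` map wavelength (nm) to irradiance / weight;
    wavelengths are assumed evenly spaced."""
    wl = sorted(spectrum)
    d_lambda = wl[1] - wl[0]
    return sum(spectrum[l] * weights.get(l, 0.0) * d_lambda for l in wl)

def daily_dose(times_h, dose_rates):
    """Daily dose (J/m^2) by trapezoidal integration of dose rate (W/m^2)
    over time given in hours."""
    dose = 0.0
    for t0, t1, r0, r1 in zip(times_h, times_h[1:], dose_rates, dose_rates[1:]):
        dose += 0.5 * (r0 + r1) * (t1 - t0) * 3600.0
    return dose
```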

  20. The TROPOMI surface UV algorithm

    Directory of Open Access Journals (Sweden)

    A. V. Lindfors

    2018-02-01

    Full Text Available The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  1. Optimization of Filter by using Support Vector Regression Machine with Cuckoo Search Algorithm

    Directory of Open Access Journals (Sweden)

    M. İlarslan

    2014-09-01

    Full Text Available Herein, a new methodology using 3D electromagnetic (EM) simulator-based Support Vector Regression Machine (SVRM) models of base elements is presented for band-pass filter (BPF) design. SVRM models of elements, which are as fast as analytical equations and as accurate as a 3D EM simulator, are employed in a simple and efficient Cuckoo Search Algorithm (CSA) to optimize an ultra-wideband (UWB) microstrip BPF. CSA performance is verified by comparing it with other meta-heuristics such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). As an example of the proposed design methodology, a UWB BPF that operates between 3.1 GHz and 10.6 GHz is designed, fabricated and measured. The simulation and measurement results indicate the superior performance of this optimization methodology in terms of improved filter response characteristics such as return loss, insertion loss, harmonic suppression and group delay.
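A simplified Cuckoo Search, with plain Gaussian random walks standing in for Lévy flights and greedy per-nest replacement, can be sketched as follows. This is an illustration of the general scheme under those stated simplifications, not the authors' implementation:

```python
import random

def cuckoo_search(f, dim, bounds, n_nests=15, iters=200, pa=0.25, seed=1):
    """Minimize f over a box. Gaussian steps replace Levy flights (assumption),
    a new egg greedily replaces its own nest if fitter, and a fraction pa of
    the worst nests is abandoned and rebuilt randomly each generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    alpha = (hi - lo) / 20.0  # heuristic initial walk length
    for _ in range(iters):
        for i in range(n_nests):
            cand = [clip(x + alpha * rng.gauss(0, 1)) for x in nests[i]]
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for i in worst[:int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
        alpha *= 0.98  # decay the walk length over time
    b = min(range(n_nests), key=fit.__getitem__)
    return nests[b], fit[b]
```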

  2. Comparative analysis of distributed power control algorithms in CDMA

    OpenAIRE

    Abdulhamid, Mohanad F.

    2017-01-01

    This paper presents comparative analysis of various algorithms of distributed power control used in Code Division Multiple Access (CDMA) systems. These algorithms include Distributed Balancing power control algorithm (DB), Modified Distributed Balancing power control algorithm (MDB), Fully Distributed Power Control algorithm (FDPC), Distributed Power Control algorithm (DPC), Distributed Constrained Power Control algorithm (DCPC), Unconstrained Second-Order Power Control algorithm (USOPC), Con...

  3. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    Science.gov (United States)

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

    The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability during the E-step of the EM algorithm with a mean filter. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from standard databases. Quantitative and qualitative results show that the method outperforms the conventional methods by around 25% and the state-of-the-art method by 8%.
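The spatial regularization described above, smoothing the E-step posteriors with a mean filter before the M-step, can be sketched for a 1-D Gaussian mixture as follows. This is an illustrative re-implementation of the idea, not the authors' code:

```python
import math, random

def em_gmm_spatial(x, k=2, iters=60, smooth=True):
    """EM for a 1-D Gaussian mixture over a 1-D 'image'. After each E-step the
    responsibility maps are passed through a 3-point mean filter, mimicking
    the spatial regularization of the method (a sketch, not the exact scheme)."""
    n = len(x)
    mu = [min(x) + i * (max(x) - min(x)) / (k - 1) for i in range(k)]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior probability of each class at each pixel
        g = []
        for xi in x:
            p = [w[j] * math.exp(-(xi - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p) or 1e-300
            g.append([pj / s for pj in p])
        if smooth:
            # convolve each responsibility map with a 3-point mean filter
            gs = []
            for i in range(n):
                nb = [g[max(0, i - 1)], g[i], g[min(n - 1, i + 1)]]
                row = [sum(r[j] for r in nb) / 3.0 for j in range(k)]
                t = sum(row) or 1e-300
                gs.append([v / t for v in row])
            g = gs
        # M-step: update weights, means and variances (variance floored)
        for j in range(k):
            nj = sum(g[i][j] for i in range(n)) or 1e-12
            w[j] = nj / n
            mu[j] = sum(g[i][j] * x[i] for i in range(n)) / nj
            var[j] = max(sum(g[i][j] * (x[i] - mu[j]) ** 2 for i in range(n)) / nj, 1e-3)
    return mu, var, w
```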

  4. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
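The opposition-based learning ingredient is small: for each candidate x in [lo, hi], also evaluate its opposite lo + hi − x and keep the fitter of the pair. A sketch of OBL population initialization; the full OAFWA applies the same idea inside the fireworks loop:

```python
import random

def opposition_init(f, n, dim, lo, hi, seed=7):
    """Opposition-based initialization: for each random candidate x, also
    evaluate its opposite x_opp = lo + hi - x and keep whichever minimizes f."""
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        x_opp = [lo + hi - v for v in x]
        pop.append(min((x, x_opp), key=f))
    return pop
```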

  5. Fused Entropy Algorithm in Optical Computed Tomography

    Directory of Open Access Journals (Sweden)

    Xiong Wan

    2014-02-01

    Full Text Available In most applications of optical computed tomography (OpCT), limited-view problems are often encountered, which can be solved to a certain extent with typical OpCT reconstructive algorithms. The concept of entropy, which first emerged in information theory, has been introduced into OpCT algorithms, such as maximum entropy (ME) algorithms and cross entropy (CE) algorithms, which have demonstrated their superiority over traditional OpCT algorithms, yet have their own limitations. A fused entropy (FE) algorithm, which follows an optimized criterion combining ME and CE self-adaptively, is proposed and investigated by comparisons with ME, CE and some traditional OpCT algorithms. Reconstructed results of several physical models show this FE algorithm has good convergence and can achieve better precision than other algorithms, which verifies the feasibility of FE as an approach to optimizing computation, not only for OpCT, but also for other image processing applications.

  6. Spondyloptosis in an athlete

    Directory of Open Access Journals (Sweden)

    Ana Paula Luppino Assad

    2014-06-01

    Full Text Available Adolescent athletes are at increased risk of low back pain and structural spine injuries. Spondylolysis accounts for most cases of low back pain in young athletes and rarely occurs in adults. We report the case of a 13-year-old female judo athlete who arrived at our service with five months of progressive low back pain during training, initially attributed to mechanical causes without a more detailed imaging investigation. On admission she already showed lumbar deformity, antalgic posture, and a bilaterally positive single-leg lumbar hyperextension maneuver. Work-up revealed spondyloptosis, and she underwent surgical treatment. Based on this case report, we discuss the diagnostic approach to low back pain in young athletes, since chronic low back pain may be a marker of a structural lesion, which can be definitive and cause irreversible functional loss.

  7. Accountability in open lists

    Directory of Open Access Journals (Sweden)

    Luis Felipe Miguel

    2010-10-01

    Full Text Available The article critically discusses the perception, common in studies of the Brazilian electoral system, that proportional representation with open lists is an obstacle to effective accountability. This perception is largely based on mistaken views about the nature of the electoral bond in Brazil and about the meaning of accountability, seen as a relationship between voters and their candidate rather than between constituents and representatives. The focus on the shortcomings of the electoral system, moreover, obscures other aspects more important for improving representation, related to the democratization of information and the strengthening of civil society. Even the problems identified in open lists are better addressed by broadening public debate and strengthening civil society, which would allow voters to make more consistent use of the choices offered to them, wider than in other electoral systems.

  8. Linear Bregman algorithm implemented in parallel GPU

    Science.gov (United States)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are therefore difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
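The structure that makes the method GPU-friendly is visible even in a scalar sketch of the linearized Bregman iteration for sparse recovery: only matrix-vector products plus a soft-threshold. A toy CPU version, with assumed step size delta and threshold mu:

```python
def shrink(v, mu):
    """Soft-thresholding, the only nonlinear operation in the iteration."""
    return [max(abs(x) - mu, 0.0) * (1 if x > 0 else -1) for x in v]

def linearized_bregman(A, b, mu=1.0, delta=0.3, iters=2000):
    """Linearized Bregman iteration for sparse recovery from b = A u:
        v <- v + A^T (b - A u);   u <- delta * shrink(v, mu)
    Each iteration is two matrix-vector products and a thresholding pass,
    which is why the method maps so well onto a GPU."""
    m, n = len(A), len(A[0])
    u = [0.0] * n
    v = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * u[j] for j in range(n)) for i in range(m)]
        v = [v[j] + sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        u = [delta * x for x in shrink(v, mu)]
    return u
```

For stability, delta must be small relative to the spectrum of A·Aᵀ (here delta = 0.3 suits the tiny test matrix).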

  9. Fumigant Antifungal Activity of Myrtaceae Essential Oils and Constituents from Leptospermum petersonii against Three Aspergillus Species

    Directory of Open Access Journals (Sweden)

    Il-Kwon Park

    2012-09-01

    Full Text Available Commercial plant essential oils obtained from 11 Myrtaceae plant species were tested for their fumigant antifungal activity against Aspergillus ochraceus, A. flavus, and A. niger. Essential oils extracted from Leptospermum petersonii at air concentrations of 56 × 10−3 mg/mL and 28 × 10−3 mg/mL completely inhibited the growth of the three Aspergillus species. However, at an air concentration of 14 × 10−3 mg/mL, inhibition rates of L. petersonii essential oils were reduced to 20.2% and 18.8% in the case of A. flavus and A. niger, respectively. The other Myrtaceae essential oils (56 × 10−3 mg/mL) only weakly inhibited the fungi or had no detectable effect. Gas chromatography-mass spectrometry analysis identified 16 compounds in L. petersonii essential oil. The antifungal activity of the identified compounds was tested individually by using standard or synthesized compounds. Of these, neral and geranial inhibited growth by 100% at an air concentration of 56 × 10−3 mg/mL, whereas the activity of citronellol was somewhat lower (80%). The other compounds exhibited only moderate or weak antifungal activity. The antifungal activities of blends of constituents identified in L. petersonii oil indicated that neral and geranial were the major contributors to the fumigant and antifungal activities.

  10. Algorithms

    Indian Academy of Sciences (India)

    immediate successor as well as the immediate predecessor explicitly. Such a list is referred to as a doubly linked list. A typical doubly linked list is shown in Figure 3f. The ability to get to either the successor or predecessor not only makes access easy but also enables one to backtrack in a search. Two Dimensional Arrays: It ...

  11. Algorithms

    Indian Academy of Sciences (India)

    Table 2: Merging two sorted arrays. procedure MERGE_TWO_ARRAYS(A[1,p], B[1,q], C[1,p+q]: integer); (* A[1,p] and B[1,q] are the sorted arrays to be merged and placed in array C. *) (* Note that array C will be of length p+q; in the program we use parameters p and q explicitly. *) var i, j, k: integer; begin.
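The fragment above is the classic merge of two sorted arrays into an array of length p + q; in modern notation:

```python
def merge_two_arrays(a, b):
    """Merge two already-sorted lists into one sorted list of length p + q."""
    c, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            c.append(a[i]); i += 1
        else:
            c.append(b[j]); j += 1
    c.extend(a[i:])  # at most one of these two tails is non-empty
    c.extend(b[j:])
    return c
```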

  12. Algorithms

    Indian Academy of Sciences (India)

    like programming language. Recursion. One of the usual techniques of problem solving is to break the problem into smaller problems. From the solution of these smaller problems, one obtains a solution for the original problem. Consider the procedural abstraction described above. It is possible to visualize the given ...

  13. Algorithms

    Indian Academy of Sciences (India)

    guesses for the technique discussed above. The method described above for computing the approximate square root is referred to as Newton's method for finding √a, after the famous English mathematician Isaac Newton. In Table 5, we have essentially solved the nonlinear equation. RESONANCE, March 1996.
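The square-root iteration the fragment refers to is x ← (x + a/x)/2, which is Newton's method applied to f(x) = x² − a:

```python
def newton_sqrt(a, x0=1.0, tol=1e-12):
    """Newton's method for f(x) = x^2 - a: iterate x <- (x + a/x) / 2."""
    x = x0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
    return x
```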

  14. Algorithms

    Indian Academy of Sciences (India)

    In the previous article of this series, we looked at simple data types and their representation in computer memory. The notion of a simple data type can be extended to denote a set of elements corresponding to one data item at a higher level. The process of structuring or grouping of the basic data elements is often referred ...

  15. Algorithms

    Indian Academy of Sciences (India)

    var A: array [1..N, 1..M] of integer; The above declaration denotes that A is an array having N rows and M columns. Applications for arrays are innumerable; the simplest being the classical multiplication table. A table can also be used to store hostel room numbers and codes of the persons staying in the respective rooms.

  16. Algorithms

    Indian Academy of Sciences (India)

    It must be noted that if the input assertion is not satisfied at this point, then any output assertion holds due to the classical implication operator. ..... on our intuitive knowledge about the underlying theory. The above processes can be formalised in a logical framework without relying on the intuitive deductions we have used.

  17. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in the search for more efficient sorting algorithms. For this purpose many existing sorting algorithms were examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques of algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one well-known algorithm that makes sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results are promising.

  18. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  19. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Science.gov (United States)

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information(AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
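The Newton-type update at the heart of these methods, r ← r − U(r)/U′(r) on the score U, can be illustrated on a one-parameter likelihood. The toy below fits an exponential rate by Newton-Raphson; it stands in for, and is much simpler than, the REML systems of the paper. Note the starting value must lie in the basin (0, 2·MLE):

```python
def newton_mle_exponential(data, rate0=1.0, iters=25):
    """Newton-Raphson on the score of an exponential log-likelihood:
        U(r) = n/r - sum(x),   U'(r) = -n/r^2,   r <- r - U(r)/U'(r).
    The analytical MLE is n / sum(x), so convergence is easy to check."""
    n, s = len(data), sum(data)
    r = rate0
    for _ in range(iters):
        score = n / r - s
        hess = -n / (r * r)
        r = r - score / hess
    return r
```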

  20. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.
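The CCF/CBUI blend can be sketched as a weighted combination of the two score sets with a cold-start fallback. The function name, weight alpha and history threshold below are hypothetical; the paper's Spark implementation is far richer:

```python
def hybrid_score(cf_scores, cb_scores, user_history_len, alpha=0.7, min_history=3):
    """Blend collaborative-filtering (CF) and content-based (CB) scores.
    Users with too little history fall back to pure CB scores (cold start);
    otherwise scores are mixed item-wise with weight alpha on CF."""
    if user_history_len < min_history:
        return dict(cb_scores)  # cold start: CF has nothing reliable to say
    items = set(cf_scores) | set(cb_scores)
    return {it: alpha * cf_scores.get(it, 0.0) + (1 - alpha) * cb_scores.get(it, 0.0)
            for it in items}
```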

  1. A comparison between two global optimization algorithms (genetic and differential evolution) to calculate the reflection coefficients in fractured media; Uma comparacao entre dois algoritmos de otimizacao global (algoritmo genetico e evolucao diferencial) para inversao de coeficientes de reflexao em meios fraturados

    Energy Technology Data Exchange (ETDEWEB)

    Vanzeler, Francisco Joclean Alves

    1999-06-01

    In this work, we extract the elastic stiffnesses and mass density from multi-azimuthal qP-wave reflection coefficients at an interface separating two anisotropic media with monoclinic symmetry, with at least one of the planes of symmetry parallel to the interface. This objective was reached by forward and inverse modeling. We calculate the qP-wave reflection for three models (I, II, III) of anisotropic equivalent media: an isotropic medium above a TIH medium; a TIV medium above a TIH medium; and an orthorhombic medium above a TIH medium. The TIH medium is equivalent to an isotropic fractured medium with equivalent elastic stiffnesses and mass density calculated by the Hudson formulation. The reflection coefficients were used in their exact form and were generated for models I, II and III at multiple azimuthal/incidence angles and contaminated with Gaussian noise. In the inverse modeling we work with the GA and DE algorithms to calculate the inversion parameters (five elastic stiffnesses and the mass density of the bottom medium, plus Vs of the upper isotropic medium) by minimizing the l2 norm of the difference between the true and synthetic reflection coefficients. We assume that the parameters of the upper media of the three models are known, except Vs for model I in a special case of inversion of the upper medium. The parameters to be determined by inverse modeling are parametrized in a model space whose values accord with the observed propagation velocities of elastic waves in the Earth's crust, the resolution of the measurements, and the elastic-stability constraints of solid media. The GA and DE algorithms achieved good inversions for the models with at least three azimuthal angles (0°, 45° and 90°) and incidence angles of 34° for model I, and of 50° (inverted only by GA) for models II and III; the special case handled by DE needed at least 44° to invert model I with the Vs of the upper medium. From these results we can see the potential to determine from q
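The DE optimizer used in such inversions follows the classic DE/rand/1/bin scheme: mutant v = a + F·(b − c), binomial crossover with rate CR, then greedy selection. A minimal sketch on a toy objective, not the reflection-coefficient misfit of the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=120, seed=3):
    """Classic DE/rand/1/bin minimization of f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct members other than i for the mutant
            a, b, c = rng.sample([x for k, x in enumerate(pop) if k != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = a[j] + F * (b[j] - c[j])
                    lo, hi = bounds[j]
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```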

  2. Aplicação do algoritmo S-SEBI na obtenção da evapotranspiração diária em condições áridas Application of the S-SEBI algorithm to obtain the daily evapotranspiration in arid conditions

    Directory of Open Access Journals (Sweden)

    Carlos Antonio Costa dos Santos

    2010-09-01

    Full Text Available The main objective of this study was to determine the daily actual evapotranspiration (ETr) of the tamarisk vegetation through micrometeorological and remote sensing techniques, and to validate the ETr results obtained by remote sensing. Micrometeorological data provided by the Bowen ratio method, together with the S-SEBI algorithm applied to TM Landsat-5 images, were used to obtain the daily ETr of the tamarisk vegetation in the Lower Colorado River, CA/USA. For obtaining the reference evapotranspiration (ET0), weather station data were used and the FAO/Penman-Monteith method was applied. It was observed that the ETr estimates by the S-SEBI algorithm are similar to the values measured by the Bowen ratio method, and that the identification of vegetation dynamics through the spatial distribution of evapotranspiration demonstrates the applicability of the method for obtaining daily actual evapotranspiration.

  3. Iodo em alimentos consumidos em Portugal

    OpenAIRE

    Coelho, Inês; Delgado, Inês; Costa, Sofia; Castanheira, Isabel; Calhau, Maria Antónia

    2015-01-01

    Iodine is an essential trace element in the human and animal diet, with a well-established nutritional importance. It is indispensable for the synthesis of the thyroid hormones thyroxine and triiodothyronine, whose main role is related to the growth and development of organs, in particular the brain. The natural source of iodine is food. However, according to the WHO, one third of the world's population suffers from some form of iodine deficiency. Chronic iodine deficiency can lead to...

  4. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multipole Method (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  5. MUSIC algorithms for rebar detection

    International Nuclear Information System (INIS)

    Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela

    2013-01-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios. (paper)
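As a concrete illustration of the subspace idea behind MUSIC, here is a minimal sketch for direction-of-arrival estimation on a uniform linear array. The array geometry, two-source scene, and noise level are assumptions chosen for the sketch, not the paper's rebar-detection setup; the two-stage refinement for weak scatterers is not reproduced.

```python
import numpy as np

# Minimal MUSIC sketch: estimate two source directions from array
# snapshots by scanning a steering vector against the noise subspace.
rng = np.random.default_rng(1)
M, d, K = 8, 0.5, 2                       # sensors, spacing (wavelengths), sources
true_doas = np.array([-20.0, 30.0])       # degrees (illustrative)

def steering(theta_deg):
    theta = np.radians(np.atleast_1d(theta_deg))
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

T = 200                                   # snapshots
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = steering(true_doas) @ S + N
R = X @ X.conj().T / T                    # sample covariance

w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :M - K]                         # noise subspace (K assumed known)

grid = np.arange(-90.0, 90.0, 0.1)
P = 1.0 / (np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2)

# the pseudospectrum peaks at the source directions
peaks = [i for i in range(1, len(grid) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = sorted(grid[i] for i in sorted(peaks, key=lambda i: P[i], reverse=True)[:K])
print(est)    # close to [-20.0, 30.0]
```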

  6. A fast meteor detection algorithm

    Science.gov (United States)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
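The Maximum Temporal Pixel compression mentioned above can be sketched in a few lines: a block of frames is collapsed to one image holding each pixel's temporal maximum (plus the frame index where it occurred), so a single thresholding pass can feed the detector. The frame sizes and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# MTP sketch: collapse a stack of video frames to a per-pixel maximum
# image, then threshold once to find bright-streak candidates.
rng = np.random.default_rng(2)
frames = rng.integers(0, 30, size=(16, 8, 8))   # 16 frames of 8x8 noise
frames[5, 3, 2:6] = 255                          # simulated meteor streak in frame 5

mtp = frames.max(axis=0)                 # per-pixel temporal maximum
mtp_index = frames.argmax(axis=0)        # frame at which each maximum occurred

candidates = np.argwhere(mtp > 200)      # one fast threshold on the compressed image
print(candidates)                        # pixels of the simulated streak
```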

  7. An NOy* Algorithm for SOLVE

    Science.gov (United States)

    Loewenstein, M.; Greenblatt, B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analysis of these events requires a knowledge of the initial or pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we will attempt to establish the current unperturbed NOy:N2O relationship (NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from the observations in the ASHOE/MAESA campaign.

  8. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  9. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

  10. Algorithms for Lightweight Key Exchange.

    Science.gov (United States)

    Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio

    2017-06-27

    Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determining those that are better suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
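A micro-benchmark in the spirit of the abstract can be sketched with classic finite-field Diffie-Hellman, timing full exchanges with the standard library. The modulus below is a toy 127-bit Mersenne prime chosen for illustration only; it is far too small for real security, and the paper's actual candidate algorithms are not reproduced.

```python
import secrets
import time

# Toy Diffie-Hellman exchange benchmark (illustrative parameters only).
p = (1 << 127) - 1          # Mersenne prime 2^127 - 1: a toy modulus
g = 3

def dh_exchange():
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)        # public values
    k1, k2 = pow(B, a, p), pow(A, b, p)      # shared secret, both sides
    assert k1 == k2                          # both parties agree
    return k1

t0 = time.perf_counter()
for _ in range(100):
    dh_exchange()
elapsed = time.perf_counter() - t0
print(f"100 exchanges: {elapsed:.3f}s")
```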

  11. Innovations in Lattice QCD Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.

  12. MUSIC algorithms for rebar detection

    Science.gov (United States)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.

  13. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    The case based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process for engineers within the CBR cycle. Its difficulties can be attributed to its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation. It can therefore decrease the computational complexity of the search space in problems having a great dependency on their domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains; here it has been implemented for tablet formulation as its domain of application. The proposed system has improved the performance of CBR design systems.

  14. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

    The problem of predicting the three-dimensional structure of a protein given its amino acid sequence is one of the most important open problems in bioinformatics. One of the carbon atoms in amino acids is the C-atom, and the overall structure of a protein is often represented by a so-called C... ...and contact number (CN) measures only. We show that the HSE measure is much more information-rich than CN; nevertheless, HSE does not appear to provide enough information to reconstruct the C-traces of real-sized proteins. Our experiments also show that using tabu search (with our novel tabu definition)... ...is competitive in quality and speed with other state-of-the-art decoy generation algorithms. Our third C-trace reconstruction approach is based on bee-colony optimization [24]. We demonstrate why this algorithm has some important properties that make it suitable for protein structure prediction. Our approach...

  15. A branch-and-cut SDP-based algorithm for minimum sum-of-squares clustering

    Directory of Open Access Journals (Sweden)

    Daniel Aloise

    2009-12-01

    Full Text Available Minimum sum-of-squares clustering (MSSC) consists in partitioning a given set of n points into k clusters in order to minimize the sum of squared distances from the points to the centroid of their cluster. Recently, Peng & Xia (2005) established the equivalence between 0-1 semidefinite programming (SDP) and MSSC. In this paper, we propose a branch-and-cut algorithm for the underlying 0-1 SDP model. The algorithm obtains exact solutions for fairly large data sets with computing times comparable with those of the best exact method found in the literature.
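The MSSC objective defined above can be made concrete on a toy instance: for each candidate partition, sum the squared distances from points to their cluster centroid, and search for the minimum. The brute-force search below is only an illustration of the objective; the paper's exact method is a branch-and-cut over a 0-1 SDP model, not reproduced here.

```python
import numpy as np
from itertools import product

# Toy MSSC: 4 points, k = 2, exhaustive search over all assignments.
points = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
k = 2

def mssc_cost(assign):
    # sum of squared distances to each cluster's centroid
    cost = 0.0
    for c in range(k):
        members = points[np.array(assign) == c]
        if len(members):
            cost += ((members - members.mean(axis=0)) ** 2).sum()
    return cost

best = min(product(range(k), repeat=len(points)), key=mssc_cost)
print(best, mssc_cost(best))   # optimal partition {0,1} vs {2,3}, cost 1.0
```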

  16. Computed laminography and reconstruction algorithm

    International Nuclear Information System (INIS)

    Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL, and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system. (authors)
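The basic (unweighted) ART iteration referenced above is the Kaczmarz method: cycle through the rows of the system Ax = b, projecting the current estimate onto each row's hyperplane. The 3x3 system below is a toy stand-in; the paper's CL geometry and weighting variants are not reproduced.

```python
import numpy as np

# Kaczmarz / ART sketch on a small consistent linear system.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true

x = np.zeros(3)
for sweep in range(200):
    for i in range(len(b)):
        a_i = A[i]
        # project onto the hyperplane a_i . x = b[i]
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i

print(x)   # converges to x_true for this consistent system
```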

  17. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  18. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest... ...an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
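The list-ranking primitive highlighted above can be sketched by pointer jumping: each element stores a pointer and the distance it covers, and each round doubles the jumps, so a parallel machine finishes in O(log n) rounds. This is a sequential simulation of those rounds; the PEM cache model itself is not reproduced.

```python
# Pointer-jumping sketch of list ranking.
def list_rank(succ):
    """succ[i] = next element; the tail points to itself.
    Returns each element's distance to the tail."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # invariant: rank[i] is the distance from i to nxt[i]
    while any(nxt[nxt[i]] != nxt[i] for i in range(n)):
        for i in range(n):              # conceptually one parallel round
            rank[i] += rank[nxt[i]]
            nxt[i] = nxt[nxt[i]]        # jump twice as far
    return rank

# list 0 -> 3 -> 1 -> 4 -> 2 (tail)
print(list_rank([3, 4, 2, 1, 2]))   # [4, 2, 0, 3, 1]
```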

  19. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  20. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  1. Efecto de extractos vegetales de <em>Polygonum hydropiperoides</em>, <em>Solanum nigrum</em> y <em>Calliandra pittieri</em> sobre el gusano cogollero (<em>Spodoptera frugiperda</em>)

    Directory of Open Access Journals (Sweden)

    Lizarazo H. Karol

    2008-12-01

    Full Text Available

    The fall armyworm <em>Spodoptera frugiperda</em> is one of the pests that most affect crops in the Sumapaz region (Cundinamarca, Colombia). It is currently controlled mainly by applying synthetic chemical products; however, the application of plant extracts has emerged as an alternative with lower environmental impact. This control is used because plants contain secondary metabolites that can inhibit insect development. For this reason, the present study evaluated the insecticidal and antifeedant effect of plant extracts of barbasco <em>Polygonum hydropiperoides</em> (Polygonaceae), carbonero <em>Calliandra pittieri</em> (Mimosaceae) and black nightshade <em>Solanum nigrum</em> (Solanaceae) on corn-biotype larvae of <em>S. frugiperda</em>. Mass rearing of the insect was established in the laboratory using a natural diet of corn leaves. Plant extracts were then obtained using solvents of high polarity (water and ethanol) and medium polarity (dichloromethane), which were applied to second-instar larvae. The most notable results were obtained with the dichloromethane extracts of <em>P. hydropiperoides</em> at their various doses, which caused 100% mortality 12 days after application and an antifeedant effect reflected in corn foliage consumption below 4%, effects similar to those of the commercial control (Chlorpyrifos).

  2. Developing State and National Evaluation Infrastructures- Guidance for the Challenges and Opportunities of EM&V

    Energy Technology Data Exchange (ETDEWEB)

    Schiller, Steven R.; Goldman, Charles A.

    2011-06-24

    Evaluating the impacts and effectiveness of energy efficiency programs is likely to become increasingly important for state policymakers and program administrators given legislative mandates and regulatory goals and increasing reliance on energy efficiency as a resource. In this paper, we summarize three activities that the authors have conducted that highlight the expanded role of evaluation, measurement and verification (EM&V): a study that identified and analyzed challenges in improving and scaling up EM&V activities; a scoping study that identified issues involved in developing a national efficiency EM&V standard; and lessons learned from providing technical assistance on EM&V issues to states that are ramping up energy efficiency programs. The lessons learned are summarized in 13 EM&V issues that policy makers should address in each jurisdiction and which are listed and briefly described. The paper also discusses how improving the effectiveness and reliability of EM&V will require additional capacity building, better access to existing EM&V resources, new methods to address emerging issues and technologies, and perhaps foundational documents and approaches to improving the credibility and cross jurisdictional comparability of efficiency investments. Two of the potential foundational documents discussed are a national EM&V standard or resource guide and regional deemed savings and algorithm databases.

  3. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  4. Teaching Multiplication Algorithms from Other Cultures

    Science.gov (United States)

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
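One of the algorithms the article mentions, the Russian method, can be sketched directly: halve one factor, double the other, and add the doubled values wherever the halved column is odd. It works because it reads the first factor in binary.

```python
# Russian peasant multiplication.
def russian_peasant(a, b):
    total = 0
    while a > 0:
        if a % 2 == 1:       # odd: this power of two contributes
            total += b
        a //= 2              # halve, dropping the remainder
        b *= 2               # double
    return total

print(russian_peasant(13, 24))   # 312, same as 13 * 24
```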

  5. Algorithm for Shaffer's Multiple Comparison Tests.

    Science.gov (United States)

    Rasmussen, Jeffrey Lee

    1993-01-01

    J. P. Shaffer has presented two tests to improve the power of multiple comparison procedures. This article described an algorithm to carry out the tests. The logic of the algorithm and an application to a data set are given. (SLD)

  6. Trilateral market coupling. Algorithm appendix

    International Nuclear Information System (INIS)

    2006-03-01

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned, and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as input: 1 - the Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of the following day); 2 - the (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level (the NEC reflects a market's import or export volume sensitivity to price); 3 - the Block Orders submitted by the participants in
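The two situations described (price convergence versus congestion) can be sketched with a toy two-market model: linear net export curves and one interconnector with a given ATC. The curves and numbers are illustrative assumptions, not the actual coupling algorithm.

```python
# Toy two-market coupling: net export of market m at price p is
# slope_m * (p - p0_m). If the unconstrained flow fits within the ATC,
# prices converge; otherwise flow is capped and a price spread remains.
def couple(slope_a, p0_a, slope_b, p0_b, atc):
    # unconstrained common price: total net export sums to zero
    p_star = (slope_a * p0_a + slope_b * p0_b) / (slope_a + slope_b)
    flow = slope_a * (p_star - p0_a)          # export of market a
    if abs(flow) <= atc:
        return p_star, p_star, flow           # price convergence
    flow = atc if flow > 0 else -atc          # congestion: cap flow at ATC
    price_a = p0_a + flow / slope_a           # each market clears separately
    price_b = p0_b - flow / slope_b
    return price_a, price_b, flow

print(couple(10, 30, 10, 50, atc=200))   # ample capacity: both markets at 40
print(couple(10, 30, 10, 50, atc=50))    # congested: 35 vs 45, flow capped at 50
```

The price difference in the congested case is exactly the implicit cost of the transmission capacity described in the abstract.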

  7. An Uncertainty Analysis for Predicting Soil Profile Salinity Using EM Induction Data

    Science.gov (United States)

    Huang, Jingyi; Monteiro Santos, Fernando; Triantafilis, John

    2016-04-01

    Proximal soil sensing techniques such as electromagnetic (EM) induction have been used to identify and map the areal variation of average soil properties. However, soil varies with depth owing to the action of various soil forming factors (e.g., parent material and topography). In this work we collected EM data using an EM38 and an EM34 meter along a 22-km transect in the Trangie District, Australia. We jointly inverted these data using EM4Soil software and compared our 2-dimensional model of true electrical conductivity (sigma - mS/m) with depth against the measured electrical conductivity of a saturated soil-paste extract (ECe - dS/m) at depths of 0-16 m. Through the use of a linear regression (LR) model and by varying the forward modelling algorithm (cumulative function and full solution), inversion algorithm (S1 and S2), and damping factor (lambda), we determined a suitable electromagnetic conductivity image (EMCI), which was optimal when using the full solution, S2 and lambda = 0.6. To evaluate the uncertainty of the inversion process and the LR model, we conducted an uncertainty analysis. The distribution of the model misfit shows that the largest uncertainty caused by inversion (mostly due to the EM34-40) occurs in deeper profiles, while the largest uncertainty of the LR model occurs where the soil profile is most saline. These uncertainty maps also illustrate how the model accuracy can be improved in the future.

  8. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based a...
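The opposition-based learning step mentioned above can be sketched on its own: alongside a random population, evaluate each candidate's "opposite" (mirrored across the search bounds) and keep the better half. The benchmark function and bounds are illustrative; the fireworks machinery itself is not reproduced.

```python
import numpy as np

# Opposition-based learning (OBL) sketch on the sphere benchmark.
rng = np.random.default_rng(3)

def sphere(x):                       # a standard benchmark function
    return (x ** 2).sum(axis=-1)

lo, hi, n, dim = -5.0, 5.0, 10, 2
pop = rng.uniform(lo, hi, size=(n, dim))
opposite = lo + hi - pop             # opposition point: x' = a + b - x
both = np.vstack([pop, opposite])
fitness = sphere(both)
keep = both[np.argsort(fitness)[:n]]            # best n of the 2n candidates
print(sphere(keep).mean() <= sphere(pop).mean())   # True by construction
```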

  9. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  10. A Deterministic and Polynomial Modified Perceptron Algorithm

    Directory of Open Access Journals (Sweden)

    Olof Barr

    2006-01-01

    Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
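For reference, the classical perceptron update that such modified algorithms build on is: cycle through the examples and add each misclassified one to the weight vector. The deterministic polynomial modifications of the paper are not reproduced here; this is the textbook baseline on a toy separable set.

```python
# Classical perceptron on a linearly separable toy problem.
def perceptron(examples, labels, max_epochs=100):
    w = [0.0] * len(examples[0])
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(examples, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]   # update on a mistake
                mistakes += 1
        if mistakes == 0:
            return w            # all examples correctly separated
    return w

X = [(1.0, 2.0), (2.0, 1.0), (-1.0, -2.0), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w = perceptron(X, y)
print(all(yi * (w[0] * x1 + w[1] * x2) > 0 for (x1, x2), yi in zip(X, y)))  # True
```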

  11. A Euclidean algorithm for integer matrices

    DEFF Research Database (Denmark)

    Lauritzen, Niels; Thomsen, Jesper Funch

    2015-01-01

    We present a Euclidean algorithm for computing a greatest common right divisor of two integer matrices. The algorithm is derived from elementary properties of finitely generated modules over the ring of integers.
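The scalar analogue of this matrix algorithm is the familiar Euclidean algorithm, in which division with remainder plays the role that unimodular row operations play in the matrix setting. A minimal sketch:

```python
def gcd(a, b):
    """Scalar Euclidean algorithm: repeatedly replace (a, b) by
    (b, a mod b). The greatest common right divisor of two integer
    matrices is computed by the analogous reduction with row
    operations over the integers."""
    while b:
        a, b = b, a % b
    return abs(a)
```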

  12. A New Perspective on Randomized Gossip Algorithms

    OpenAIRE

    Loizou, Nicolas; Richtárik, Peter

    2016-01-01

    In this short note we propose a new approach for the design and analysis of randomized gossip algorithms which can be used to solve the average consensus problem. We show that the Randomized Block Kaczmarz (RBK) method, a method for solving linear systems, works as a gossip algorithm when applied to a special system encoding the underlying network. The famous pairwise gossip algorithm arises as a special case. Subsequently, we reveal a hidden duality of randomized gossip algorithms, with the ...

  13. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior -point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  14. Primeiras frases em Libras

    Directory of Open Access Journals (Sweden)

    Comissão Editorial

    2017-02-01

    Full Text Available "Primeiras Frases em Libras" is a CD-ROM with an interactive interface aimed at introducing Brazilian Sign Language (Libras). Starting from everyday themes, it allows children to relate an image to a Libras sentence structure in a playful way, contributing to the acquisition of concepts and cultural aspects. When using this material it is important to identify the regional differences that exist in some signs and to adapt them to the local Libras, which becomes a further enriching exercise for the acquisition and practice of sign language. More information about the material is available on the Editora Arara Azul website: www.editora-arara-azul.com.br

  15. Quedas em idosos

    OpenAIRE

    Meneses, Joana Gonçalves de

    2016-01-01

    Final project of the master's degree in sports medicine for the award of the Master's degree (scientific area: geriatrics), presented to the Faculty of Medicine of the University of Coimbra. Falls in the elderly population are a major public health problem, given their physical, psychological, economic and social dimensions. Underlying the occurrence of these events is a series of risk factors, such as age, lack of balance, a sedentary lifestyle, chronic diseases, polime...

  16. Hipervitaminose D em animais

    Directory of Open Access Journals (Sweden)

    Paulo V. Peixoto

    2012-07-01

    Full Text Available Through a literature review, data are presented on the metabolism of vitamin D, as well as the main toxicological, clinical, biochemical, macroscopic, microscopic, ultrastructural, immunohistochemical and radiographic aspects of animals naturally and experimentally intoxicated with this vitamin, in different species. This study aims to demonstrate the existence of many gaps in the knowledge of physiological and pathological mineralization, especially regarding the hormonal mediation of the phenomenon, and to draw attention to the risk of occurrence of this intoxication.

  17. Exploring applications of crowdsourcing to cryo-EM.

    Science.gov (United States)

    Bruggemann, Jacob; Lander, Gabriel C; Su, Andrew I

    2018-02-24

    Extraction of particles from cryo-electron microscopy (cryo-EM) micrographs is a crucial step in processing single-particle datasets. Although algorithms have been developed for automatic particle picking, these algorithms generally rely on two-dimensional templates for particle identification, which may exhibit biases that can propagate artifacts through the reconstruction pipeline. Manual picking is viewed as a gold-standard solution for particle selection, but it is too time-consuming to perform on data sets of thousands of images. In recent years, crowdsourcing has proven effective at leveraging the open web to manually curate datasets. In particular, citizen science projects such as Galaxy Zoo have shown the power of appealing to users' scientific interests to process enormous amounts of data. To this end, we explored the possible applications of crowdsourcing in cryo-EM particle picking, presenting a variety of novel experiments including the production of a fully annotated particle set from untrained citizen scientists. We show the possibilities and limitations of crowdsourcing particle selection tasks, and explore further options for crowdsourcing cryo-EM data processing. Copyright © 2018. Published by Elsevier Inc.

  18. Engineering a cache-oblivious sorting algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  19. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    Science.gov (United States)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  20. Discrete Riccati equation solutions: Distributed algorithms

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computational efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  1. Successive combination jet algorithm for hadron collisions

    International Nuclear Information System (INIS)

    Ellis, S.D.; Soper, D.E.

    1993-01-01

    Jet finding algorithms, as they are used in e+e− and hadron collisions, are reviewed and compared. It is suggested that a successive combination style algorithm, similar to that used in e+e− physics, might be useful also in hadron collisions, where cone style algorithms have been used previously

  2. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  3. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  4. Storage capacity of the Tilinglike Learning Algorithm

    International Nuclear Information System (INIS)

    Buhot, Arnaud; Gordon, Mirta B.

    2001-01-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered

  5. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords : searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996

  6. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  7. Portfolio selection using genetic algorithms | Yahaya | International ...

    African Journals Online (AJOL)

    In this paper, one of the nature-inspired evolutionary algorithms, a Genetic Algorithm (GA), was used in solving the portfolio selection problem (PSP). Based on a real dataset from a popular stock market, the performance of the algorithm in relation to those obtained from one of the popular quadratic programming (QP) ...

  8. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  9. Hardware Acceleration of Sparse Cognitive Algorithms

    Science.gov (United States)

    2016-05-01

    it is clear that these emerging algorithms, which can support unsupervised or lightly supervised learning as well as incremental learning, map poorly... Distribution unlimited. Subject terms: cortical algorithms; machine learning; hardware; VLSI; ASIC.

  10. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.

    2013-12-01

    This paper is devoted to the consideration of a new algorithm for reduct cardinality minimization. This algorithm transforms the initial table to a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  11. CORANTES ARTIFICIAIS EM ALIMENTOS

    Directory of Open Access Journals (Sweden)

    Marcelo Alexandre PRADO

    2009-07-01

    Full Text Available

    The use of chemical additives is undoubtedly one of the most controversial advances achieved by the food industry. Artificial colourants belong to one of these classes of food additives and have been the subject of much criticism, since their use in many foods is justified only by dietary habits. Opinions still differ as to the safety of the various artificial colourants. Aiming mainly at controlling the use of synthetic colourants, but considering that artificially coloured products are exported and imported, the analysis of these additives requires efficient and rapid methods for detection, identification and quantification. Paper and thin-layer chromatography, although relatively quick techniques, yield data of low accuracy and precision. In high-performance liquid chromatography (HPLC), the greatest difficulties lie in the extraction steps, but mainly in the high cost of the equipment. Capillary electrophoresis presents the same problems as HPLC, in addition to being a relatively recent technique for the analysis of this type of substance, so few studies exist on its determination and quantification. KEYWORDS: artificial colourants; analysis; legislation; HPLC; CE

  12. O estresse em escolares

    Directory of Open Access Journals (Sweden)

    Marilda E. Novaes Lipp

    Full Text Available The presence of stress symptoms was investigated in a sample of 255 schoolchildren aged 7 to 14 years, from three different types of school (municipal, private, and private confessional philanthropic). The data were analysed in terms of differences between schools, sex, and elementary school grade. It was found that the type of school was strongly associated with the students' stress level and that the number of girls with stress was significantly higher than that of boys. It was also found that stress decreased in the higher grades and was most present in the first grade. It can be concluded that schools play a relevant role in childhood stress and that a school can present low levels of stress, depending on its characteristics.

  13. Rapid Development of Microsatellite Markers with 454 Pyrosequencing in a Vulnerable Fish, the Mottled Skate, Raja pulchra

    Directory of Open Access Journals (Sweden)

    Jung-Ha Kang

    2012-06-01

    Full Text Available The mottled skate, Raja pulchra, is an economically valuable fish. However, due to a severe population decline, it is listed as a vulnerable species by the International Union for Conservation of Nature. To analyze its genetic structure and diversity, microsatellite markers were developed using 454 pyrosequencing. A total of 17,033 reads containing dinucleotide microsatellite repeat units (mean, 487 base pairs) were identified from 453,549 reads. Among 32 loci containing more than nine repeat units, 20 primer sets (62%) produced strong PCR products, of which 14 were polymorphic. In an analysis of 60 individuals from two R. pulchra populations, the number of alleles per locus ranged from 1–10, and the mean allelic richness was 4.7. No linkage disequilibrium was found between any pair of loci, indicating that the markers were independent. The Hardy–Weinberg equilibrium test showed significant deviation in two of the 28 single loci after sequential Bonferroni's correction. Using 11 primer sets, cross-species amplification was demonstrated in nine related species from four families within two classes. Among the 11 loci amplified from three other Rajidae family species, three loci were polymorphic. A monomorphic locus was amplified in all three Rajidae family species and the Dasyatidae family. Two Rajidae polymorphic loci amplified monomorphic target DNAs in four species belonging to the Carcharhiniformes class, and another was polymorphic in two Carcharhiniformes species.

  14. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
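As an illustration of the proximal operators that such solvers evaluate, the proximal operator of the scaled L1 norm has a well-known closed form, elementwise soft-thresholding. This is a standard textbook example, not the SDMM/PPXA machinery of the paper itself.

```python
def prox_l1(x, t):
    """Proximal operator of t * ||.||_1, i.e. the minimizer of
    t * ||u||_1 + 0.5 * ||u - x||^2, computed elementwise by
    soft-thresholding: shrink each entry toward zero by t."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)
            for v in x]
```

Proximal splitting methods such as SDMM and PPXA alternate evaluations of simple operators like this one with applications of the (adjoint) linear degradation operators, which is what makes a matrix-free, GPU-friendly implementation possible.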

  15. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the cognitive activity of choosing or judging, and scientific decision making has long been a central research topic. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express subjective judgments in numerical form. In decision analysis using the AHP method, the consistency of the pairwise judgment matrix has a great influence on the decision result. However, when dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning abilities; it can refine the data by repeatedly modifying the weights and thresholds of the network so as to minimize the mean square error. In this paper, the BP algorithm is used to address the consistency of the pairwise judgment matrix in the AHP.
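The consistency requirement discussed above can be sketched with Saaty's consistency index, computed here from a row-geometric-mean priority vector. This is a common approximation for illustration; the BP-network-based repair proposed in the paper is not shown.

```python
from math import prod

def consistency_index(A):
    """Saaty's consistency index CI = (lambda_max - n) / (n - 1) for a
    pairwise judgment matrix A, estimating lambda_max from the
    row-geometric-mean priority vector (an approximation, not the
    BP-based approach of the paper)."""
    n = len(A)
    gm = [prod(row) ** (1.0 / n) for row in A]       # row geometric means
    s = sum(gm)
    w = [g / s for g in gm]                          # normalised priorities
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n    # estimate of lambda_max
    return (lam - n) / (n - 1)
```

A perfectly consistent matrix (every entry a_ij equal to w_i / w_j) yields a CI of zero; in practice CI is divided by a tabulated random index to obtain the consistency ratio that must stay below 0.1.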

  16. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering.

    Science.gov (United States)

    Bettinardi, V; Alenius, S; Numminen, P; Teräs, M; Gilardi, M C; Fazio, F; Ruotsalainen, U

    2003-02-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images, of low count statistics, were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC of within 5% for a TR scan of 1 min reconstructed with the OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference of within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  17. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images, of low count statistics, were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC of within 5% for a TR scan of 1 min reconstructed with the OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference of within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  18. Solidariedade em Redes

    Directory of Open Access Journals (Sweden)

    Angie Gomes Gomes Biondi

    2015-03-01

    Full Text Available The discourse of common humanity that linked sufferer and spectator on the basis of a compassionate morality has become obsolete in the face of the demands of a technological, multicultural and pluralist society. Over the repertoire of protest and denunciation, the privileged instruments of modernity, new appeals now prevail to the so-called "humanitarian sensibility" (CHOULIARAKI, 2013), addressed directly to each connected social subject. Thus, a profusion of individual causes accumulates on social networks every day (not rarely multiplied by the traditional media). These causes declare themselves legitimate and justifiable in times of precarious and insufficient participation by the State, and they are oriented towards direct action for victims and the oppressed. However, the affective interactions underlying these solidarity appeals conform to the logic of a flexible capitalism that takes life itself, in its creative dimension, as a nucleus of economic production, that is, as a form of capitalization of everyday life itself. In this text we develop a descriptive account of these calls to solidarity as a practice based on the connectionist logic that has prevailed in our society. Some cases are brought in to consider the place of the victim as a privileged instance of his or her own enunciation, the mechanisms of visibility articulated in network-modulated communication, and the extent to which such practices can be thought of as a kind of updating of solidarity actions based on a "politics of connectionism", as indicated by the studies of Boltanski and Chiapello (2013).

  19. Acarofauna em plantas ornamentais

    Directory of Open Access Journals (Sweden)

    Jania Claudia Camilo dos Santos

    2014-10-01

    Full Text Available The cultivation and trade of ornamental plants has been gaining ground in Brazil, owing to the great variety of existing species and the exuberance of their flowers, which enrich their surroundings. The objective of this work was to survey the mite population associated with ornamental plants in the municipality of Arapiraca-AL, in view of the various problems caused by these species. The survey was carried out between the months of April and March, through monthly samplings of leaves collected from the basal, intermediate and apical parts of plants in squares and gardens. Fifty-five mites belonging to the order Prostigmata were collected on 20 plant families. The plants with the greatest mite richness were Coleus blumei L. and Buxus sempervirens L., which accounted for 65% of the sampled values. Analysing the collections, a higher mite population incidence was observed in the May collection, at 36% of the mites surveyed; in March the percentage was 14%, while in April and June it was 22% and 28%, respectively. The survey of mites on ornamental plants made it possible to observe the relationship between the mites and the host plant, facilitating later, more in-depth study of host plants, and it was observed that in rainy periods the mite population incidence is lower.

  20. Filtering algorithm for dotted interferences

    Energy Technology Data Exchange (ETDEWEB)

    Osterloh, K., E-mail: kurt.osterloh@bam.de [Federal Institute for Materials Research and Testing (BAM), Division VIII.3, Radiological Methods, Unter den Eichen 87, 12205 Berlin (Germany); Buecherl, T.; Lierse von Gostomski, Ch. [Technische Universitaet Muenchen, Lehrstuhl fuer Radiochemie, Walther-Meissner-Str. 3, 85748 Garching (Germany); Zscherpel, U.; Ewert, U. [Federal Institute for Materials Research and Testing (BAM), Division VIII.3, Radiological Methods, Unter den Eichen 87, 12205 Berlin (Germany); Bock, S. [Technische Universitaet Muenchen, Lehrstuhl fuer Radiochemie, Walther-Meissner-Str. 3, 85748 Garching (Germany)

    2011-09-21

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a major challenge encountered with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc. all hitting the detector CCD directly in spite of a sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to resolve this problem of random effects would be to collect a vast number of single images, to combine them appropriately and to process them with common image filtering procedures. However, it has been shown that, e.g. median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This inevitably makes visually controlled processing, image by image, unavoidable. Particularly in tomographic studies, it would be by far too tedious to treat each single projection in this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative filtering algorithm, parameter-free within a batch procedure, aiming to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
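A simplified sketch of selective median filtering in this spirit: a pixel is replaced by its neighbourhood median only when it deviates strongly from that neighbourhood, so genuine structures and edges are left untouched. This is an illustrative assumption, not the actual NECTAR batch algorithm.

```python
def despeckle(img, k=1.5):
    """Selective 3x3 median filter for a 2-D image (list of rows):
    replace a pixel by its neighbourhood median only when it deviates
    from the median by more than k times the neighbourhood spread.
    A simplified sketch, not the published NECTAR algorithm."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            # Neighbourhood excluding the centre pixel itself.
            nb = [img[y][x]
                  for y in range(max(0, i - 1), min(h, i + 2))
                  for x in range(max(0, j - 1), min(w, j + 2))
                  if (y, x) != (i, j)]
            med = sorted(nb)[len(nb) // 2]
            spread = max(nb) - min(nb) + 1e-9
            if abs(img[i][j] - med) > k * spread:
                out[i][j] = med   # isolated outlier: replace
    return out
```

Because only outliers are touched, iterating this filter over a stack of projections can run unattended in a batch, which is what makes it practical for tomographic data.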

  1. How Varroa Parasitism Affects the Immunological and Nutritional Status of the Honey Bee, Apis mellifera

    Directory of Open Access Journals (Sweden)

    Katherine A. Aronstein

    2012-06-01

    Full Text Available We investigated the effect of the parasitic mite Varroa destructor on the immunological and nutritional condition of honey bees, Apis mellifera, from the perspective of the individual bee and the colony. Pupae, newly-emerged adults and foraging adults were sampled from honey bee colonies at one site in S. Texas, USA. Varroa-infested bees displayed elevated titer of Deformed Wing Virus (DWV), suggestive of depressed capacity to limit viral replication. Expression of genes coding three anti-microbial peptides (defensin1, abaecin, hymenoptaecin) was either not significantly different between Varroa-infested and uninfested bees or was significantly elevated in Varroa-infested bees, varying with sampling date and bee developmental age. The effect of Varroa on nutritional indices of the bees was complex, with protein, triglyceride, glycogen and sugar levels strongly influenced by life-stage of the bee and individual colony. Protein content was depressed and free amino acid content elevated in Varroa-infested pupae, suggesting that protein synthesis, and consequently growth, may be limited in these insects. No simple relationship between the values of nutritional and immune-related indices was observed, and colony-scale effects were indicated by the reduced weight of pupae in colonies with high Varroa abundance, irrespective of whether the individual pupa bore Varroa.

  2. Deacidification of Pistacia chinensis Oil as a Promising Non-Edible Feedstock for Biodiesel Production in China

    Directory of Open Access Journals (Sweden)

    Yuan Meng

    2012-07-01

    Full Text Available Pistacia chinensis seed oil is proposed as a promising non-edible feedstock for biodiesel production. Different extraction methods were tested and compared to obtain crude oil from the seed of Pistacia chinensis, along with various deacidification measures for the refined oil. The biodiesel was produced through catalysis by sodium hydroxide (NaOH) and potassium hydroxide (KOH). The results showed that the acid value of Pistacia chinensis oil was successfully reduced to 0.23 mg KOH/g when it was extracted using ethanol. Consequently, the biodiesel product gave a high yield beyond 96.0%. The transesterification catalysed by KOH was also more complete. Fourier transform infrared (FTIR) spectroscopy was used to monitor the transesterification reaction. Analyses by gas chromatography-mass spectrometry (GC-MS) and gas chromatography with a flame ionisation detector (GC-FID) certified that the Pistacia chinensis biodiesel mainly consisted of C18 fatty acid methyl esters (81.07%) with a high percentage of methyl oleate. Furthermore, the measured fuel properties of the biodiesel met the required standards for fuel use. In conclusion, Pistacia chinensis biodiesel is a qualified and feasible substitute for fossil diesel.

  3. On subspecific taxonomy of Microtus savii (Rodentia, Arvicolidae)

    Directory of Open Access Journals (Sweden)

    Longino Contoli

    2003-10-01

    Full Text Available Abstract: The subspecific taxonomy of Microtus (Terricola) savii (Rodentia, Arvicolidae) is reviewed and summarised, including the description of two new taxa: Microtus (Terricola) savii tolfetanus, from the Tolfa mountains, and Microtus (Terricola) savii niethammericus, from the Gargano.

  4. Statistical Mechanics Algorithms and Computations

    CERN Document Server

    Krauth, Werner

    2006-01-01

    This book discusses the computational approach in modern statistical physics, adopting simple language and an attractive format of many illustrations, tables and printed algorithms. The discussion of key subjects in classical and quantum statistical physics will appeal to students, teachers and researchers in physics and related sciences. The focus is on orientation, with implementation details kept to a minimum.

  5. Algorithms for optimizing drug therapy

    Directory of Open Access Journals (Sweden)

    Martin Lene

    2004-07-01

    Full Text Available Abstract Background Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet drug use is reported to be suboptimal in several respects, such as dosage, patients' adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. Methods One hundred and ten officially endorsed text documents, published between 1996 and 2004 and containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and the relationships between these factors. This information was used to construct algorithms for identifying optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). Results Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second type, a more elaborate model, is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs
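
The second algorithm type described above (drug-class selection from patient characteristics via if-then statements) can be sketched as follows; all factor names and drug classes here are hypothetical illustrations, not taken from the analysed guideline documents:

```python
def select_drug_class(patient):
    """Type-2 algorithm sketch: pick a drug class from patient factors.

    The factors and classes below are made up for illustration; a real
    implementation would encode the rules of a specific guideline.
    """
    if patient.get("pregnant"):
        return "class-A (safe in pregnancy)"
    if patient.get("renal_impairment"):
        return "class-B (non-renally cleared)"
    if patient.get("age", 0) >= 65:
        return "class-C (reduced starting dose)"
    return "class-D (first-line)"
```

A call such as `select_drug_class({"age": 70})` walks the rules top to bottom and returns the first class whose condition matches, mirroring the guideline-derived if-then structure.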

  6. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  7. Algorithmes Efficaces en Calcul Formel

    OpenAIRE

    Bostan, Alin; Chyzak, Frédéric; Giusti, Marc; Lebreton, Romain; Lecerf, Grégoire; Salvy, Bruno; Schost, Eric

    2017-01-01

    See the book's page at \url{https://hal.archives-ouvertes.fr/AECF/}; International audience; Computer algebra treats exact mathematical objects from a computational point of view. This book, "Algorithmes efficaces en calcul formel", explores two directions: computability and complexity. Computability studies the classes of mathematical objects about which answers can be obtained algorithmically. Complexity then provides tools for comparing algo...

  8. Integrated Association Rules Complete Hiding Algorithms

    Directory of Open Access Journals (Sweden)

    Mohamed Refaat Abdellah

    2017-01-01

    Full Text Available This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the database modifications needed and to support complete hiding, while also reducing knowledge distortion and data distortion. The complete weighted hiding algorithms improve hiding failure by 100%, and they have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.
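
A heavily simplified, hypothetical sketch of distortion-based rule hiding (removing the consequent items from supporting transactions until the sensitive rule's support drops below threshold); the paper's six weighted algorithms and database-embedded design are far more elaborate:

```python
def hide_rule(transactions, antecedent, consequent, min_support):
    """Sanitize a list of transaction sets so that the sensitive rule
    antecedent -> consequent falls below min_support.

    This is an illustrative distortion-based sketch, not the paper's
    weighted algorithms: it simply strips the consequent item(s) from
    supporting transactions until support is low enough.
    """
    items = antecedent | consequent
    n = len(transactions)

    def support():
        # fraction of transactions containing all items of the rule
        return sum(items <= t for t in transactions) / n

    for t in transactions:
        if support() < min_support:
            break                      # rule is already hidden
        if items <= t:
            t -= consequent            # in-place removal of the consequent
    return transactions
```

Because modifications stop as soon as support falls below the threshold, data distortion is kept to the minimum this greedy strategy can achieve.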

  9. New Algorithm For Calculating Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Piotr Lipinski

    2009-04-01

    Full Text Available In this article we introduce a new algorithm for computing Discrete Wavelet Transforms (DWT). The algorithm aims at reducing the number of multiplications required to compute a DWT. The algorithm is general and can be used to compute a variety of wavelet transforms (e.g., Daubechies and CDF wavelets). Here we focus on the CDF 9/7 filters, which are used in the JPEG 2000 compression standard. We show that the algorithm outperforms convolution-based and lifting-based algorithms in terms of the number of multiplications.
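
For reference, the lifting-based CDF 9/7 transform that the article's algorithm is compared against can be sketched as follows, using the standard lifting coefficients from the Daubechies-Sweldens factorization. The boundary handling and the final scaling convention here are one common choice among several; this is an illustrative baseline, not the article's algorithm:

```python
# Standard CDF 9/7 lifting coefficients (Daubechies-Sweldens factorization).
A, B, G, D = -1.586134342, -0.052980118, 0.882911076, 0.443506852
ZETA = 1.149604398  # final scaling constant

def cdf97_forward(signal):
    """One level of the forward CDF 9/7 DWT via lifting, for an
    even-length input, with simple symmetric boundary handling."""
    x = list(map(float, signal))
    n = len(x)
    for i in range(1, n - 1, 2):           # predict 1
        x[i] += A * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * A * x[n - 2]
    for i in range(2, n, 2):               # update 1
        x[i] += B * (x[i - 1] + x[i + 1])
    x[0] += 2 * B * x[1]
    for i in range(1, n - 1, 2):           # predict 2
        x[i] += G * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * G * x[n - 2]
    for i in range(2, n, 2):               # update 2
        x[i] += D * (x[i - 1] + x[i + 1])
    x[0] += 2 * D * x[1]
    # scale and de-interleave into approximation / detail coefficients
    approx = [ZETA * x[i] for i in range(0, n, 2)]
    detail = [x[i] / ZETA for i in range(1, n, 2)]
    return approx, detail
```

A quick sanity check: on a constant signal the detail coefficients vanish and the approximation coefficients carry a gain of the square root of 2.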

  10. MSDR-D Network Localization Algorithm

    Science.gov (United States)

    Coogan, Kevin; Khare, Varun; Kobourov, Stephen G.; Katz, Bastian

    We present a distributed multi-scale dead-reckoning (MSDR-D) algorithm for network localization that utilizes local distance and angular information for nearby sensors. The algorithm is anchor-free and does not require a particular network topology, rigidity of the underlying communication graph, or high average connectivity. The algorithm scales well to large and sparse networks with complex topologies and outperforms previous algorithms when the noise levels are high. The algorithm is simple to implement and is available, along with source code, executables, and experimental results, at http://msdr-d.cs.arizona.edu/.

  11. New algorithms for binary wavefront optimization

    Science.gov (United States)

    Zhang, Xiaolong; Kner, Peter

    2015-03-01

    Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments Digital Micromirror Device. Here we report an enhancement of 152 with 1536 segments (9.90%×N) using a genetic algorithm with binary amplitude modulation and an enhancement of 136 with 1536 segments (8.9%×N) using an intensity-only transmission matrix algorithm.
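
A minimal sketch of a genetic algorithm for binary amplitude masks, with a toy random-phasor medium standing in for the real scattering sample; population size, mutation rate and the fitness model are illustrative assumptions, not the article's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy medium: each DMD segment contributes a random complex field at the
# focus (a stand-in for a real scattering sample, not the article's setup).
N = 64
t = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def intensity(mask):
    """Focus intensity produced by a binary on/off segment mask."""
    return abs(np.dot(mask, t)) ** 2

def genetic_optimize(generations=200, pop_size=30, mutate_p=0.02):
    """Elitist genetic search over binary masks: keep the fitter half,
    refill the rest with uniform-crossover children plus rare bit flips."""
    pop = rng.integers(0, 2, size=(pop_size, N))
    for _ in range(generations):
        order = np.argsort([-intensity(m) for m in pop])
        pop = pop[order]                        # best masks first
        for i in range(pop_size // 2, pop_size):
            parents = pop[rng.integers(0, pop_size // 2, size=2)]
            cross = rng.integers(0, 2, N).astype(bool)
            child = np.where(cross, parents[0], parents[1])
            child[rng.random(N) < mutate_p] ^= 1   # mutation: flip bits
            pop[i] = child
    best = max(pop, key=intensity)
    return best, intensity(best)
```

Because the elite half always survives, the best focus intensity is non-decreasing across generations, which is the property that makes such searches robust to measurement noise.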

  12. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily reaches a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
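
The incremental search at the heart of global k-means can be sketched as follows (an empty-cluster guard stands in for the singleton-cluster fix, and the MinMax weighting of the proposed variant is omitted for brevity):

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain Lloyd iterations from the given initial centers."""
    centers = centers.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            pts = X[labels == j]
            if len(pts):                 # guard against empty clusters
                centers[j] = pts.mean(0)
    return centers, labels

def global_kmeans(X, k):
    """Incremental global k-means sketch: grow from 1 to k centers,
    deterministically trying every data point as the next initial center
    and keeping the run with the lowest clustering error."""
    centers = X.mean(0, keepdims=True)   # the 1-means solution
    for _ in range(2, k + 1):
        best, best_err = None, np.inf
        for x in X:
            c, lab = kmeans(X, np.vstack([centers, x]))
            err = ((X - c[lab]) ** 2).sum()
            if err < best_err:
                best, best_err = c, err
        centers = best
    return centers
```

The deterministic sweep over candidate centers is what removes the dependence on random initialization, at the cost of running k-means once per data point per added cluster.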

  13. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two others are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
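
The serial LRU stack-distance computation that such parallel algorithms build on can be sketched as follows; a reference hits in a fully associative LRU cache of size C exactly when its stack distance is below C, so one pass over the trace yields hit ratios for every cache size at once:

```python
def stack_distances(trace):
    """Serial LRU stack-distance computation.

    For each reference, report how deep its tag sits in the LRU stack
    (0 = most recently used; infinity on a cold miss), then move the tag
    to the top. A naive list is used for clarity; efficient versions use
    tree structures.
    """
    stack, dists = [], []
    for tag in trace:
        if tag in stack:
            d = stack.index(tag)       # depth in the LRU stack
            stack.remove(tag)
        else:
            d = float("inf")           # cold miss: never seen before
        stack.insert(0, tag)           # tag becomes most recently used
        dists.append(d)
    return dists
```

For example, `stack_distances(["a", "b", "a", "c", "b"])` yields distances `[inf, inf, 1, inf, 2]`: the second "a" hits in any cache of size 2 or more, while the second "b" needs a cache of size 3.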

  14. Smell Detection Agent Based Optimization Algorithm

    Science.gov (United States)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is presented: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents to resolve a path, and it can be applied to computational problems that involve path finding. Implementation of the algorithm can be treated as a shortest-path problem for a variety of datasets, and the simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and it can be extended to a broad class of shortest-path problems.
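
One loose way to read the smell-trail idea in graph terms: let the target emit a "smell" field over the nodes and have an agent climb the gradient. In an unweighted graph this recovers a shortest path. This is a hypothetical interpretation for illustration, not the article's agent model:

```python
from collections import deque

def smell_path(graph, start, target):
    """Toy smell-trail search on an undirected graph (dict: node -> list
    of neighbours). The target 'emits' a smell field (here, BFS hop
    distance), and an agent greedily walks toward stronger smell."""
    # smell diffuses outward from the target: closer nodes smell stronger
    dist = {target: 0}
    q = deque([target])
    while q:
        v = q.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    if start not in dist:
        return None                    # no trail reaches the start
    path, v = [start], start
    while v != target:                 # agent follows increasing smell
        v = min(graph[v], key=lambda w: dist.get(w, float("inf")))
        path.append(v)
    return path
```

Each greedy step decreases the hop distance by exactly one, so the walk terminates at the target along a shortest route.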

  15. Learning algorithms and automatic processing of languages

    International Nuclear Information System (INIS)

    Fluhr, Christian Yves Andre

    1977-01-01

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to explain how these mechanisms are simulated on a computer, and outlines the specific role of learning in various manifestations of intelligence. Then, based on the theory of Markov algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are addressed: firstly, a sanction-based algorithm of the 'algorithm-teacher dialogue' type, which aims at learning how to resolve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system, which automatically structures semantic data obtained from a set of texts so that questions about the content of these texts can be answered by reference to them.

  16. Active noise cancellation algorithms for impulsive noise.

    Science.gov (United States)

    Li, Peng; Yu, Xun

    2013-04-01

    Impulsive noise is an important challenge for the practical implementation of active noise control (ANC) systems. The advantages and disadvantages of the popular filtered-X least mean square (FXLMS) ANC algorithm and the nonlinear filtered-X least mean M-estimate (FXLMM) algorithm are discussed in this paper. A new modified FXLMM algorithm is also proposed to achieve better performance in controlling impulsive noise. Computer simulations and experiments are carried out for all three algorithms and the results are presented and analyzed. The results show that the FXLMM and modified FXLMM algorithms are more robust in suppressing the adverse effect of sudden large amplitude impulses than the FXLMS algorithm, and in particular, the proposed modified FXLMM algorithm can achieve better stability without sacrificing the performance of residual noise when encountering impulses.
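
A minimal single-channel FXLMS sketch for reference (synthetic FIR paths and signals; illustrative, not the paper's experimental setup):

```python
import numpy as np

def fxlms(ref, noise_at_error, sec_path, sec_path_est, L=16, mu=0.01):
    """Single-channel FXLMS sketch: y = w.x is the anti-noise, the error
    is the noise minus the anti-noise after the (FIR) secondary path, and
    the weights adapt along the secondary-path-filtered reference."""
    w = np.zeros(L)                       # adaptive control filter
    xbuf = np.zeros(L)                    # reference history
    fxbuf = np.zeros(L)                   # filtered-reference history
    ybuf = np.zeros(len(sec_path))        # anti-noise history for sec. path
    errors = np.empty(len(ref))
    for n in range(len(ref)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = ref[n]
        y = w @ xbuf                      # anti-noise output
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = noise_at_error[n] - sec_path @ ybuf   # residual at error mic
        fxbuf = np.roll(fxbuf, 1)
        fxbuf[0] = sec_path_est @ xbuf[:len(sec_path_est)]
        w = w + mu * e * fxbuf            # filtered-x LMS update
        errors[n] = e
    return errors
```

With a tonal reference and a well-modelled secondary path, the residual error decays toward zero; the FXLMM variants discussed in the paper replace the `e`-proportional update with an M-estimate to bound the influence of impulses.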

  17. Formal verification of a deadlock detection algorithm

    Directory of Open Access Journals (Sweden)

    Freek Verbeek

    2011-10-01

    Full Text Available Deadlock detection is a challenging issue in the analysis and design of on-chip networks. We have designed an algorithm to detect deadlocks automatically in on-chip networks with wormhole switching. The algorithm has been specified and proven correct in ACL2. To enable a top-down proof methodology, some parts of the algorithm have been left unimplemented. For these parts, the ACL2 specification contains constrained functions introduced with defun-sk. We used single-threaded objects to represent the data structures used by the algorithm. In this paper, we present details on the proof of correctness of the algorithm. The process of formal verification was crucial to get the algorithm flawless. Our ultimate objective is to have an efficient executable, and formally proven correct implementation of the algorithm running in ACL2.
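
The core check behind wormhole deadlock detection is a cycle search in a channel-dependency graph; a toy sketch of that check (far simpler than the verified ACL2 algorithm, and not its implementation) is:

```python
def has_cycle(graph):
    """DFS cycle search in a dependency graph (dict: node -> successor
    list; every successor must also be a key). A cycle of channel
    dependencies is the classical necessary condition for wormhole
    deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY                   # v is on the current DFS path
        for w in graph[v]:
            if color[w] == GRAY:          # back edge: cycle found
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK                  # fully explored, no cycle via v
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)
```

The value of the article's approach is precisely that such a check, easy to sketch but easy to get subtly wrong on real routing functions, is specified and mechanically proven correct in ACL2.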

  18. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature phenomena of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which forms a well-spread distribution over the feasible solution space by virtue of the randomicity and non-repetitive ergodicity of chaos; a simple quantum rotation gate to update non-optimal individuals of the population, reducing the amount of computation; and a hybrid chaotic search strategy to speed up convergence and enhance the global search ability. A large number of tests show that the proposed algorithm has higher convergence speed and better optimizing ability than the quantum evolutionary algorithm, the real-coded quantum evolutionary algorithm and the hybrid quantum genetic algorithm. Tests also show that when chaos…
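
Chaotic initialization of the kind described is commonly done with the logistic map; a minimal sketch (map choice and parameter values are illustrative, not taken from the article):

```python
def chaotic_population(pop_size, dim, x0=0.23, mu=4.0):
    """Logistic-map chaotic initialization: x_{n+1} = mu * x_n * (1 - x_n).

    With mu = 4 and a non-degenerate seed in (0, 1), the iterates are
    chaotic and ergodically cover [0, 1], spreading individuals over the
    unit hypercube without the clumping of plain pseudo-random draws.
    """
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)     # one logistic-map step per gene
            ind.append(x)
        pop.append(ind)
    return pop
```

Each gene can then be affinely mapped from [0, 1] onto the actual variable bounds of the problem at hand.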

  19. An Algorithm for Successive Identification of Reflections

    DEFF Research Database (Denmark)

    Hansen, Kim Vejlby; Larsen, Jan

    1994-01-01

    A new algorithm for successive identification of seismic reflections is proposed. Generally, the algorithm can be viewed as a curve matching method for images with specific structure; in the paper, however, the algorithm works on seismic signals assembled to constitute an image in which the inves… One example is based on a synthetic CMP gather, whereas the other is based on a real recorded CMP gather. Initially, the algorithm requires an estimate of the wavelet, which can be obtained by any wavelet estimation method.

  20. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm showed promising results compared to other population-based and iterative meta-heuristic algorithms when tested on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.
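
A bare-bones sketch of the fireworks idea (fitter fireworks get more sparks within smaller explosion radii, worse ones fewer sparks spread wider); the spark-count and amplitude formulas are simplified stand-ins for the canonical FWA ones, and all parameters are illustrative:

```python
import random

def fireworks_minimize(f, dim, bounds, n_fireworks=5, n_sparks=20,
                       iters=150, seed=1):
    """Minimal fireworks-algorithm sketch for minimizing f over a box."""
    rnd = random.Random(seed)
    lo, hi = bounds
    fw = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireworks)]
    for _ in range(iters):
        fit = [f(x) for x in fw]
        fmin, fmax = min(fit), max(fit)
        spread = fmax - fmin + 1e-12
        sparks = [list(x) for x in fw]          # parents survive selection
        for x, fx in zip(fw, fit):
            good = (fmax - fx) / spread         # 1 for the best firework
            cnt = 1 + int(good * n_sparks / n_fireworks)
            # fitter fireworks explode with smaller amplitude
            amp = (hi - lo) * (0.02 + 0.08 * (fx - fmin) / spread)
            for _ in range(cnt):
                s = [min(hi, max(lo, xi + rnd.uniform(-amp, amp))) for xi in x]
                sparks.append(s)
        sparks.sort(key=f)                      # elitist selection (minimize)
        fw = sparks[:n_fireworks]
    return fw[0], f(fw[0])
```

Because the parent fireworks compete with their own sparks, the best objective value is non-increasing from one iteration to the next.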