WorldWideScience

Sample records for vector replacement algorithm

  1. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  2. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These operations are difficult to program and especially hard to realize in hardware, and their computational cost increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, the final orthogonal vector for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is demonstrated through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is shown to be the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the method's effectiveness.
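
    A minimal numpy sketch of the projection steps described above (our illustration, not the authors' code): each endmember spectrum is orthogonalized against the span of the other endmembers via the Gram-Schmidt process, and the unconstrained abundance is the ratio of the pixel's projection onto that orthogonal vector to the endmember's own projection.

```python
import numpy as np

def orthonormal_basis(V):
    """Modified Gram-Schmidt orthonormal basis for the columns of V."""
    basis = []
    for v in V.T:
        w = v.astype(float).copy()
        for q in basis:
            w -= (w @ q) * q
        norm = np.linalg.norm(w)
        if norm > 1e-12:
            basis.append(w / norm)
    return basis

def ovp_abundances(x, E):
    """Unconstrained abundances by orthogonal vector projection (sketch).

    x : (bands,) pixel spectrum; E : (bands, p) endmember matrix.
    """
    bands, p = E.shape
    a = np.empty(p)
    for i in range(p):
        q = E[:, i].astype(float).copy()
        for b in orthonormal_basis(np.delete(E, i, axis=1)):
            q -= (q @ b) * b            # component of e_i orthogonal to the rest
        a[i] = (x @ q) / (E[:, i] @ q)  # ratio of projected lengths
    return a

# Noise-free sanity check: a pixel mixed as 0.7*e1 + 0.3*e2 is recovered exactly.
E = np.array([[1.0, 0.2], [0.1, 1.0], [0.3, 0.4]])
x = E @ np.array([0.7, 0.3])
print(ovp_abundances(x, E))   # -> [0.7 0.3]
```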

  3. Parallel/vector algorithms for the spherical SN transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.; Mattis, R.E.

    1990-01-01

    This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)

  4. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can produce the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved with a Gaussian function, and distance coherence vectors are then extracted from the contours of the original and evolved images. The multiscale distance coherence vector is obtained by a reasonable weight distribution over the distance coherence vectors of the evolved image contours. The resulting descriptor is not only invariant to translation, rotation, and scaling transformations but also robust to noise. Experimental results show that the algorithm achieves higher recall and precision when retrieving images corrupted by noise. PMID:24883416

  5. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
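
    The O(n/√p + log(p)) communication term reflects a two-dimensional block decomposition: each of the p processors owns an (n/√p) × (n/√p) block of the matrix, vector segments are expanded down processor columns, and partial products are folded across processor rows. Below is a serial numpy sketch of that data flow (a generic illustration of the 2-D scheme under our own naming, not the paper's hypercube implementation):

```python
import numpy as np

n, s = 8, 2              # matrix size and grid dimension (p = s*s processors)
A = np.random.rand(n, n)
x = np.random.rand(n)
blk = n // s

y = np.zeros(n)
for i in range(s):                                 # row of the processor grid
    partial = np.zeros(blk)
    for j in range(s):                             # column of the processor grid
        Aij = A[i*blk:(i+1)*blk, j*blk:(j+1)*blk]  # block owned by processor (i, j)
        xj = x[j*blk:(j+1)*blk]                    # vector segment sent down column j
        partial += Aij @ xj                        # local multiply on each processor
    y[i*blk:(i+1)*blk] = partial                   # row-wise reduction of partial sums

assert np.allclose(y, A @ x)
```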

  6. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    Science.gov (United States)

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of each observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification reduces to a comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern allow the recognition result to be reached quickly. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  7. Automated Vectorization of Decision-Based Algorithms

    Science.gov (United States)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. The software reported here advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  8. Researches on Key Algorithms in Analogue Seismogram Records Vectorization

    Directory of Open Access Journals (Sweden)

    Maofa WANG

    2014-09-01

    Historical paper seismograms are very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is an important problem to be solved. In our study, a new tracing algorithm for simulated seismogram curves based on visual field features is presented. We also describe the technological process for vectorizing simulated seismograms, and an analog seismic record vectorization system has been accomplished independently. Using it, we can precisely and speedily vectorize analog seismic records (professionals need to participate interactively).

  9. Support vector machines and evolutionary algorithms for classification single or together?

    CERN Document Server

    Stoean, Catalin

    2014-01-01

    When discussing classification, support vector machines are known to be a capable and efficient technique for learning and predicting with high accuracy within a quick time frame. Yet, their black-box way of doing so makes practical users quite circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this ‘masked hero’ can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a more friendly approach to tag along and offer the much desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities of achieving high accuracy, comprehensibility, and reasonable runtime as well as unconstrained performance.

  10. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  11. Support vector machines optimization based theory, algorithms, and extensions

    CERN Document Server

    Deng, Naiyang; Zhang, Chunhua

    2013-01-01

    Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twin...

  12. ALGORITHM OF SAR SATELLITE ATTITUDE MEASUREMENT USING GPS AIDED BY KINEMATIC VECTOR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, in order to improve the accuracy of the Synthetic Aperture Radar (SAR) satellite attitude obtained from Global Positioning System (GPS) wide-band carrier phase, the SAR satellite attitude kinematic vector and the Kalman filter are introduced. Introducing the state variable function of the GPS attitude determination algorithm for the SAR satellite by means of the kinematic vector, and describing the observation function by the GPS wide-band carrier phase, the paper uses the Kalman filter algorithm to obtain the attitude variables of the SAR satellite. Comparing the simulation results of the Kalman filter algorithm with those of the least-squares algorithm and the explicit solution indicates that the Kalman filter algorithm is the best.
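
    The abstract does not give the filter matrices, so the following is only a generic linear Kalman filter cycle: the kinematic (state-transition) model drives the prediction and the carrier-phase observation drives the update. F, H, Q, and R are placeholders for the paper's specific models.

```python
import numpy as np

def kalman_step(xhat, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (sketch)."""
    # Predict with the kinematic state-transition model F.
    x_pred = F @ xhat
    P_pred = F @ P @ F.T + Q
    # Update with the observation z (e.g. wide-band carrier-phase measurements).
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    xhat_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(xhat)) - K @ H) @ P_pred
    return xhat_new, P_new
```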

  13. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on a support vector machine (SVM) and the gradient evolution (GE) algorithm. The SVM algorithm has been widely used in classification. However, its result is significantly influenced by its parameters. Therefore, this paper proposes an improved SVM algorithm that can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to determine the SVM parameters, with the GE algorithm acting as a global optimizer that searches for the best parameters to be used by the SVM algorithm. The proposed GE-SVM algorithm is verified on several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
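
    The wrapper idea is easy to sketch: a population-based optimizer proposes SVM hyperparameters and cross-validation accuracy serves as the fitness. The mutation below is a plain random perturbation used as a stand-in for the paper's gradient-evolution operators, and the dataset is just a convenient example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

def fitness(log_c, log_g):
    """Cross-validation accuracy of an SVM with the proposed parameters."""
    return cross_val_score(SVC(C=10**log_c, gamma=10**log_g), X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 1], size=(10, 2))       # log10(C), log10(gamma)
for gen in range(20):
    scores = np.array([fitness(c, g) for c, g in pop])
    elite = pop[scores.argsort()[-5:]]                  # keep the better half
    children = elite + rng.normal(0, 0.3, elite.shape)  # stand-in variation step
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(c, g) for c, g in pop])]
print("best C = %.3g, gamma = %.3g" % (10**best[0], 10**best[1]))
```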

  14. Flash-Aware Page Replacement Algorithm

    Directory of Open Access Journals (Sweden)

    Guangxia Xu

    2014-01-01

    Due to the limited main memory of consumer electronics equipped with NAND flash memory as the storage device, an efficient page replacement algorithm called FAPRA is proposed for NAND flash memory in light of its inherent characteristics. FAPRA introduces an efficient victim page selection scheme that takes into account the benefit-to-cost ratio of evicting each victim page candidate, the combined recency and frequency value, and the erase count of the block to which each page belongs. Since a dirty victim page often contains clean data that exist in both the main memory and the NAND flash memory based storage device, FAPRA writes only the dirty data within the victim page back to the NAND flash memory based storage device in order to reduce redundant write operations. We conduct a series of trace-driven simulations, and the experimental results show that our proposed FAPRA algorithm outperforms the state-of-the-art algorithms in terms of page hit ratio, number of write operations, runtime, and degree of wear leveling.
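
    The victim-selection idea lends itself to a small sketch. The scoring below is illustrative (the paper's exact formulation differs): pages with a low combined recency/frequency value are preferred as victims, penalized by the write-back cost of their dirty data and by the wear of their block.

```python
from dataclasses import dataclass

@dataclass
class Page:
    recency_frequency: float   # combined recency and frequency value
    dirty_bytes: int           # only dirty data is written back on eviction
    block_erase_count: int     # wear level of the block the page belongs to

def victim_score(p: Page) -> float:
    cost = 1 + p.dirty_bytes          # eviction cost grows with dirty data
    wear = 1 + p.block_erase_count    # avoid wearing heavily erased blocks
    return p.recency_frequency * cost * wear

def select_victim(pages):
    """Evict the candidate with the lowest benefit-to-cost style score."""
    return min(pages, key=victim_score)
```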

  15. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Directory of Open Access Journals (Sweden)

    Xuyun FU

    2018-01-01

    The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely found in industry, and the replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective, and the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions much better than the result of the traditional method, and that it can support determining the to-be-replaced LLPs when planning the maintenance workscope of an aircraft engine. The algorithm is therefore applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  16. The Short-Term Power Load Forecasting Based on Sperm Whale Algorithm and Wavelet Least Square Support Vector Machine with DWT-IR for Feature Selection

    Directory of Open Access Journals (Sweden)

    Jin-peng Liu

    2017-07-01

    Short-term power load forecasting is an important basis for the operation of an integrated energy system, and the accuracy of load forecasting directly affects the economy of system operation. To improve forecasting accuracy, this paper proposes a load forecasting system based on a wavelet least square support vector machine and the sperm whale algorithm. Firstly, discrete wavelet transform and an inconsistency rate model (DWT-IR) are used to select the optimal features, which aims to reduce the redundancy of input vectors. Secondly, the kernel function of the least square support vector machine (LSSVM) is replaced by a wavelet kernel function to improve the nonlinear mapping ability of LSSVM. Lastly, the parameters of W-LSSVM are optimized by the sperm whale algorithm, and the short-term load forecasting method W-LSSVM-SWA is established. Example verification results show that the proposed model outperforms other alternative methods and has strong effectiveness and feasibility in short-term power load forecasting.

  17. A New Waveform Mosaic Algorithm in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-11-01

    Historical paper seismograms are very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is an important problem to be solved. In this paper, a new waveform mosaic algorithm for the vectorization of paper seismograms is presented. We also describe the technological process for waveform mosaicking, and a waveform mosaic system used to vectorize analog seismic records has been accomplished independently. Using it, we can precisely and speedily accomplish waveform mosaicking when vectorizing analog seismic records.

  18. Global and Local Page Replacement Algorithms on Virtual Memory Systems for Image Processing

    OpenAIRE

    WADA, Ben Tsutom

    1985-01-01

    Three virtual memory systems for image processing, differing from one another in frame allocation algorithms and page replacement algorithms, were examined experimentally with respect to their page-fault characteristics. The hypothesis that global page replacement algorithms are susceptible to thrashing held in the raster-scan experiment, while it did not in another, non-raster-scan experiment. The results of the experiments may also be useful in making parallel image processors more efficient, while they a...

  19. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    Science.gov (United States)

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
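
    One classic "efficient nearest neighbor search" technique applicable to codebook design is partial distance elimination: a distance computation is abandoned as soon as its running sum exceeds the best distance found so far. A generic sketch (the paper evaluates its own combination of techniques):

```python
import numpy as np

def nearest_codevector(x, codebook):
    """Nearest-neighbor search with partial distance elimination."""
    best_i, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:        # cannot beat the current best: abandon early
                break
        else:                      # completed the sum: new best match
            best_i, best_d = i, d
    return best_i, best_d
```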

  20. A Semi-Vectorization Algorithm to Synthesis of Gravitational Anomaly Quantities on the Earth

    Science.gov (United States)

    Abdollahzadeh, M.; Eshagh, M.; Najafi Alamdari, M.

    2009-04-01

    The Earth's gravitational potential can be expressed by the well-known spherical harmonic expansion. The computational time of summing up this expansion is an important practical issue, which can be reduced by an efficient numerical algorithm. This paper proposes such a method for block-wise synthesis of the anomaly quantities on the Earth's surface using vectorization. Full vectorization would mean transforming the summations into simple matrix and vector products, which is not practical for matrices with large dimensions. Here a semi-vectorization algorithm is proposed that avoids working with large vectors and matrices. It speeds up the computations by using one loop for the summation, either on degrees or on orders. The former is a good option for synthesizing the anomaly quantities on the Earth's surface considering a digital elevation model (DEM). This approach is more efficient than the two-step method, which computes the quantities on the reference ellipsoid and continues them upward to the Earth's surface. The algorithm has been coded in MATLAB; it synthesizes a global 5′ × 5′ grid (corresponding to about 9 million points) of gravity anomaly or geoid height using a geopotential model to degree 360 in 10000 seconds on an ordinary computer with 2 GB of RAM.
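
    The structure of the trick can be seen in a small sketch. For each order m, the longitude dependence of the expansion is a rank-one outer product with cos(mλ) and sin(mλ), so a single loop suffices and everything inside is matrix-vector work. The coefficients below are random stand-ins, not a real geopotential model, and normalization of the Legendre functions is ignored.

```python
import numpy as np
from scipy.special import lpmv

nmax = 30
C = np.random.rand(nmax + 1, nmax + 1) * 1e-6   # placeholder coefficients
S = np.random.rand(nmax + 1, nmax + 1) * 1e-6
lat = np.radians(np.arange(-89.5, 90.0, 1.0))   # grid latitudes
lon = np.radians(np.arange(0.5, 360.0, 1.0))    # grid longitudes
t = np.sin(lat)

f = np.zeros((lat.size, lon.size))
for m in range(nmax + 1):                           # the single loop, over orders
    degrees = np.arange(m, nmax + 1)
    P = np.stack([lpmv(m, n, t) for n in degrees])  # Legendre values, (deg, lat)
    a = C[degrees, m] @ P                           # vectorized sum over degrees
    b = S[degrees, m] @ P
    f += np.outer(a, np.cos(m * lon)) + np.outer(b, np.sin(m * lon))
```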

  1. Fast vector quantization using a Bat algorithm for image compression

    Directory of Open Access Journals (Sweden)

    Chiranjeevi Karri

    2016-06-01

    Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ), generates a locally optimal codebook, which results in a lower PSNR value. The performance of VQ depends on an appropriate codebook, so researchers have proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and the firefly algorithm (FA) generate efficient codebooks but suffer, respectively, from unstable convergence when particle velocity is high and from the non-availability of brighter fireflies in the search space. In this paper, we propose a new algorithm called BA-LBG, which applies the bat algorithm to the initial solution of LBG. It produces an efficient codebook with less computational time and yields very good PSNR thanks to its automatic zooming feature based on the adjustable pulse emission rate and loudness of bats. From the results, we observe that BA-LBG has high PSNR compared to LBG, PSO-LBG, Quantum PSO-LBG, HBMO-LBG, and FA-LBG, and its average convergence speed is 1.841 times faster than HBMO-LBG and FA-LBG, with no significant difference from PSO.
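
    For reference, the LBG iteration that BA-LBG (and PSO-LBG, FA-LBG, etc.) wraps alternates nearest-neighbor assignment with centroid updates until the distortion stalls; the metaheuristic's job is to supply or perturb the starting codebook. A compact numpy sketch:

```python
import numpy as np

def lbg(training, codebook, iters=50, eps=1e-6):
    """Plain LBG refinement of a codebook over a training set (sketch)."""
    prev = np.inf
    for _ in range(iters):
        # Assign each training vector to its nearest codevector.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        distortion = d[np.arange(len(training)), idx].mean()
        if prev - distortion < eps * prev:       # converged: distortion stalled
            break
        prev = distortion
        # Move each codevector to the centroid of its assigned cell.
        for k in range(len(codebook)):
            cell = training[idx == k]
            if len(cell):
                codebook[k] = cell.mean(0)
    return codebook, distortion

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))
cb, dist = lbg(data, data[rng.choice(1000, 8, replace=False)].copy())
```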

  2. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions, and that the availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.

  3. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task

  4. Parallel-Vector Algorithm For Rapid Structural Analysis

    Science.gov (United States)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  5. A reduce and replace strategy for suppressing vector-borne diseases: insights from a stochastic, spatial model.

    Directory of Open Access Journals (Sweden)

    Kenichi W Okamoto

    Two basic strategies have been proposed for using transgenic Aedes aegypti mosquitoes to decrease dengue virus transmission: population reduction and population replacement. Here we model releases of a strain of Ae. aegypti carrying both a gene causing conditional adult female mortality and a gene blocking virus transmission into a wild population to assess whether such releases could reduce the number of competent vectors. We find this "reduce and replace" strategy can decrease the frequency of competent vectors below 50% two years after releases end. Therefore, this combined approach appears preferable to releasing a strain carrying only a female-killing gene, which is likely to merely result in temporary population suppression. However, the fixation of anti-pathogen genes in the population is unlikely. Genetic drift at small population sizes and the spatially heterogeneous nature of the population recovery after releases end prevent complete replacement of the competent vector population. Furthermore, releasing more individuals can be counter-productive in the face of immigration by wild-type mosquitoes, as greater population reduction amplifies the impact wild-type migrants have on the long-term frequency of the anti-pathogen gene. We expect the results presented here to give pause to expectations for driving an anti-pathogen construct to fixation by relying on releasing individuals carrying this two-gene construct. Nevertheless, in some dengue-endemic environments, a spatially heterogeneous decrease in competent vectors may still facilitate decreasing disease incidence.

  6. Face recognition algorithm using extended vector quantization histogram features.

    Science.gov (United States)

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

  7. Solution of single linear tridiagonal systems and vectorization of the ICCG algorithm on the Cray 1

    International Nuclear Information System (INIS)

    Kershaw, D.S.

    1981-01-01

    The numerical algorithms used to solve the physics equations in codes which model laser fusion are examined. It is found that a large number of subroutines require the solution of tridiagonal linear systems of equations. One-dimensional radiation transport, thermal and suprathermal electron transport, ion thermal conduction, and charged particle and neutron transport all require the solution of tridiagonal systems of equations. The standard algorithm that has been used in the past on CDC 7600's will not vectorize and so cannot take advantage of the large speed increases possible on the Cray-1 through vectorization. There is, however, an alternate algorithm for solving tridiagonal systems, called cyclic reduction, which allows for vectorization and which is optimal for the Cray-1. Software based on this algorithm is now being used in LASNEX to solve tridiagonal linear systems in the subroutines mentioned above. The new algorithm runs as much as five times faster than the standard algorithm on the Cray-1. The ICCG method is being used to solve the diffusion equation with a nine-point coupling scheme on the CDC 7600. In going from the CDC 7600 to the Cray-1, a large part of the algorithm consists of solving tridiagonal linear systems on each L line of the Lagrangian mesh in a manner which is not vectorizable. An alternate ICCG algorithm for the Cray-1 was therefore developed which utilizes a block form of the cyclic reduction algorithm. This new algorithm allows full vectorization and runs as much as five times faster than the old algorithm on the Cray-1. It is now being used in Cray LASNEX to solve the two-dimensional diffusion equation in all the physics subroutines mentioned above
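
    Cyclic reduction itself is short enough to sketch. Each forward level eliminates every other unknown with a fully independent (hence vectorizable) sweep, halving the system; back substitution then fills in the unknowns level by level. A numpy version for systems of size 2^k - 1 (a sketch of the standard algorithm, not the LASNEX code):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction; requires n = 2**k - 1.

    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side
    (a[0] and c[-1] are unused by convention).
    """
    a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
    a[0] = 0.0
    c[-1] = 0.0
    n = len(d)
    x = np.zeros(n)
    step = 1
    while 2 * step <= n:                       # forward elimination levels
        i = np.arange(2 * step - 1, n, 2 * step)
        lo, hi = i - step, i + step
        al = -a[i] / b[lo]
        be = -c[i] / b[hi]
        b[i] += al * c[lo] + be * a[hi]
        d[i] += al * d[lo] + be * d[hi]
        a[i] = al * a[lo]                      # new coupling at distance 2*step
        c[i] = be * c[hi]
        step *= 2
    while step >= 1:                           # back substitution levels
        i = np.arange(step - 1, n, 2 * step)
        lo, hi = i - step, i + step
        num = d[i].copy()
        num -= np.where(lo >= 0, a[i] * x[np.clip(lo, 0, n - 1)], 0.0)
        num -= np.where(hi < n, c[i] * x[np.clip(hi, 0, n - 1)], 0.0)
        x[i] = num / b[i]
        step //= 2
    return x

# Check against a dense solve on a diagonally dominant system.
n = 15
rng = np.random.default_rng(1)
a, c, d = rng.random(n), rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ cyclic_reduction(a, b, c, d), d)
```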

  8. Genetic stability of gene targeted immunoglobulin loci. I. Heavy chain isotype exchange induced by a universal gene replacement vector.

    Science.gov (United States)

    Kardinal, C; Selmayr, M; Mocikat, R

    1996-11-01

    Gene targeting at the immunoglobulin loci of B cells is an efficient tool for studying immunoglobulin expression or generating chimeric antibodies. We have shown that vector integration induced by human immunoglobulin G1 (IgG1) insertion vectors results in subsequent vector excision mediated by the duplicated target sequence, whereas replacement events which could be induced by the same constructs remain stable. We could demonstrate that the distribution of the vector homology strongly influences the genetic stability obtained. To this end we developed a novel type of a heavy chain replacement vector making use of the heavy chain class switch recombination sequence. Despite the presence of a two-sided homology this construct is universally applicable irrespective of the constant gene region utilized by the B cell. In comparison to an integration vector the frequency of stable incorporation was strongly increased, but we still observed vector excision, although at a markedly reduced rate. The latter events even occurred with circular constructs. Linearization of the construct at various sites and the comparison with an integration vector that carries the identical homology sequence, but differs in the distribution of homology, revealed the following features of homologous recombination of immunoglobulin genes: (i) the integration frequency is only determined by the length of the homology flank where the cross-over takes place; (ii) a 5' flank that does not meet the minimum requirement of homology length cannot be complemented by a sufficient 3' flank; (iii) free vector ends play a role for integration as well as for replacement targeting; (iv) truncating recombination events are suppressed in the presence of two flanks. Furthermore, we show that the switch region that was used as 3' flank is non-functional in an inverted orientation.

  9. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    Directory of Open Access Journals (Sweden)

    Zhongyi Hu

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model not only yields more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperforms the hybrid algorithms in the related existing literature.

  10. SOLAR FLARE PREDICTION USING SDO/HMI VECTOR MAGNETIC FIELD DATA WITH A MACHINE-LEARNING ALGORITHM

    International Nuclear Information System (INIS)

    Bobra, M. G.; Couvidat, S.

    2015-01-01

    We attempt to forecast M- and X-class solar flares using a machine-learning algorithm, called support vector machine (SVM), and four years of data from the Solar Dynamics Observatory's Helioseismic and Magnetic Imager, the first instrument to continuously map the full-disk photospheric vector magnetic field from space. Most flare forecasting efforts described in the literature use either line-of-sight magnetograms or a relatively small number of ground-based vector magnetograms. This is the first time a large data set of vector magnetograms has been used to forecast solar flares. We build a catalog of flaring and non-flaring active regions sampled from a database of 2071 active regions, comprised of 1.5 million active region patches of vector magnetic field data, and characterize each active region by 25 parameters. We then train and test the machine-learning algorithm and estimate its performance using forecast verification metrics, with an emphasis on the true skill statistic (TSS). We obtain relatively high TSS scores and overall predictive abilities. We surmise that this is partly due to fine-tuning the SVM for this purpose and also to an advantageous set of features that can only be calculated from vector magnetic field data. We also apply a feature selection algorithm to determine which of our 25 features are useful for discriminating between flaring and non-flaring active regions and conclude that only a handful are needed for good predictive abilities
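
    The TSS emphasized above is simple to compute from a contingency table: it is the hit rate minus the false-alarm rate, which makes it insensitive to the strong class imbalance between flaring and non-flaring regions. A small sketch:

```python
import numpy as np

def true_skill_statistic(y_true, y_pred):
    """TSS = TP/(TP+FN) - FP/(FP+TN), for binary labels 1 (flare) / 0 (quiet)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn) - fp / (fp + tn)

print(true_skill_statistic([1, 1, 0, 0, 0, 1], [1, 0, 0, 0, 1, 1]))
```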

  11. A Modified Method Combined with a Support Vector Machine and Bayesian Algorithms in Biological Information

    Directory of Open Access Journals (Sweden)

    Wen-Gang Zhou

    2015-06-01

    With the deepening of research in genomics and proteomics, the number of new protein sequences has expanded rapidly. Given the obvious shortcomings of traditional experimental methods, namely high cost and low efficiency, computational methods for protein localization prediction have attracted a lot of attention due to their convenience and low cost. Among machine learning techniques, neural networks and the support vector machine (SVM) are often used as learning tools. Due to its complete theoretical framework, SVM has been widely applied. In this paper, we improve the existing support vector machine algorithm by combining it with Bayesian algorithms, and a new improved algorithm is developed. The proposed algorithm improves calculation efficiency and eliminates defects of the original algorithm. Verification shows the method to be valid. At the same time, it reduces calculation time and improves prediction efficiency.

  12. A Semisupervised Support Vector Machines Algorithm for BCI Systems

    Science.gov (United States)

    Qin, Jianzhao; Li, Yuanqing; Sun, Wei

    2007-01-01

    As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141

  13. Kochen-Specker vectors

    International Nuclear Information System (INIS)

    Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

    2005-01-01

    We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space, H_n, n ≥ 3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found

  14. Efficient four fragment cloning for the construction of vectors for targeted gene replacement in filamentous fungi

    DEFF Research Database (Denmark)

    Frandsen, Rasmus John Normand; Andersson, Jens A.; Kristensen, Matilde Bylov

    2008-01-01

    Background: The rapid increase in whole genome fungal sequence information allows large scale functional analyses of target genes. Efficient transformation methods to obtain site-directed gene replacement, targeted over-expression by promoter replacement, in-frame epitope tagging or fusion of coding sequences with fluorescent markers such as GFP are essential for this process. Construction of vectors for these experiments depends on the directional cloning of two homologous recombination sequences on each side of a selection marker gene. Results: Here, we present a USER Friendly cloning based...

  15. Evaluation of Chinese Calligraphy by Using DBSC Vectorization and ICP Algorithm

    Directory of Open Access Journals (Sweden)

    Mengdi Wang

    2016-01-01

    Chinese calligraphy is a charismatic ancient art form with high artistic value in Chinese culture, and virtual calligraphy learning systems have been a research hotspot in recent years. In such systems, a mechanism for judging the user's practice results is quite important. A user's handwritten character is often nonstandard: its size and position are not fixed, and the whole character may even be askew, all of which makes evaluation difficult. In this paper, we propose an approach using DBSC (disk B-spline curve) vectorization and the ICP (iterative closest point) algorithm, which can not only evaluate a calligraphic character without knowing what it is but also deal with the above problems commendably. Firstly, we find promising candidate characters in the database according to angular difference relations as quickly as possible. Then we check these vectorized candidates with the ICP algorithm based on the skeleton, thereby finding the best matching character. Finally, a comprehensive evaluation involving global (whole-character) and local (stroke) similarities is implemented, and a final composite evaluation score is worked out.

  16. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit; Pfluger, Dirk; Murarasu, Alin; Jacob, Riko

    2012-01-01

    …performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations…

  17. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    Energy Technology Data Exchange (ETDEWEB)

    He, Hongxing; Fang, Hengrui [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States); Miller, Mitchell D. [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Phillips, George N. Jr [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Department of Biochemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Su, Wu-Pei, E-mail: wpsu@uh.edu [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States)

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  18. Analysis of human protein replacement stable cell lines established using snoMEN-PR vector.

    Directory of Open Access Journals (Sweden)

    Motoharu Ono

    Full Text Available The study of the function of many human proteins is often hampered by technical limitations, such as cytotoxicity and phenotypes that result from overexpression of the protein of interest together with the endogenous version. Here we present the snoMEN (snoRNA Modulator of gene ExpressioN vector technology for generating stable cell lines where expression of the endogenous protein can be reduced and replaced by an exogenous protein, such as a fluorescent protein (FP-tagged version. SnoMEN are snoRNAs engineered to contain complementary sequences that can promote knock-down of targeted RNAs. We have established and characterised two such partial protein replacement human cell lines (snoMEN-PR. Quantitative mass spectrometry was used to analyse the specificity of knock-down and replacement at the protein level and also showed an increased pull-down efficiency of protein complexes containing exogenous, tagged proteins in the protein replacement cell lines, as compared with conventional co-expression strategies. The snoMEN approach facilitates the study of mammalian proteins, particularly those that have so far been difficult to investigate by exogenous expression and has wide applications in basic and applied gene-expression research.

  19. Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem

    International Nuclear Information System (INIS)

    Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.

    2013-01-01

    Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER GPU code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)

  20. Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2013-07-01

    Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER_GPU code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)

  1. PSCAD modeling of a two-level space vector pulse width modulation algorithm for power electronics education

    Directory of Open Access Journals (Sweden)

    Ahmet Mete Vural

    2016-09-01

    This paper presents the design details of a two-level space vector pulse width modulation algorithm in PSCAD that is able to generate pulses for three-phase two-level DC/AC converters with two different switching patterns. The presented FORTRAN code is generic and can be easily modified to implement many other kinds of space vector modulation strategies. The code is also editable for hardware programming. The new component is tested and verified by comparing its output, six gating signals, with those of a similar component in the MATLAB library. Moreover, the component is used to generate digital signals for closed-loop control of a STATCOM for reactive power compensation in PSCAD. This add-on can be an effective tool to give students a better understanding of the space vector modulation algorithm for different control tasks in the power electronics area, and can motivate them to learn.
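
    The core of a two-level space vector PWM step is compact: locate the 60° sector of the reference vector, then split the switching period between the two adjacent active vectors and the zero vectors. A generic textbook-style sketch (not the paper's FORTRAN component):

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Sector and dwell times for a two-level SVPWM reference vector.

    v_ref: reference magnitude, theta: angle in rad, v_dc: DC-link voltage,
    t_s: switching period.
    """
    sector = int(theta // (math.pi / 3)) % 6 + 1   # sectors 1..6
    th = theta % (math.pi / 3)                     # angle within the sector
    k = math.sqrt(3) * v_ref / v_dc * t_s
    t1 = k * math.sin(math.pi / 3 - th)            # first adjacent active vector
    t2 = k * math.sin(th)                          # second adjacent active vector
    t0 = t_s - t1 - t2                             # zero vectors fill the rest
    return sector, t1, t2, t0

print(svpwm_dwell_times(v_ref=100.0, theta=math.radians(40), v_dc=400.0, t_s=1e-4))
```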

  2. A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-02-01

    Historical paper seismograms are very important information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms; it transforms an original scanned image into digital waveform data. Accurately tracing out all the key points of each curve in a seismogram is the foundation of this vectorization. In this paper, we present a new curve tracing algorithm based on local features, applied to the automatic extraction of earthquake waveforms in paper seismograms.

  3. On efficient randomized algorithms for finding the PageRank vector

    Science.gov (United States)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε, where ε ≫ n^(-1). Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in R^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
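
    The first (Markov chain Monte Carlo) idea can be sketched directly: estimate the PageRank vector from the empirical distribution of states visited by a long random walk over P, avoiding matrix-vector products entirely. A toy-sized illustration:

```python
import numpy as np

def mcmc_pagerank(P, steps=100_000, seed=0):
    """Empirical stationary distribution of the chain P via one long walk."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    visits = np.zeros(n)
    state = rng.integers(n)
    for _ in range(steps):
        state = rng.choice(n, p=P[state])   # one transition of the walk
        visits[state] += 1
    return visits / steps

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(mcmc_pagerank(P))   # close to the exact stationary vector of P
```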

  4. A Novel Integrated Algorithm for Wind Vector Retrieval from Conically Scanning Scatterometers

    Directory of Open Access Journals (Sweden)

    Xuetong Xie

    2013-11-01

    Due to the lower efficiency and the larger wind direction error of traditional algorithms, a novel integrated wind retrieval algorithm is proposed for conically scanning scatterometers. The proposed algorithm has the dual advantages of lower computational cost and higher wind direction retrieval accuracy, obtained by integrating the wind speed standard deviation (WSSD) algorithm and the wind direction interval retrieval (DIR) algorithm. It adopts the wind speed standard deviation as a criterion for searching possible wind vector solutions and retrieves a potential wind direction interval based on the change rate of the wind speed standard deviation. Moreover, a modified three-step ambiguity removal method is designed to let more wind directions be selected in the process of nudging and filtering. The performance of the new algorithm is illustrated by retrieval experiments using 300 orbits of SeaWinds/QuikSCAT L2A data (backscatter coefficients at 25 km resolution) and co-located buoy data. Experimental results indicate that the new algorithm clearly enhances the wind direction retrieval accuracy, especially in the nadir region. In comparison with the SeaWinds L2B Version 2 25 km selected wind product (retrieved wind fields), an improvement of 5.1° in wind direction retrieval is achieved by the new algorithm for that region.

  5. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    Directory of Open Access Journals (Sweden)

    B Vinoth Kumar

    2017-07-01

    The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm, and its design is therefore viewed as an optimization problem. In the literature, it has been found that Classical Differential Evolution (CDE) is a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of optimization process, accuracy, convergence speed, and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, which is confirmed using a statistical hypothesis test (t-test).
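
    The contrast between the two schemes fits in a short sketch: classical DE builds one trial vector per target and keeps it if it improves, while the variant studied here builds several trials per target and keeps the best. The objective below is a toy stand-in for the JPEG rate/quality cost.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda v: np.sum(v ** 2)               # toy objective (stand-in cost)
NP, D, F, CR, TRIALS = 20, 8, 0.5, 0.9, 4  # TRIALS = 1 recovers classical DE

pop = rng.uniform(-5, 5, (NP, D))
for gen in range(100):
    for i in range(NP):
        best_trial, best_val = None, f(pop[i])
        for _ in range(TRIALS):
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3,
                                    replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True                # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            val = f(trial)
            if val < best_val:                           # best of the trials
                best_trial, best_val = trial, val
        if best_trial is not None:
            pop[i] = best_trial                          # greedy selection
print(min(f(v) for v in pop))
```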

  6. Pair-ν-SVR: A Novel and Efficient Pairing ν-Support Vector Regression Algorithm.

    Science.gov (United States)

    Hao, Pei-Yi

    This paper proposes a novel and efficient pairing ν-support vector regression (pair-ν-SVR) algorithm that successfully combines the superior advantages of twin support vector regression (TSVR) and classical ν-SVR algorithms. In the spirit of TSVR, the proposed pair-ν-SVR solves two quadratic programming problems (QPPs) of smaller size rather than a single larger QPP, and thus has faster learning speed than classical ν-SVR. The significant advantage of our pair-ν-SVR over TSVR is the improvement in prediction speed and generalization ability achieved by introducing the concepts of the insensitive zone and a regularization term that embodies the essence of statistical learning theory. Moreover, pair-ν-SVR has the additional advantage of using the parameter ν for controlling the bounds on the fractions of SVs and errors. Furthermore, the upper and lower bound functions of the regression model estimated by pair-ν-SVR capture well the characteristics of data distributions, thus facilitating automatic estimation of the conditional mean and predictive variance simultaneously. This may be useful in many cases, especially when the noise is heteroscedastic and depends strongly on the input values. The experimental results validate the superiority of our pair-ν-SVR in both training/prediction speed and generalization ability.

  7. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  8. A median filter approach for correcting errors in a vector field

    Science.gov (United States)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters, which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
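
    As a concrete illustration, one common median-filter variant for vector fields (a normalized median test, not necessarily the exact scheme used by Schultz) flags a vector as erroneous when it deviates too far from the median of its neighborhood and replaces it by that median. The function below is a hypothetical sketch; u and v are 2-D arrays of vector components, and the threshold tol is illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_correct(u, v, win=3, tol=2.0):
    """Detect and replace outlier vectors using local medians."""
    u_med = median_filter(u, size=win)
    v_med = median_filter(v, size=win)
    # residual magnitude relative to the local median vector
    resid = np.hypot(u - u_med, v - v_med)
    mad = median_filter(resid, size=win) + 1e-9  # avoid division by zero
    bad = resid > tol * mad
    u_out, v_out = u.copy(), v.copy()
    u_out[bad], v_out[bad] = u_med[bad], v_med[bad]
    return u_out, v_out, bad
```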

  9. Sub-Circuit Selection and Replacement Algorithms Modeled as Term Rewriting Systems

    Science.gov (United States)

    2008-12-16

    Air Force Institute of Technology report AFIT/GCO/ENG/09-02.

  10. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    The image block matching algorithm based on motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the information of relative motion among frames of dynamic image sequences by digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. These matching parameters simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks, so better matching information can be obtained after performing the correlative operation in the oblique direction. An iterative weighted least squares method is used to eliminate the error of block matching; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimation of the shaking image can be obtained by weighted least squares from the estimates of blocks chosen evenly from the image. The shaking image can then be stabilized with the center of rotation and the global motion estimation. The algorithm can also run in real time by using simulated annealing in the block matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the input video source. Experimental results show that the algorithm can be performed in the real-time processing system with accurate matching precision.

  11. Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model

    International Nuclear Information System (INIS)

    Hong, W.-C.

    2009-01-01

    Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Due to various influences, electric load forecasting reveals highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capability, has been successfully employed to solve nonlinear regression and time series problems. However, there is still a lack of systematic approaches to determine an appropriate parameter combination for an SVR model. This investigation elucidates the feasibility of applying a chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms the other two models applying other algorithms, the genetic algorithm (GA) and the simulated annealing algorithm (SA). Finally, it also provides a theoretical exploration of the electric load forecasting support system (ELFSS).
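
    To make the parameter-search idea concrete, here is a minimal sketch of a chaotic PSO over the three SVR hyperparameters (C, gamma, epsilon), where a logistic map replaces the uniform random draws of standard PSO. This is not Hong's exact CPSO; the search ranges, inertia and acceleration constants, and the use of scikit-learn's SVR are all assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def logistic_map(x):
    """Chaotic sequence on (0, 1) used in place of uniform noise."""
    return 4.0 * x * (1.0 - x)

def cpso_svr(X, y, n_particles=10, iters=30, seed=0):
    """Search (log10 C, log10 gamma, log10 epsilon) for an RBF SVR."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-3.0, -5.0, -4.0]), np.array([3.0, 1.0, 0.0])
    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    chaos = rng.uniform(0.1, 0.9, (n_particles, 3))

    def score(p):
        c, g, e = 10.0 ** p
        model = SVR(kernel="rbf", C=c, gamma=g, epsilon=e)
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    pbest = pos.copy()
    pbest_val = np.array([score(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        chaos = logistic_map(chaos)
        vel = (0.7 * vel + 1.5 * chaos * (pbest - pos)
               + 1.5 * logistic_map(chaos) * (gbest - pos))
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([score(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return 10.0 ** gbest  # (C, gamma, epsilon)
```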

  12. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

    Science.gov (United States)

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than in the low-dose range. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.

  13. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  14. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Full Text Available Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational cost. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. The algorithm has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors while producing close performance in terms of motion compensation errors.
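
    For reference, the baseline against which fast patterns such as diamond search and the proposed line-square search are measured is exhaustive full search, which evaluates every displacement in the window: (2s+1)^2 = 225 candidate points for a search range s = 7, versus the 9 best-case points quoted above. The sketch below is a hypothetical minimal implementation of that baseline, not of the paper's algorithm.

```python
import numpy as np

def full_search(prev, cur, by, bx, bsize=16, srange=7):
    """Exhaustive block matching: return the motion vector (dy, dx)
    minimizing the sum of absolute differences (SAD) for the block
    of cur anchored at (by, bx), searched within prev."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if (y < 0 or x < 0 or
                    y + bsize > prev.shape[0] or x + bsize > prev.shape[1]):
                continue  # candidate block falls outside the frame
            cand = prev[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

    Fast algorithms keep the same SAD criterion but visit only a structured subset of these candidate points.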

  15. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

    Science.gov (United States)

    Bai, Chen; Han, Dongjuan

    2018-04-01

    MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but the calculation of its steering vector is more complicated than that of a rectangular array. In this paper, a quaternion algorithm is used to reduce the dimension of the steering vector and make the calculation easier.

  16. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Xigao Shao

    2013-01-01

    Full Text Available Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, we can select a single direction to achieve convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method does not differ greatly from that of existing methods, but the training speed is faster.

  17. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    Full Text Available This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following some probability rule. A generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including minimization problems with continuous design variables and with integer variables, from the literature. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants in finding good solutions.
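
    The mechanics are easy to reproduce. Below is a hypothetical minimal sketch of such an elite-guided harmony search: with probability hmcr a component is copied from the best or second-best harmony (optionally pitch-adjusted), otherwise it is drawn at random, and the new harmony replaces the worst member only if it improves on it. All constants are illustrative, not the paper's settings.

```python
import numpy as np

def elite_harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3,
                         iters=5000, seed=0):
    """Harmony search where new harmonies borrow components from the
    best and second-best solutions; the worst member of the harmony
    memory is replaced only when the new harmony improves on it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    hm = rng.uniform(lo, hi, (hms, dim))
    fit = np.array([f(x) for x in hm])
    for _ in range(iters):
        order = np.argsort(fit)
        best, second = hm[order[0]], hm[order[1]]
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                # elite decision making: copy from one of the two leaders
                new[d] = best[d] if rng.random() < 0.5 else second[d]
                if rng.random() < par:            # pitch adjustment
                    new[d] += (rng.random() - 0.5) * 0.1 * (hi[d] - lo[d])
            else:                                 # random re-initialization
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = fit.argmax()
        if fn < fit[worst]:
            hm[worst], fit[worst] = new, fn
    return hm[fit.argmin()], fit.min()
```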

  18. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images, and extends it to the Address Vector Quantization method. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. The codeword from the codebook which best matches each input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only the index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address into a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network. During the encoding process the correlation of addresses is exploited, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems of Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 of the bit rate. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix that selects the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
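
    The encode/replace/lookup cycle at the heart of VQ is compact enough to show directly; a minimal sketch, assuming a codebook already trained by K-means or the generalized Lloyd algorithm:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Replace each image vector by the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)          # indices are all that is transmitted

def vq_decode(indices, codebook):
    """Reconstruction is a table lookup: index -> representative vector."""
    return codebook[indices]
```

    Address VQ adds a second layer on top of this: the stream of indices itself is coded by exploiting the correlation between the addresses of neighboring blocks.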

  19. Screw Remaining Life Prediction Based on Quantum Genetic Algorithm and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Xiaochen Zhang

    2017-01-01

    Full Text Available To predict the remaining life of a ball screw, a screw remaining life prediction method based on the quantum genetic algorithm (QGA) and support vector machine (SVM) is proposed. A screw accelerated test bench is introduced, and accelerometers are installed to monitor the performance degradation of the ball screw. Combined with wavelet packet decomposition and isometric mapping (Isomap), the sensitive feature vectors are obtained and stored in a database. The sensitive feature vectors are then randomly chosen from the database to constitute training samples and testing samples. The optimal kernel function parameter and penalty factor of the SVM are searched with the QGA. Finally, the training samples are used to train the optimized SVM while the testing samples are adopted to test its prediction accuracy, so that the screw remaining life prediction model can be obtained. The experimental results show that the model can effectively predict screw remaining life.

  20. Replacement method and enhanced replacement method versus the genetic algorithm approach for the selection of molecular descriptors in QSPR/QSAR theories.

    Science.gov (United States)

    Mercader, Andrew G; Duchowicz, Pablo R; Fernández, Francisco M; Castro, Eduardo A

    2010-09-27

    We compare three methods for the selection of optimal subsets of molecular descriptors from a much larger pool of such regression variables: our enhanced replacement method (ERM), the simpler replacement method (RM), and the genetic algorithm (GA). These methods avoid the impracticable full search for optimal variables in large sets of molecular descriptors. Present results for 10 different experimental databases suggest that the ERM is clearly preferable to the GA, which is in turn slightly better than the RM. However, the latter approach requires the smallest number of linear regressions and, consequently, the lowest computation time.
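
    The replacement method itself is easy to state as code. Below is a minimal sketch under the usual linear-regression setting; the function name and stopping rule are illustrative, and the ERM adds further moves to escape the local minima this greedy loop can get stuck in.

```python
import numpy as np

def rm_select(X, y, k, seed=0):
    """Replacement method: start from a random subset of k descriptors
    and repeatedly swap in the single variable that most reduces the
    least-squares residual, until no swap improves the model."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def sse(cols):
        A = np.column_stack([X[:, cols], np.ones(n)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        return float(r @ r)

    subset = list(rng.choice(p, size=k, replace=False))
    best = sse(subset)
    improved = True
    while improved:
        improved = False
        for slot in range(k):            # try replacing each position
            for cand in range(p):
                if cand in subset:
                    continue
                trial = subset.copy()
                trial[slot] = cand
                e = sse(trial)
                if e < best:
                    subset, best = trial, e
                    improved = True
    return subset, best
```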

  1. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on the RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm was implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is superior to that of the DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controllers used in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.

  2. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

    Science.gov (United States)

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than in the low-dose range. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385

  3. Vectorization of a penalty function algorithm for well scheduling

    Science.gov (United States)

    Absar, I.

    1984-01-01

    In petroleum engineering, the oil production profile of a reservoir can be simulated using a finite gridded model. This profile is affected by the number and choice of wells, which in turn result from various production limits and constraints including, for example, the economic minimum well spacing, the number of drilling rigs available, and the time required to drill and complete a well. After a well is available it may be shut in because of excessive water or gas production. In order to optimize field performance, a penalty function algorithm was developed for scheduling wells. For an example with some 343 wells and 15 different constraints, the scheduling routine vectorized for the CYBER 205 averaged 560 times faster performance than the scalar version.

  4. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm, an adaptation of the conventional point cyclic reduction algorithm, is discussed in detail, and its performance on a three-parameter model problem is illustrated. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT) algorithm. CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and as a consequence is implicitly vectorizable.

  5. Vectorization in quantum chemistry

    International Nuclear Information System (INIS)

    Saunders, V.R.

    1987-01-01

    It is argued that the optimal vectorization algorithm for many steps (and sub-steps) in a typical ab initio calculation of molecular electronic structure depends quite strongly on the target vector machine. Details such as the availability (or lack) of a given vector construct in the hardware, vector startup times, and asymptotic rates must all be considered when selecting the optimal algorithm. Illustrations are drawn from gaussian integral evaluation, Fock matrix construction, 4-index transformation of molecular integrals, direct-CI methods, and the matrix multiply operation. A cross comparison of practical implementations on the CDC Cyber 205, Cray-1S and Cray X-MP machines is presented. To achieve portability while remaining optimal on a wide range of machines it is necessary to code all available algorithms in a machine-independent manner, and to select the appropriate algorithm using a procedure based on machine-dependent parameters. Most such parameters concern the timing of certain vector loop kernels, which can usually be derived from a 'bench-marking' routine executed prior to the calculation proper.

  6. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  7. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.

  8. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit

    2012-06-01

    The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have been few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtimes. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup increases further, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

  9. Effective data compaction algorithm for vector scan EB writing system

    Science.gov (United States)

    Ueki, Shinichi; Ashida, Isao; Kawahira, Hiroichi

    2001-01-01

    We have developed a new mask data compaction algorithm dedicated to vector scan electron beam (EB) writing systems for the 0.13 μm device generation. Large mask data size has become a significant problem in mask data processing, for which data compaction is an important technique. In our new mask data compaction, 'array' representation and 'cell' representation are used; the mask data format for the vector scan EB writing system supports both. The array representation has a pitch and a number of repetitions in both the X and Y directions. The cell representation has a definition of a figure group and its references. The new data compaction method has the following three steps: (1) search for arrays of figures by selecting array pitches so that many figures are included; (2) find identical arrays that have the same repetition pitch and number of figures; (3) search for cells of figures, where the figures in each cell share an identical positional relationship. With this new method, the mask data of a 4M-DRAM block gate layer with peripheral circuits, 202 Mbytes without compaction, was compacted to 6.7 Mbytes in 20 minutes on a 500 MHz PC.

  10. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive three-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the 7 spatio-temporally split subbands is partitioned by a quad-tree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1-4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 Kbps). (author). 13 refs., 13 figs., 4 tabs.

  11. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    Science.gov (United States)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed so that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  12. Cost Forecasting of Substation Projects Based on Cuckoo Search Algorithm and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2018-01-01

    Full Text Available Accurate prediction of substation project cost is helpful for improving investment management and sustainability, and is directly related to the economy of a substation project. Ensemble Empirical Mode Decomposition (EEMD) can decompose variables with non-stationary sequence signals into components with significant regularity and periodicity, which helps improve the accuracy of a prediction model. Adding a Gaussian perturbation to the traditional Cuckoo Search (CS) algorithm improves its search vigor and precision; it is used here to optimize the parameters and kernel function of the Support Vector Machines (SVM) model. Comparison of the prediction results with those of other models shows that this model has higher prediction accuracy.

  13. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2012-01-01

    Full Text Available In this paper we present a comparison of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks, and competitive learning. The first algorithm is the classical k-Nearest Neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, a supervised counterpart to the unsupervised Self-Organizing Map (SOM). After our earlier experiments with unlabelled data we moved on to utilizing data labels, which generally led to better classification accuracy. As we need a huge data set of labelled time series (a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts as a source of real-world time series, with which we had good experience in former studies. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle and implementation of each algorithm, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. The results are compared and particular misclassifications are discussed for each algorithm. Finally, the best solution is chosen and further research goals are given.

  14. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ˜60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  15. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    International Nuclear Information System (INIS)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

    2017-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite . We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  16. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite . We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  17. Thermodynamic analysis of refrigerant mixtures for possible replacements for CFCs by an algorithm compiling property data

    International Nuclear Information System (INIS)

    Arcaklioglu, Erol; Cavusoglu, Abdullah; Erisen, Ali

    2006-01-01

    In this study, we formed an algorithm to find refrigerant mixtures of equal volumetric cooling capacity (VCC) compared to CFC-based refrigerants in vapor compression refrigeration systems. To achieve this aim the point properties of the refrigerants are obtained from REFPROP where appropriate. We used replacement mixture ratios, of varying mass percentages, suggested by various authors, along with our newly formed mixture ratios. In other words, we examined the effect of changing the mass percentages of the replacement refrigerants suggested in the literature on the VCC of the cooling system. Secondly, we used this algorithm to calculate the coefficient of performance (COP) of the same refrigeration system. This provided the ability to compare the COP of the suggested refrigerant mixtures and our newly formed mixture ratios with that of the conventional CFC-based ones. According to our results, the R290/R600a (56/44) mixture is an appropriate replacement for R12, the R32/R125/R134a (32.5/5/62.5) mixture for R22, and the R32/R125/R134a (43/5/52) mixture for R502.

  18. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    Science.gov (United States)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses a kernel trick to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved from 85.12% for the linear kernel, 81.76% for the polynomial, 77.22% for the RBF and 78.70% for the sigmoid. However, for bigger data sizes this method is not practical because it takes a lot of time.

  19. Applications of the Chaotic Quantum Genetic Algorithm with Support Vector Regression in Load Forecasting

    Directory of Open Access Journals (Sweden)

    Cheng-Wen Lee

    2017-11-01

    Full Text Available Accurate electricity forecasting is still a critical issue in many energy management fields. Applications of novel hybrid algorithms with support vector regression (SVR) models to overcome the premature convergence problem and improve forecasting accuracy also deserve to be widely explored. This paper applies chaotic function and quantum computing concepts to address the embedded drawbacks of genetic algorithms, including their crossover and mutation operations. Then, this paper proposes a novel electricity load forecasting model by hybridizing chaotic function and quantum computing with GA in an SVR model (named SVRCQGA) to achieve more satisfactory forecasting accuracy levels. Experimental examples demonstrate that the proposed SVRCQGA model is superior to other competitive models.

  20. Intra-operative Vector Flow Imaging Using Ultrasound of the Ascending Aorta among 40 Patients with Normal, Stenotic and Replaced Aortic Valves

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Møller-Sørensen, Hasse; Kjaergaard, Jesper

    2016-01-01

    Stenosis of the aortic valve gives rise to more complex blood flows with increased velocities. The angle-independent vector flow ultrasound technique transverse oscillation was employed intra-operatively on the ascending aorta of (I) 20 patients with a healthy aortic valve and (II) 20 patients with aortic stenosis before (IIa) and after (IIb) valve replacement. The results indicate that aortic stenosis increased flow complexity (p < 0.0001), induced systolic backflow (p < 0.003) and reduced systolic jet width (p < 0.0001). After valve replacement, the systolic backflow and jet width were normalized; valve replacement corrects some of these changes. Transverse oscillation may be useful for assessment of aortic stenosis and optimization of valve surgery. (E-mail: lindskov@gmail.com) © 2016 World Federation for Ultrasound in Medicine & Biology

  1. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    International Nuclear Information System (INIS)

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-01-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles move freely within a given space, so on a distributed-memory system load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the second problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, a scalar supercomputer, a vector supercomputer, and a graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
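
    The pre-conditioning step for the first problem can be pictured with a short serial sketch: particles are labeled by the cell they occupy and sorted by that label, so each cell's particles form a contiguous slice from which contact candidates can be paired without memory-access conflicts. This is a hypothetical numpy illustration of the sorting step only, not the authors' parallel implementation.

```python
import numpy as np

def sort_by_cell(pos, box, cell_size):
    """Sort particle labels by the label of their containing cell."""
    dims = np.floor(box / cell_size).astype(int)          # cells per axis
    ijk = np.floor(pos / cell_size).astype(int)           # cell coordinates
    cell_id = (ijk[:, 0] * dims[1] + ijk[:, 1]) * dims[2] + ijk[:, 2]
    order = np.argsort(cell_id)       # particle labels sorted by cell label
    sorted_ids = cell_id[order]
    # start offset of cell c in the sorted list; its particles are
    # order[starts[c]:starts[c + 1]]
    starts = np.searchsorted(sorted_ids, np.arange(dims.prod() + 1))
    return order, starts
```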

  2. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  3. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of the power generation plan, power grid dispatching, power grid operation and the power supply reliability of the power system. Therefore, it is of great significance to construct a suitable model to realize accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple sub-sequences. Then, the modified grey wolf optimization and support vector machine (MGWO-SVM) model is adopted to forecast the sub-sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN realizes noise reduction for the non-stationary daily peak load sequence, which makes the sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing a population dynamic evolution operator and a nonlinear convergence factor to enhance the global search ability and avoid falling into local optima, which can better optimize the parameters of the SVM algorithm for improving the forecasting accuracy of the daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector

  4. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the 'history-based' algorithm, wherein one particle history at a time is simulated. Unfortunately, the history-based approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, it cannot take advantage of vector architectures, which characterize the largest and fastest computers of the current time, vector supercomputers such as the Cray X-MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches are discussed, and the present status of known vectorization efforts is summarized along with available timing results, including results from the successful vectorization of a 3-D general geometry, continuous energy Monte Carlo code. (orig.)

  5. A Support Vector Machine Hydrometeor Classification Algorithm for Dual-Polarization Radar

    Directory of Open Access Journals (Sweden)

    Nicoletta Roberto

    2017-07-01

    Full Text Available An algorithm based on a support vector machine (SVM) is proposed for hydrometeor classification. The training phase is driven by the output of a fuzzy logic hydrometeor classification algorithm, i.e., the most popular approach for hydrometeor classification used with ground-based weather radar. The performance of the SVM is evaluated with a weather scenario generated by a weather model; the corresponding radar measurements are obtained by simulation, and the results of the SVM classification are compared with those obtained by a fuzzy logic classifier. Results based on the weather model and simulations show a higher accuracy for the SVM classification. Objective comparison of the two classifiers applied to real radar data shows that the SVM classification maps are spatially more homogeneous (the textural indices energy and homogeneity increase by 21% and 12%, respectively) and do not contain non-classified data. The improvements found with the SVM classifier, even though it is applied pixel by pixel, can be attributed to its ability to learn from the entire hyperspace of radar measurements and to the accurate training. The reliability of the results and the higher computing performance make SVM attractive for challenging tasks such as implementation in Decision Support Systems for helping pilots to make optimal decisions about changes in the flight route caused by unexpected adverse weather.

  6. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results in classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach, simple to implement, based on evolutionary algorithms and the Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures, so knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.

  7. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of the aforementioned algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm is used to search for a pair of optimal parameters, as a better alternative to the common practice of selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.

  8. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...

  9. Short-Term Wind Speed Forecasting Using Support Vector Regression Optimized by Cuckoo Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2015-01-01

    Full Text Available This paper develops an effective intelligent model to forecast short-term wind speed series. A hybrid forecasting technique is proposed based on recurrence plots (RP) and optimized support vector regression (SVR). Wind, caused by the interaction of meteorological systems, is extremely unsteady and difficult to forecast. To understand the wind system, the wind speed series is analyzed using RP. Then, the SVR model is employed to forecast wind speed, in which the input variables are selected by RP, and two crucial parameters, the penalty factor and the gamma of the RBF kernel function, are optimized by various optimization algorithms. Those optimization algorithms are the genetic algorithm (GA), the particle swarm optimization algorithm (PSO), and the cuckoo optimization algorithm (COA). Finally, the optimized SVR models, including COA-SVR, PSO-SVR, and GA-SVR, are evaluated based on some criteria and a hypothesis test. The experimental results show that (1) analysis of RP reveals that wind speed has predictability on a short-term time scale, (2) the performance of the COA-SVR model is superior to that of the PSO-SVR and GA-SVR methods, especially for the jumping samplings, and (3) the COA-SVR method is statistically robust in multi-step-ahead prediction and can be applied to practical wind farm applications.

  10. Fast Monte Carlo reliability evaluation using support vector machine

    International Nuclear Information System (INIS)

    Rocco, Claudio M.; Moreno, Jose Ali

    2002-01-01

    This paper deals with the feasibility of using a support vector machine (SVM) to build empirical models for use in reliability evaluation. The approach takes advantage of the speed of the SVM in the numerous model calculations typically required to perform a Monte Carlo reliability evaluation. The main idea is to develop an estimation algorithm by training a model on a restricted data set, and to replace the system performance evaluation with a simpler calculation that provides reasonably accurate model outputs. The proposed approach is illustrated by several examples. Excellent system reliability results are obtained by training an SVM with a small amount of information.
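    The idea of replacing the expensive system evaluation with a trained SVM inside the Monte Carlo loop can be sketched as follows; the limit-state function, sample sizes, and hyperparameters here are toy assumptions, not the paper's case studies.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def system_ok(x):
    # Placeholder for the expensive system performance evaluation:
    # the system survives while a simple limit-state function stays positive.
    return (4.0 - x[:, 0] ** 2 - 0.5 * x[:, 1] > 0).astype(int)

# Train the surrogate on a small design of experiments.
X_train = rng.normal(size=(300, 2))
clf = SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X_train, system_ok(X_train))

# Monte Carlo: replace each expensive evaluation by a fast SVM prediction.
X_mc = rng.normal(size=(100_000, 2))
reliability = clf.predict(X_mc).mean()
print("estimated reliability: %.4f (direct evaluation: %.4f)"
      % (reliability, system_ok(X_mc).mean()))
```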

  11. A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine.

    Directory of Open Access Journals (Sweden)

    Fei Gao

    Full Text Available For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes cannot adequately address this problem due to the lack of a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and the data distribution in a changing environment, a "soft-start" approach, a data selection mechanism and a data cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Notably, the careful design of the proposed algorithm effectively reduces its computational complexity. In addition, a detailed analysis is carried out for the possible appearance of new labeled samples during the learning process. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrongly semi-labeled samples, and can effectively make use of unlabeled samples to enrich the classifier's knowledge and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.
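    A stripped-down sketch of the incremental semi-supervised idea: retrain an SVM while absorbing only high-confidence semi-labeled samples from the unlabeled pool. The paper's soft-start, data selection, and data cleaning mechanisms are not reproduced here; the confidence threshold and round count are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, conf_threshold=0.9, rounds=5):
    """Simplified incremental loop: at each round, the classifier labels the
    unlabeled pool and absorbs only the samples it is most confident about."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = SVC(kernel="rbf", gamma="scale", probability=True)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)
        take = conf >= conf_threshold
        if not take.any():
            break
        # Move confident semi-labeled samples into the training set.
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, clf.classes_[proba[take].argmax(axis=1)]])
        pool = pool[~take]
    return clf
```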

  12. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis of an integer lattice with short, nearly orthogonal vectors. It can thereby also be seen as an approximation algorithm for the shortest vector problem (SVP), which is an NP-hard problem,
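    For reference, a textbook implementation of LLL reduction (floating-point Gram-Schmidt, Lovász parameter delta = 0.75) looks like the sketch below; it recomputes the orthogonalization at each step for clarity rather than speed, and it is of course not the formally verified version discussed in this record.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL lattice basis reduction. Rows of B are integer basis
    vectors; returns a reduced basis with short, nearly orthogonal rows."""
    B = B.astype(float)
    n = len(B)

    def gso(B):
        # Gram-Schmidt orthogonalization with the mu coefficients.
        Bs = B.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    Bs, mu = gso(B)
    k = 1
    while k < n:
        # Size-reduce b_k against the previous basis vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gso(B)
        # Lovász condition: advance if satisfied, otherwise swap and back up.
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]
            Bs, mu = gso(B)
            k = max(k - 1, 1)
    return B.astype(int)

# The first rows of the reduced basis are short (SVP-approximating) vectors.
print(lll_reduce(np.array([[201, 37], [1648, 297]])))
```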

  13. A fingerprint key binding algorithm based on vector quantization and error correction

    Science.gov (United States)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed through fingerprint verification. In order to cope with the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template, after a process of fingerprint registration and extraction of the global ridge pattern, before binding it with the key. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
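    A toy sketch of the binding idea, under heavy simplifying assumptions: the template is quantized to a codeword, the key is masked by a value derived from that codeword, and only a hash of the key is stored for verification. A real scheme would add error-correction coding across multiple quantized features; the codebook, feature dimension, and key here are made up.

```python
import hashlib
import numpy as np

def quantize(features, codebook):
    # Map the fuzzy feature vector to its nearest codeword index:
    # small acquisition noise then yields the same stable index.
    d = ((features[None, :] - codebook) ** 2).sum(axis=1)
    return int(d.argmin())

def enroll(features, codebook, key: bytes):
    idx = quantize(features, codebook)
    pad = hashlib.sha256(codebook[idx].tobytes()).digest()[:len(key)]
    locked = bytes(a ^ b for a, b in zip(key, pad))  # key bound to template
    check = hashlib.sha256(key).hexdigest()          # only a hash is stored
    return locked, check

def release(features, codebook, locked, check):
    idx = quantize(features, codebook)
    pad = hashlib.sha256(codebook[idx].tobytes()).digest()[:len(locked)]
    key = bytes(a ^ b for a, b in zip(locked, pad))
    return key if hashlib.sha256(key).hexdigest() == check else None

rng = np.random.default_rng(3)
codebook = rng.normal(size=(32, 16))                 # stand-in for a trained VQ codebook
template = codebook[7] + 0.05 * rng.normal(size=16)  # noisy enrollment sample
locked, check = enroll(template, codebook, b"sixteen-byte-key")
query = codebook[7] + 0.05 * rng.normal(size=16)     # noisy verification sample
print(release(query, codebook, locked, check))       # key released on a match
```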

  14. A Classification Detection Algorithm Based on Joint Entropy Vector against Application-Layer DDoS Attack

    Directory of Open Access Journals (Sweden)

    Yuntao Zhao

    2018-01-01

    Full Text Available The application-layer distributed denial of service (AL-DDoS) attack poses a great threat to cyberspace security. Attack detection is an important part of security protection; it provides effective support for the defense system through rapid and accurate identification of attacks. According to the URLs of the Web service requested by the attacker, AL-DDoS attacks are divided into three categories: random-URL, fixed-URL and traversal attacks. In order to identify these attacks, a mapping matrix of the joint entropy vector is constructed. By defining and computing the values of EUPI and jEIPU, a visual coordinate discrimination diagram of the entropy vector is proposed, which also reduces the data dimension from N to two. Based on boundary discrimination and the region in which the entropy vectors fall, the class of AL-DDoS attack can be distinguished. Through the study of a training data set and its classification, the results show that the novel algorithm can effectively distinguish web server DDoS attacks from normal burst traffic.
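    The discrimination rests on entropy statistics of the request stream; a minimal sketch of the underlying per-window Shannon entropy computation over requested URLs follows. The paper's specific EUPI/jEIPU joint statistics are not reproduced here.

```python
import math
from collections import Counter

def url_entropy(urls):
    """Shannon entropy of the URL distribution in a traffic window.
    Random-URL floods push entropy up; fixed-URL floods push it toward 0."""
    counts = Counter(urls)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

window_normal = ["/a", "/b", "/a", "/c", "/a", "/b"]
window_fixed = ["/login"] * 6
window_random = [f"/r{i}" for i in range(6)]
for name, w in [("normal", window_normal), ("fixed", window_fixed),
                ("random", window_random)]:
    print(name, round(url_entropy(w), 3))
```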

  15. DOA and Polarization Estimation Using an Electromagnetic Vector Sensor Uniform Circular Array Based on the ESPRIT Algorithm.

    Science.gov (United States)

    Wu, Na; Qu, Zhiyu; Si, Weijian; Jiao, Shuhong

    2016-12-13

    In array signal processing systems, the direction of arrival (DOA) and polarization of signals received by uniform linear or rectangular sensor arrays are generally obtained by estimation of signal parameters via rotational invariance techniques (ESPRIT). However, since the ESPRIT algorithm relies on the rotational invariant structure of the received data, it cannot be applied directly to electromagnetic vector sensor arrays (EVSAs) with uniform circular patterns. To overcome this limitation, a fourth-order cumulant-based ESPRIT algorithm is proposed in this paper for joint estimation of DOA and polarization based on a uniform circular EVSA. The proposed algorithm utilizes the fourth-order cumulant to obtain a virtual extended array of the uniform circular EVSA, from which pairs of rotationally invariant sub-arrays are obtained. The ESPRIT algorithm and parameter pair matching are then utilized to estimate the DOA and polarization of the incident signals. The closed-form parameter estimation effectively reduces the computational complexity of the joint estimation, as demonstrated by numerical simulations.

  16. Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere

    International Nuclear Information System (INIS)

    Qin Yi; Box, Michael A.

    2006-01-01

    Green's function is a widely used approach for boundary value problems. In problems related to radiative transfer, Green's function has been found to be useful in land, ocean and atmosphere remote sensing. It is also a key element in higher order perturbation theory. This paper presents an explicit expression of the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. The full polarization state is considered, but the algorithm has been developed in such a way that it can be easily reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.

  17. Raster images vectorization system

    OpenAIRE

    Genytė, Jurgita

    2006-01-01

    The problem of raster image vectorization was analyzed and researched in this work. Existing vectorization systems are quite expensive, their results are inaccurate, and manual vectorization of a large number of drafts is impossible. That's why our goal was to design and develop a new raster image vectorization system using our suggested automatic vectorization algorithm, recording the results in a new universal vector file format. The work consists of these main parts: analysis...

  18. Electric load forecasting by seasonal recurrent SVR (support vector regression) with chaotic artificial bee colony algorithm

    International Nuclear Information System (INIS)

    Hong, Wei-Chiang

    2011-01-01

    Support vector regression (SVR), with hybrid chaotic sequence and evolutionary algorithms used to determine suitable values of its three parameters, not only effectively avoids premature convergence (i.e., trapping into a local optimum), but also reveals superior forecasting performance. Electric load sometimes demonstrates a seasonal (cyclic) tendency due to economic activities or the cyclic nature of climate. Applications of SVR models to seasonal (cyclic) electric load forecasting have not been widely explored. In addition, the concept of recurrent neural networks (RNNs), which use past information to capture detailed patterns, can usefully be combined with an SVR model. This investigation presents an electric load forecasting model which combines the seasonal recurrent support vector regression model with the chaotic artificial bee colony algorithm (namely SRSVRCABC) to improve forecasting performance. The proposed SRSVRCABC employs the chaotic behavior of honey bees, which offers better performance in function optimization, to overcome premature convergence to local optima. A numerical example from an existing reference is used to elucidate the forecasting performance of the proposed SRSVRCABC model. The forecasting results indicate that the proposed model yields more accurate results than the ARIMA and TF-ε-SVR-SA models. Therefore, the SRSVRCABC model is a promising alternative for electric load forecasting. -- Highlights: → Hybridizing the seasonal adjustment and the recurrent mechanism into an SVR model. → Employing a chaotic sequence to improve the premature convergence of the artificial bee colony algorithm. → Successfully providing significantly accurate monthly load demand forecasting.

  19. Fault Diagnosis of Plunger Pump in Truck Crane Based on Relevance Vector Machine with Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wenliao Du

    2013-01-01

    Full Text Available Promptly and accurately dealing with equipment breakdowns is very important for enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with the particle swarm optimization (PSO) algorithm, is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models, namely back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that the PSO-RVM is superior to the first three classical models, and has comparable performance to the PSO-SVM, with diagnostic accuracies as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far fewer than that of support vectors, about 1/12-1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.

  20. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
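    Batcher's scheme is built from data-independent compare-exchange stages, which is exactly what maps well onto vector hardware. A short recursive sketch of Batcher's odd-even mergesort (for power-of-two lengths) follows; it illustrates the network structure, not the STAR implementation.

```python
import random

def oddeven_merge(a):
    """Merge a list whose two halves are each sorted (len is a power of two)."""
    n = len(a)
    if n == 2:
        return [min(a), max(a)]
    # Recursively merge the even- and odd-indexed subsequences.
    even = oddeven_merge(a[0::2])
    odd = oddeven_merge(a[1::2])
    merged = [None] * n
    merged[0::2] = even
    merged[1::2] = odd
    # Final compare-exchange stage between neighbours.
    for i in range(1, n - 1, 2):
        if merged[i] > merged[i + 1]:
            merged[i], merged[i + 1] = merged[i + 1], merged[i]
    return merged

def oddeven_merge_sort(a):
    # Batcher's odd-even mergesort; the length of a must be a power of two.
    n = len(a)
    if n <= 1:
        return list(a)
    half = n // 2
    return oddeven_merge(oddeven_merge_sort(a[:half]) + oddeven_merge_sort(a[half:]))

data = [random.randint(0, 99) for _ in range(16)]  # length must be 2^k
assert oddeven_merge_sort(data) == sorted(data)
```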

  1. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract the layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial selection of the raster image skeleton filter was assessed. Article in Lithuanian

  2. Soft sensor development and optimization of the commercial petrochemical plant integrating support vector regression and genetic algorithm

    Directory of Open Access Journals (Sweden)

    S.K. Lahiri

    2009-09-01

    Full Text Available Soft sensors have been widely used in industrial process control to improve product quality and assure safety in production. The core of a soft sensor is the construction of a soft sensing model. This paper introduces support vector regression (SVR), a new powerful machine learning method based on statistical learning theory (SLT), into soft sensor modeling and proposes a new soft sensing modeling method based on SVR. It presents an artificial intelligence based hybrid soft sensor modeling and optimization strategy, namely support vector regression-genetic algorithm (SVR-GA), for modeling and optimization of the mono ethylene glycol (MEG) quality variable in a commercial glycol plant. In the SVR-GA approach, a support vector regression model is constructed to correlate the process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using a genetic algorithm with a view to maximizing process performance. SVR-GA is a new strategy for soft sensor modeling and optimization. The major advantage of the strategy is that modeling and optimization can be conducted exclusively from historic process data, so detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.) is not required. Using the SVR-GA strategy, a number of sets of optimized operating conditions were found. The optimized solutions, when verified in an actual plant, resulted in a significant improvement in quality.

  3. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  4. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.

  5. GPR identification of voids inside concrete based on the support vector machine algorithm

    International Nuclear Information System (INIS)

    Xie, Xiongyao; Li, Pan; Qin, Hui; Liu, Lanbo; Nobes, David C

    2013-01-01

    Voids inside reinforced concrete, which affect structural safety, are identified from ground penetrating radar (GPR) images using a completely automatic method based on the support vector machine (SVM) algorithm. The entire process can be characterized in four steps: (1) the original SVM model is built by training on synthetic GPR data generated by finite-difference time-domain simulation, after data preprocessing, segmentation and feature extraction. (2) The classification accuracy of different kernel functions is compared with the cross-validation method, and the penalty factor (c) of the SVM and the coefficient (σ²) of the kernel function are optimized using the grid algorithm and the genetic algorithm. (3) To test the success of classification, the model is then verified and validated by applying it to another set of synthetic GPR data; the result shows a high success rate for classification. (4) The original classifier model is finally applied to a set of real GPR data to identify and classify voids. Compared with its application to synthetic data, the result on real data is less than ideal until the original model is improved. In general, this study shows that the SVM exhibits promising performance in the GPR identification of voids inside reinforced concrete. Nevertheless, the recognition of the shape and distribution of voids may need further improvement. (paper)

  6. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

    Directory of Open Access Journals (Sweden)

    Sanjeevikumar Padmanaban

    2015-09-01

    Full Text Available This paper considers a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs), and all four dc sources are deliberately kept isolated, so zero-sequence/homopolar current components cannot flow. An original and effective power sharing algorithm with three variables (degrees of freedom), based on synchronous field oriented control (FOC), is proposed in this paper. A standard three-level space vector pulse width modulation (SVPWM) with the nearest-three-vectors (NTV) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system, observing the dynamic behavior under different designed conditions. A set of results is provided in this paper, which confirms good agreement with the theoretical development.

  7. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven an efficient algorithm for clustering noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

  8. Forecasting systems reliability based on support vector regression with genetic algorithms

    International Nuclear Information System (INIS)

    Chen, K.-Y.

    2007-01-01

    This study applies a novel neural-network technique, support vector regression (SVR), to forecast reliability in engine systems. The aim of this study is to examine the feasibility of SVR in systems reliability prediction by comparing it with existing neural-network approaches and the autoregressive integrated moving average (ARIMA) model. To build an effective SVR model, SVR's parameters must be set carefully. This study proposes a novel approach, known as GA-SVR, which searches for SVR's optimal parameters using real-valued genetic algorithms, and then adopts the optimal parameters to construct the SVR models. Real reliability data for 40 sets of turbochargers were employed as the data set. The experimental results demonstrate that SVR outperforms the existing neural-network approaches and the traditional ARIMA models in terms of normalized root mean square error and mean absolute percentage error.

  9. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Directory of Open Access Journals (Sweden)

    Kian Sheng Lim

    2013-01-01

    Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  10. Vectorization of KENO IV code and an estimate of vector-parallel processing

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

    1986-10-01

    The multi-group criticality safety code KENO IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO IV was slower on a scalar processor than the original code by a factor of 1.4 because of the overhead introduced by vectorization. After modifications to the algorithms and vectorization techniques, the vectorized version became faster than the original by factors of 1.4 and 3.0 on the vector processor for sample problems with complex and simple geometries, respectively. For further speedup of the code, some improvements to the compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed. Finally, a pipelined parallel processor system is proposed and its performance is estimated. (author)

  11. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem

  12. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

  13. 3D magnetization vector inversion based on fuzzy clustering: inversion algorithm, uncertainty analysis, and application to geology differentiation

    Science.gov (United States)

    Sun, J.; Li, Y.

    2017-12-01

    Magnetic data contain important information about subsurface rocks that were magnetized over geological history, providing an important avenue for studying the crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large-scale crustal studies for several decades. However, interpreting magnetic data is often complicated by the presence of remanent magnetization with unknown direction. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with the fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within the noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes applied to each geological unit and can therefore potentially be used for differentiating geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geology differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities for extracting useful information from magnetic data affected by remanence. We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to

  14. Annual Electric Load Forecasting by a Least Squares Support Vector Machine with a Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2012-11-01

    Full Text Available The accuracy of annual electric load forecasting plays an important role in the economic and social benefits of electric power systems. The least squares support vector machine (LSSVM) has been proven to offer strong potential in forecasting issues, particularly when an appropriate meta-heuristic algorithm is employed to determine the values of its two parameters. However, most meta-heuristic algorithms have the drawbacks of being hard to understand and of converging to the global optimum slowly. As a novel meta-heuristic and evolutionary algorithm, the fruit fly optimization algorithm (FOA) has the advantages of being easy to understand and of fast convergence to the global optimum. Therefore, to improve forecasting performance, this paper proposes an LSSVM-based annual electric load forecasting model that uses FOA to automatically determine the appropriate values of the two parameters of the LSSVM model. Taking the annual electricity consumption of China as an example, the computational results show that the LSSVM combined with FOA (LSSVM-FOA) outperforms alternative methods, namely a single LSSVM, the LSSVM combined with the coupled simulated annealing algorithm (LSSVM-CSA), a generalized regression neural network (GRNN) and a regression model.

  15. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.

  16. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-24

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since these derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r² in some of the expressions.

  17. Automatic inspection of textured surfaces by support vector machines

    Science.gov (United States)

    Jahanbin, Sina; Bovik, Alan C.; Pérez, Eduardo; Nair, Dinesh

    2009-08-01

    Automatic inspection of manufactured products with natural looking textures is a challenging task. Products such as tiles, textile, leather, and lumber project image textures that cannot be modeled as periodic or otherwise regular; therefore, a stochastic modeling of local intensity distribution is required. An inspection system to replace human inspectors should be flexible in detecting flaws such as scratches, cracks, and stains occurring in various shapes and sizes that have never been seen before. A computer vision algorithm is proposed in this paper that extracts local statistical features from grey-level texture images decomposed with wavelet frames into subbands of various orientations and scales. The local features extracted are second order statistics derived from grey-level co-occurrence matrices. Subsequently, a support vector machine (SVM) classifier is trained to learn a general description of normal texture from defect-free samples. This algorithm is implemented in LabVIEW and is capable of processing natural texture images in real-time.
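    A condensed sketch of the pipeline's final two stages: second-order co-occurrence features followed by a one-class SVM trained only on defect-free samples. The wavelet-frame decomposition is skipped here, and the quantization level, feature subset, and synthetic tiles are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def glcm_features(img, levels=8):
    """Quantize to `levels` grey levels, build the horizontal co-occurrence
    matrix, and return simple second-order statistics (contrast, energy,
    homogeneity), a reduced Haralick-style feature set."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return [contrast, energy, homogeneity]

rng = np.random.default_rng(4)
normal = [rng.integers(100, 156, size=(32, 32)) for _ in range(50)]
clf = OneClassSVM(nu=0.05, gamma="scale").fit([glcm_features(t) for t in normal])

scratched = rng.integers(100, 156, size=(32, 32))
scratched[16, :] = 255                              # a bright scratch across the tile
print(clf.predict([glcm_features(scratched)]))      # -1 marks an outlier
```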

  18. Global restructuring of the CPM-2 transport algorithm for vector and parallel processing

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1989-01-01

    The CPM-2 code is an assembly transport code based on the collision probability (CP) method. It can in principle be applied to global reactor problems, but its excessive computational demands prevent this application. Therefore, a new transport algorithm for CPM-2 has been developed for vector-parallel architectures, which has resulted in an overall factor of 20 speedup (wall clock) on the IBM 3090-600E. This paper presents the detailed results of this effort, as well as a brief description of ongoing efforts to remove some of the modeling limitations in CPM-2 that inhibit its use for global applications, such as the use of the pure CP treatment and the assumption of isotropic scattering.

  19. Dynamic Heat Supply Prediction Using Support Vector Regression Optimized by Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Meiping Wang

    2016-01-01

    Full Text Available We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method was proposed based on a support vector regression (SVR) model optimized by the particle swarm optimization (PSO) algorithm. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through correlation analysis from statistical theory. Then, the SVR model was employed to forecast dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, the penalty factor, the gamma of the RBF kernel, and the insensitive loss epsilon, were optimized by the PSO algorithm. The optimized SVR model was compared with the basic SVR, the genetic algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) on six groups of experimental data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.
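    A compact sketch of this kind of tuning loop: global-best PSO searching the three SVR hyperparameters in log space, with cross-validated error as the fitness. The inertia and acceleration coefficients, search ranges, and synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=150, n_features=5, noise=5.0, random_state=0)
rng = np.random.default_rng(5)

def fitness(p):
    # p = (log10 C, log10 gamma, log10 epsilon); score = CV negative MSE.
    svr = SVR(C=10 ** p[0], gamma=10 ** p[1], epsilon=10 ** p[2])
    return cross_val_score(svr, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Standard global-best PSO over the three SVR hyperparameters.
n, dim = 15, 3
pos = rng.uniform([-1, -3, -3], [3, 0, 0], size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]
for it in range(20):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()]
print("C=%.3g gamma=%.3g eps=%.3g" % tuple(10 ** gbest))
```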

  20. Great Ellipse Route Planning Based on Space Vector

    Directory of Open Access Journals (Sweden)

    LIU Wenchao

    2015-07-01

    Full Text Available Aiming at the navigation error that arises because great circle route planning uses a spherical earth model while modern navigation equipment uses an ellipsoidal model, a method of great ellipse route planning based on space vectors is studied. Using space vector algebra, the vertex of the great ellipse is solved directly, and a description of the great ellipse based on its major-axis and minor-axis vectors is presented. Calculation formulas for great ellipse azimuth and distance are then deduced from these two basic vectors. Finally, algorithms for great ellipse route planning are studied, especially the equal-distance route planning algorithm based on the Newton-Raphson (N-R) method. Comparative examples show that the difference between great circle and great ellipse route planning is significant; using great ellipse route planning algorithms eliminates the navigation error caused by great circle route planning and effectively improves the accuracy of navigation calculations.

  1. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, magnetometer, sun sensor and star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden: assuming n observation vectors, the inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation; therefore, the inverse of a 3n×3n matrix is replaced by the inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy, so a calibration algorithm is utilized for estimating the main gyro parameters.
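    The structural point of Murrell's version, processing one 3-vector measurement at a time so that only 3×3 innovation covariances are inverted, can be sketched as below for a linearized update; the toy measurement models stand in for the actual magnetometer/sun-sensor/star-tracker geometry and attitude kinematics.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Murrell-style measurement update: fold in each 3-vector observation
    one at a time, so every gain computation inverts a 3x3 matrix instead
    of the stacked 3n x 3n innovation covariance."""
    for h_fun, H, R, z in measurements:
        S = H @ P @ H.T + R                  # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain for this sensor
        x = x + K @ (z - h_fun(x))
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: two direction-type sensors observing a 6-dimensional state.
x = np.zeros(6)
P = np.eye(6)
H1 = np.hstack([np.eye(3), np.zeros((3, 3))])   # e.g. magnetometer rows
H2 = np.hstack([np.zeros((3, 3)), np.eye(3)])   # e.g. sun-sensor rows
meas = [(lambda s: H1 @ s, H1, 0.01 * np.eye(3), np.array([0.1, 0.0, -0.2])),
        (lambda s: H2 @ s, H2, 0.02 * np.eye(3), np.array([0.0, 0.3, 0.1]))]
x, P = sequential_update(x, P, meas)
print(x.round(3))
```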

  2. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Science.gov (United States)

    Pavlov, V. M.

    2017-07-01

    The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiating Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the roots of order 2.

  3. Support Vector Regression and Genetic Algorithm for HVAC Optimal Operation

    Directory of Open Access Journals (Sweden)

    Ching-Wei Chen

    2016-01-01

    Full Text Available This study covers records of various parameters affecting the power consumption of air-conditioning systems. Using the Support Vector Machine (SVM), the chiller power consumption model, secondary chilled water pump power consumption model, air handling unit fan power consumption model, and air handling unit load model were established. In addition, the R² of the models all reached 0.998, and the training time was far shorter than that of a neural network. Through genetic programming, the combination of operating parameters with the least power consumption of air-conditioning operation was searched for. Moreover, the air handling unit load in line with the air-conditioning cooling load was predicted. The experimental results show that, for the combination of operating parameters matching the cooling load obtained through the genetic algorithm search, the power consumption of the air-conditioning systems was reduced by 22% compared to fixed operating parameters, indicating a significant energy-efficiency gain.

  4. Vectorization at the KENO-IV code

    International Nuclear Information System (INIS)

    Asai, K.; Higuchi, K.; Katakura, J.

    1986-01-01

    The multigroup criticality safety code KENO-IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO-IV on a scalar processor was slower than the original one by a factor of 1.4 because of the overhead introduced by vectorization. Making modifications of algorithms and techniques for vectorization, the vectorized version has become faster than the original one by a factor of 1.4 on the vector processor. For further speedup of the code, some improvements on compiler and hardware, especially on addition of Monte Carlo pipelines to the vector processor, are discussed

  5. Maxwell's Multipole Vectors and the CMB

    OpenAIRE

    Weeks, Jeffrey R.

    2004-01-01

    The recently re-discovered multipole vector approach to understanding the harmonic decomposition of the cosmic microwave background traces its roots to Maxwell's Treatise on Electricity and Magnetism. Taking Maxwell's directional derivative approach as a starting point, the present article develops a fast algorithm for computing multipole vectors, with an exposition that is both simpler and better motivated than in the author's previous work. Tests show the resulting algorithm, coded up as a ...

  6. Replacing a native Wolbachia with a novel strain results in an increase in endosymbiont load and resistance to dengue virus in a mosquito vector.

    Directory of Open Access Journals (Sweden)

    Guowu Bian

    Full Text Available Wolbachia is a maternally transmitted endosymbiotic bacterium that is estimated to infect up to 65% of insect species. The ability of Wolbachia to both induce pathogen interference and spread into mosquito vector populations makes it possible to develop Wolbachia as a biological control agent for vector-borne disease control. Although Wolbachia induces resistance to dengue virus (DENV), filarial worms, and Plasmodium in mosquitoes, species like Aedes polynesiensis and Aedes albopictus, which carry native Wolbachia infections, are able to transmit dengue and filariasis. In a previous study, the native wPolA in Ae. polynesiensis was replaced with wAlbB from Ae. albopictus, generating the transinfected "MTB" strain with low susceptibility to filarial worms. In this study, we compare the dynamics of DENV serotype 2 (DENV-2) within the wild-type "APM" strain and the MTB strain of Ae. polynesiensis by measuring viral infection in the mosquito whole body, midgut, head, and saliva at different time points post infection. The results show that wAlbB can induce strong resistance to DENV-2 in the MTB mosquito. Evidence also supports that this resistance is related to a dramatic increase in Wolbachia density in the MTB's somatic tissues, including the midgut and salivary gland. Our results suggest that replacement of a native Wolbachia with a novel infection could serve as a strategy for developing a Wolbachia-based approach to target naturally infected insects for vector-borne disease control.

  7. Horizontal vectorization of electron repulsion integrals.

    Science.gov (United States)

    Pritchard, Benjamin P; Chow, Edmond

    2016-10-30

    We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (l_A l_B | l_C l_D) quartets when l_D = 0 or l_B = l_D = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed. © 2016 Wiley Periodicals, Inc.

  8. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of the SVM. Sensitivity, specificity, precision, ROC curves, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
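    A minimal sketch of the multilevel (coarse-to-fine) grid search component on PCA-reduced data: a coarse logarithmic sweep over (C, gamma) followed by a refined sweep around the coarse optimum. The ranges, step counts, and dataset are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

data = load_breast_cancer()
X = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(data.data))
y = data.target

def grid_best(c_range, g_range, steps=5):
    # Exhaustive sweep over a log-spaced (C, gamma) grid, scored by CV accuracy.
    best = (None, None, -np.inf)
    for c in np.logspace(*c_range, steps):
        for g in np.logspace(*g_range, steps):
            s = cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()
            if s > best[2]:
                best = (c, g, s)
    return best

# Level 1: coarse sweep; level 2: refined sweep around the coarse optimum.
c, g, s = grid_best((-2, 4), (-4, 1))
c, g, s = grid_best((np.log10(c) - 1, np.log10(c) + 1),
                    (np.log10(g) - 1, np.log10(g) + 1))
print("C=%.3g gamma=%.3g acc=%.3f" % (c, g, s))
```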

  9. Prediction of Antimicrobial Peptides Based on Sequence Alignment and Support Vector Machine-Pairwise Algorithm Utilizing LZ-Complexity

    Directory of Open Access Journals (Sweden)

    Xin Yi Ng

    2015-01-01

    Full Text Available This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs), which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMP design process, as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMP prediction is needed to resolve this problem. In this study, a new integrated algorithm is introduced to predict AMPs by combining sequence alignment and a support vector machine- (SVM-) LZ complexity pairwise algorithm. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in the jackknife test and 87.59% in the independent test, while the sensitivities obtained for the jackknife test and the independent test are 88.74% and 78.70%, respectively, when only sequences with less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity.
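    The LZ-complexity ingredient can be illustrated with a simple phrase-counting parse: more repetitive sequences decompose into fewer distinct phrases. The sketch below uses an LZ78-style parse as a stand-in for the LZ76 production complexity typically used in such sequence-similarity measures.

```python
def lz_complexity(seq: str) -> int:
    """Count distinct phrases in an LZ78-style parse of the sequence,
    a simple proxy for LZ complexity on peptide strings."""
    phrases = set()
    w = ""
    for ch in seq:
        w += ch
        if w not in phrases:
            phrases.add(w)   # new phrase found; restart the current word
            w = ""
    return len(phrases) + (1 if w else 0)

# More repetitive sequences parse into fewer phrases.
print(lz_complexity("GLFDIVKKVVGALGSL"))   # an AMP-like peptide
print(lz_complexity("AAAAAAAAAAAAAAAA"))   # low-complexity chain
```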

  10. A Novel Neural Network Vector Control for Single-Phase Grid-Connected Converters with L, LC and LCL Filters

    Directory of Open Access Journals (Sweden)

    Xingang Fu

    2016-04-01

    Full Text Available This paper investigates a novel recurrent neural network (NN)-based vector control approach for single-phase grid-connected converters (GCCs) with L (inductor), LC (inductor-capacitor) and LCL (inductor-capacitor-inductor) filters, and provides a comparison study with the conventional standard vector control method. A single neural network controller replaces the two current-loop PI controllers, and the NN training approximates the optimal control for the single-phase GCC system. The Levenberg–Marquardt (LM) algorithm was used to train the NN controller based on the complete system equations without any decoupling policies. The proposed NN approach can solve the decoupling problem associated with conventional vector control methods for L, LC and LCL-filter-based single-phase GCCs. Both the simulation study and hardware experiments demonstrate that the neural network vector controller delivers much better performance than conventional vector controllers, including faster response and lower overshoot. In particular, NN vector control achieves very good performance at a low switching frequency. More importantly, the neural network vector controller is damping-free, whereas a conventional vector controller for an LCL-filter-based single-phase grid-connected converter generally requires a damping policy, so the NN controller can overcome the inefficiency caused by damping.

  11. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

    Energy Technology Data Exchange (ETDEWEB)

    Jaskowiak, J; Ahmad, S; Ali, I [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

    2015-06-15

    Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-known targets of different sizes, made from water-equivalent material and inserted in foam, was used to simulate lung lesions. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the phantom under different motion patterns; the CT images of the mobile phantom were deformed to the CT images of the stationary phantom. Results: The values of the displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at the interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while the DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edges, with shifts nearly equal to the motion amplitude. Conclusions: The DVF from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased

  12. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to trade speed against accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more. Once the model has been constructed, the SVM can be applied to new observations; the cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
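    A bare-bones sketch of the two-model progressive scheme: a fast linear SVM labels everything and supplies confidence scores via distance from the separating hyperplane, then a slower kernel SVM re-labels the least confident items until a budget is exhausted. The models, budget, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_new = X[:1500], y[:1500], X[1500:]

fast = LinearSVC().fit(X_train, y_train)                       # coarse, cheap model
slow = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)  # accurate model

# Pass 1: label everything with the fast model; score confidence by
# distance from the separating hyperplane.
labels = fast.predict(X_new)
confidence = np.abs(fast.decision_function(X_new))

# Pass 2: progressively re-label with the slow model, least confident first,
# stopping when the time budget runs out (here: a fixed fraction of items).
order = np.argsort(confidence)
budget = int(0.3 * len(X_new))
labels[order[:budget]] = slow.predict(X_new[order[:budget]])
```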

  13. Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yan Hong Chen

    2016-01-01

    Full Text Available This paper proposes a new electric load forecasting model that hybridizes fuzzy time series (FTS) and the global harmony search algorithm (GHSA) with least squares support vector machines (LSSVM), namely the GHSA-FTS-LSSVM model. Firstly, the fuzzy c-means (FCM) clustering algorithm is used to calculate the clustering center of each cluster. Secondly, the LSSVM is applied to model the resultant series, which is optimized by GHSA. Finally, a real-world example is adopted to test the performance of the proposed model. In this investigation, the proposed model is verified using experimental datasets from the Guangdong Province Industrial Development Database, and the results are compared against the autoregressive integrated moving average (ARIMA) model and other algorithms hybridized with LSSVM, including the genetic algorithm (GA), particle swarm optimization (PSO), harmony search, and so on. The forecasting results indicate that the proposed GHSA-FTS-LSSVM model effectively generates more accurate predictive results.

  14. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. Thus, the analysis of such time series data seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at

  15. Line Width Recovery after Vectorization of Engineering Drawings

    Directory of Open Access Journals (Sweden)

    Gramblička Matúš

    2016-12-01

    Full Text Available Vectorization is the conversion of a raster image representation into a vector representation. Contemporary commercial vectorization software does not provide sufficiently high-quality output for images such as mechanical engineering drawings. Line width preservation is one of the problems. Some applications need to know the line width after vectorization, because this line attribute carries important semantic information for subsequent 3D model generation. This article describes an algorithm that recovers the widths of individual lines in vectorized engineering drawings. Two approaches are proposed: one examines the line width at three points, whereas the second uses a variable number of points depending on line length. The algorithm is tested on real mechanical engineering drawings.
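
    The three-point variant can be illustrated with a simple raster probe: sample the image perpendicular to the vectorized line at three positions and count foreground pixels. This sketch is an assumption of how such a probe might look, not the paper's algorithm.

```python
# Toy three-point width probe for a line in a binary raster image.
import numpy as np

def line_width(img, p0, p1, max_w=20):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    n = np.array([-d[1], d[0]])              # unit normal to the line
    widths = []
    for t in (0.25, 0.5, 0.75):              # three probe points on the line
        c = p0 + t * (p1 - p0)
        w = 0
        for s in np.arange(-max_w, max_w + 1):   # scan across the line
            y, x = np.round(c + s * n).astype(int)
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x]:
                w += 1
        widths.append(w)
    return np.median(widths)                 # robust width estimate

img = np.zeros((50, 50), dtype=bool)
img[23:27, 5:45] = True                      # horizontal line, 4 px wide
print(line_width(img, (25, 5), (25, 44)))    # -> 4.0
```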

  16. Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Qin Yi [School of Physics, University of New South Wales (Australia)]. E-mail: yi.qin@csiro.au; Box, Michael A. [School of Physics, University of New South Wales (Australia)

    2006-01-15

    Green's function is a widely used approach for boundary value problems. In problems related to radiative transfer, Green's function has been found to be useful in land, ocean and atmosphere remote sensing. It is also a key element in higher order perturbation theory. This paper presents an explicit expression of the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. The full polarization state is considered, but the algorithm has been developed in such a way that it can be easily reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.

  17. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller-sized quadratic programming problems instead of a single large one, as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the algorithm. The paper compares the mean prediction error of wavelet kernel-based primal twin support vector machine and traditional support vector machine models trained on samples with 3–5-dimensional input vectors. The testing results show that the prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.
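
    A wavelet kernel of the Morlet type can be supplied directly to a kernel SVM. The sketch below is illustrative; the kernel form and the dilation parameter `a` are common textbook choices, not values taken from the paper, and scikit-learn's standard (non-twin) SVR stands in for the primal twin formulation.

```python
# Morlet-type wavelet kernel used as a custom SVM kernel (illustrative).
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    # K(x, y) = prod_i cos(1.75 (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2))
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

X = np.random.rand(80, 4)                        # e.g. 4-dimensional inputs
y = X.sum(axis=1) + 0.1 * np.random.randn(80)    # toy regression target
model = SVR(kernel=wavelet_kernel).fit(X, y)
print(model.predict(X[:5]))
```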

  18. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation. PMID:29351554

  19. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.

  20. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.
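
    The linear stage of vector-fitting-style rational approximation is compact enough to sketch: with a set of poles held fixed, the residues and the constant offset follow from one least-squares solve. The sketch below shows only this stage on synthetic data; the pole-relocation iteration that makes up the full VF algorithm is omitted, and all numbers are made up for illustration.

```python
# One residue-identification step of vector-fitting-style rational fitting.
import numpy as np

s = 2j * np.pi * np.logspace(0, 4, 200)          # frequency samples
H = 1.0 / (s + 50.0) + 3.0 / (s + 800.0) + 0.01  # synthetic "measured" data

poles = np.array([-30.0, -1000.0])               # fixed (initial) poles
A = np.hstack([1.0 / (s[:, None] - poles),       # partial-fraction basis
               np.ones((s.size, 1))])            # constant offset term
sol, *_ = np.linalg.lstsq(A, H, rcond=None)
residues, offset = sol[:-1], sol[-1]
print(residues, offset)
```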

  1. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of the decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under conditions of low SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  2. The vectorized pinball contact impact routine

    International Nuclear Information System (INIS)

    Belytschko, T.B.; Neal, M.O.

    1989-01-01

    When simulating the impact-penetration of two bodies with explicit finite element methods, some type of interaction or contact algorithm must be included. These algorithms, often called slideline algorithms, must enforce the constraint that the two bodies cannot occupy the same space at the same time. Lagrange multiplier, penalty, and projection techniques have all been proposed to enforce this added constraint. For problems which include large relative motions between the two bodies and erosion of elements, it becomes difficult and time consuming to keep track of which elements of the bodies should be involved in the impact calculations. This computational expense is magnified by the fact that these slideline algorithms have many branches which are not amenable to vectorization. In dynamic finite element simulations with explicit time integration, many of the element and nodal calculations can be vectorized, so the slideline calculations can require a considerable percentage of the total computation time. The thrust of the pinball algorithm discussed in this paper is to allow vectorization of as much of the slideline calculations as possible. This is accomplished by greatly simplifying both the search for the elements involved in the impact and the enforcement of impenetrability, with the use of spheres, or pinballs, for each element in the slideline calculations. In this way, the search requires a simple check on the distances between elements to determine if contact has been made. Once the contacting pairs of elements have been determined with a single global search of the two slidelines, the impenetrability condition is enforced with the use of a penalty-type formulation which can be completely vectorized.
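
    The payoff of the pinball idea is that the contact search reduces to branch-free array arithmetic. A minimal sketch of the sphere-distance test and the penalty force, with made-up geometry and stiffness, is given below; it illustrates the principle, not the original routine.

```python
# Vectorized pinball-style contact search: every element is a sphere.
import numpy as np

centers_a = np.random.rand(300, 3); radii_a = np.full(300, 0.05)
centers_b = np.random.rand(400, 3); radii_b = np.full(400, 0.05)

# All pairwise distances in one array expression, with no branching.
d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=2)
penetration = (radii_a[:, None] + radii_b[None, :]) - d
pairs = np.argwhere(penetration > 0.0)      # candidate contacting pairs

k = 1.0e4                                   # penalty stiffness (assumed)
forces = k * penetration[pairs[:, 0], pairs[:, 1]]   # penalty magnitudes
```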

  3. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as scientists, have conducted many studies on this subject. One of the most important research interests within it is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis, and contributes to the related literature on diagnosis processes.
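
    The structure of the tuning loop is simple to sketch. In the outline below, a plain random search stands in for CoDOA, since only the idea is illustrated: propose a sigma, convert it to the RBF gamma, and keep the value with the best cross-validated accuracy. The data set is a scikit-learn stand-in, not the Pima Indians data.

```python
# Sigma tuning for an RBF SVM; random search stands in for CoDOA here.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

best_sigma, best_acc = None, -np.inf
for sigma in rng.uniform(0.1, 20.0, size=30):   # candidate sigma values
    gamma = 1.0 / (2.0 * sigma**2)              # RBF: exp(-||x-y||^2/(2 s^2))
    acc = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    if acc > best_acc:
        best_sigma, best_acc = sigma, acc
print(best_sigma, best_acc)
```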

  4. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Chih-Feng Chao

    2015-01-01

    Full Text Available The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, the smoothness parameter, and the Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). Feature selection is not considered here, because an SVM combined with feature selection is not well suited to multiclass classification, especially the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. For binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method with grid search and to the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification with maximum accuracy.

  5. From vectors to mnesors

    OpenAIRE

    Champenois, Gilles

    2007-01-01

    The mnesor theory is the adaptation of vectors to artificial intelligence. The scalar field is replaced by a lattice. Addition becomes idempotent and multiplication is interpreted as a selection operation. We also show that mnesors can be the foundation for a linear calculus.

  6. On the Vectorization of FIR Filterbanks

    Directory of Open Access Journals (Sweden)

    Barbedo Jayme Garcia Arnal

    2007-01-01

    Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly exploited. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.

  7. On the Vectorization of FIR Filterbanks

    Directory of Open Access Journals (Sweden)

    Amauri Lopes

    2007-01-01

    Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly exploited. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.
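
    The core idea, replacing the per-sample filtering loop with matrix operations, can be shown for a single FIR filter; a filterbank is then a stack of such products. This sketch is a generic illustration of the strategy, not the paper's code.

```python
# FIR filtering: iterative loop vs. one Toeplitz matrix-vector product.
import numpy as np
from scipy.linalg import toeplitz

x = np.random.randn(1024)               # input signal
h = np.array([0.25, 0.5, 0.25])         # FIR filter taps

# Scalar, iterative version.
y_loop = np.array([sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
                   for n in range(len(x))])

# Vectorized version: build the convolution (Toeplitz) matrix once, then
# filtering is a single matrix-vector product.
col = np.zeros(len(x)); col[:len(h)] = h
H = toeplitz(col, np.zeros(len(x)))
y_vec = H @ x

print(np.allclose(y_loop, y_vec))       # True
```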

  8. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample fields by means of stochastic relaxation implemented via the Gibbs sampler.

  9. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs cut detection using macroblock types and motion vectors.
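
    The first of these ideas, recovering global camera motion from the motion vector field, is often posed as a least-squares fit of a low-order motion model to the block vectors. The sketch below fits a translation-plus-zoom model to synthetic vectors; the model and the data are assumptions for illustration, not the paper's method.

```python
# Global camera motion from block motion vectors by least squares.
import numpy as np

# Block centres (x, y) and their motion vectors (u, v), synthetic data.
xs, ys = np.meshgrid(np.arange(8, 720, 16), np.arange(8, 576, 16))
x, y = xs.ravel().astype(float), ys.ravel().astype(float)
u = 3.0 + 0.01 * x + 0.2 * np.random.randn(x.size)   # tx=3, zoom=0.01
v = -1.0 + 0.01 * y + 0.2 * np.random.randn(y.size)  # ty=-1

# Model: u = tx + z*x, v = ty + z*y  ->  one stacked least-squares system.
A = np.block([[np.ones((x.size, 1)), np.zeros((x.size, 1)), x[:, None]],
              [np.zeros((y.size, 1)), np.ones((y.size, 1)), y[:, None]]])
b = np.concatenate([u, v])
(tx, ty, z), *_ = np.linalg.lstsq(A, b, rcond=None)
print(tx, ty, z)   # recovers approximately (3.0, -1.0, 0.01)
```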

  10. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    Science.gov (United States)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. This paper discusses the application of GPS signals in meteorology, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural networks, support vector machines and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.

  11. Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait.

    Science.gov (United States)

    Martins, Maria; Costa, Lino; Frizera, Anselmo; Ceres, Ramón; Santos, Cristina

    2014-03-01

    Walker devices are often prescribed incorrectly to patients, leading to increased dissatisfaction and the occurrence of several problems, such as discomfort and pain. Thus, it is necessary to objectively evaluate the effects that assisted gait can have on the gait patterns of walker users, compared to non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters is used for this purpose. However, gait analysis yields redundant information that often is difficult to interpret. This study addresses the problem of selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. For that purpose, an efficient approach is presented that combines evolutionary techniques, based on genetic algorithms, with support vector machine algorithms to discriminate differences between assisted and non-assisted gait with a walker with forearm supports. For comparison purposes, other classification algorithms are also evaluated. Results with healthy subjects show that the main differences are characterized by balance and joint excursion in the sagittal plane. These results, confirmed by clinical evidence, allow concluding that this technique is an efficient feature selection approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    Directory of Open Access Journals (Sweden)

    Ibrahim Baz

    2008-04-01

    Full Text Available This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm implements line thinning and simple neighborhood methods to perform vectorization. The model allows users to define criteria that govern the vectorization process. With this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented in software and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can vectorize the specified raster data quickly and accurately.

  13. Vector-Quantization using Information Theoretic Concepts

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

    2005-01-01

    The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as those mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact...
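
    For contrast with the potential-field formulation, the LBG baseline mentioned above fits in a few lines: alternate nearest-codeword assignment with centroid updates. This is the textbook scheme on toy data, not the paper's potential-field algorithm.

```python
# Minimal LBG-style vector quantization loop (toy data).
import numpy as np

data = np.random.randn(2000, 2)                 # vectors to be quantized
rng = np.random.default_rng(1)
codebook = data[rng.choice(len(data), 16, replace=False)]   # 16 codewords

for _ in range(20):
    # Assignment step: nearest codeword for every data vector.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    # Update step: move each codeword to the centroid of its members.
    for k in range(len(codebook)):
        members = data[nearest == k]
        if len(members):
            codebook[k] = members.mean(axis=0)
```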

  14. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    Directory of Open Access Journals (Sweden)

    Daqing Zhang

    2015-01-01

    Full Text Available The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method widely used in QSAR studies. For a successful SVM model, the kernel parameters for the SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play an important role in BBB penetration. Among these properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it.
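
    The joint search over kernel parameters and feature subsets can be sketched with a compact GA in which an individual is a feature bitmask plus a kernel width. Everything below (data, encoding, GA settings, use of SVR) is an illustrative assumption rather than the paper's setup.

```python
# Compact GA jointly selecting SVM features and RBF gamma (illustrative).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=150, n_features=30, noise=5.0, random_state=0)
rng = np.random.default_rng(0)
n_feat, pop_size = X.shape[1], 20

def fitness(mask, log_gamma):
    if not mask.any():
        return -np.inf
    model = SVR(kernel="rbf", gamma=10.0**log_gamma)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

# An individual is (feature bitmask, log10 of gamma).
pop = [(rng.random(n_feat) < 0.5, rng.uniform(-4, 0)) for _ in range(pop_size)]
for gen in range(15):
    scored = sorted(pop, key=lambda ind: fitness(*ind), reverse=True)
    pop = scored[:pop_size // 2]                    # truncation selection
    while len(pop) < pop_size:                      # crossover + mutation
        i, j = rng.integers(len(scored) // 2, size=2)
        (m1, g1), (m2, g2) = scored[i], scored[j]
        cut = int(rng.integers(1, n_feat))
        mask = np.concatenate([m1[:cut], m2[cut:]])
        mask = mask ^ (rng.random(n_feat) < 0.02)   # bit-flip mutation
        pop.append((mask, (g1 + g2) / 2 + rng.normal(0.0, 0.1)))

best_mask, best_gamma = max(pop, key=lambda ind: fitness(*ind))
print(int(best_mask.sum()), "features, gamma =", 10.0**best_gamma)
```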

  15. Vectorization and multitasking with a Monte-Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-04-01

    This paper summarizes two improvements of a Monte Carlo code obtained by resorting to vectorization and multitasking techniques. After a short presentation of the physical problem to solve and a description of the main difficulties in producing efficient code, this paper introduces the vectorization principles employed and briefly describes how the vectorized algorithm works. Next, measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared. The second part of this paper is devoted to the multitasking technique. Starting from the standard multitasking tools available with FORTRAN on the CRAY X-MP/4, a multitasked algorithm and its measured speed-ups are presented. In conclusion we show that vector and parallel computers are a great opportunity for such Monte Carlo algorithms.

  16. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    Science.gov (United States)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states and simplify the propagation processes. Furthermore, it is shown that, under a small-angle approximation between attitude update periods, the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  17. Vector condensate model of electroweak interactions

    International Nuclear Information System (INIS)

    Cynolter, G.; Pocsik, G.

    1997-01-01

    Motivated by the fact that the Higgs is not seen, a new version of the standard model is proposed where the scalar doublet is replaced by a vector doublet and its neutral member forms a nonvanishing condensate. Gauge fields are coupled to the new vector fields B in a gauge invariant way leading to mass terms for the gauge fields by condensation. The model is presented and some implications are discussed. (K.A.)

  18. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  19. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  20. Preliminary study on helical CT algorithms for patient motion estimation and compensation

    International Nuclear Information System (INIS)

    Wang, G.; Vannier, M.W.

    1995-01-01

    Helical computed tomography (helical/spiral CT) has replaced conventional CT in many clinical applications. In current helical CT, a patient is assumed to be rigid and motionless during scanning, and planar projection sets are produced from raw data via longitudinal interpolation. However, rigid patient motion is a problem in some cases (such as in skull base and temporal bone imaging). The motion artifacts thus generated in reconstructed images can prevent accurate diagnosis. Modeling a uniform translational movement, the authors address how patient motion is ascertained and how it may be compensated. First, the mismatch between adjacent fan-beam projections of the same orientation is determined via classical correlation; this mismatch is approximately proportional to the patient displacement projected onto an axis orthogonal to the central ray of the involved fan-beam. Then, the patient motion vector (the patient displacement per gantry rotation) is estimated from its projections using a least-square-root method. To suppress motion artifacts, adaptive interpolation algorithms are developed that synthesize full-scan and half-scan planar projection data sets, respectively. In the adaptive scheme, the interpolation is performed along inclined paths dependent upon the patient motion vector. The simulation results show that the patient motion vector can be accurately and reliably estimated using the correlation and least-square-root algorithm, patient motion artifacts can be effectively suppressed via adaptive interpolation, and adaptive half-scan interpolation is advantageous compared with its full-scan counterpart in terms of high-contrast image resolution.

  1. Attenuated Vector Tomography -- An Approach to Image Flow Vector Fields with Doppler Ultrasonic Imaging

    International Nuclear Information System (INIS)

    Huang, Qiu; Peng, Qiyu; Huang, Bin; Cheryauka, Arvi; Gullberg, Grant T.

    2008-01-01

    The measurement of flow obtained using continuous wave Doppler ultrasound is formulated as a directional projection of a flow vector field. When a continuous ultrasound wave bounces against a flowing particle, a signal is backscattered. This signal obtains a Doppler frequency shift proportional to the speed of the particle along the ultrasound beam. This occurs for each particle along the beam, giving rise to a Doppler velocity spectrum. The first moment of the spectrum provides the directional projection of the flow along the ultrasound beam. Signals reflected from points further away from the detector will have lower amplitude than signals reflected from points closer to the detector. The effect is very much akin to that modeled by the attenuated Radon transform in emission computed tomography. A least-squares method was adopted to reconstruct a 2D vector field from directional projection measurements. Attenuated projections of only the longitudinal projections of the vector field were simulated. The components of the vector field were reconstructed using the gradient algorithm to minimize a least-squares criterion. This result was compared with the reconstruction of longitudinal projections of the vector field without attenuation. If attenuation is known, the algorithm was able to accurately reconstruct both components of the full vector field from only one set of directional projection measurements. A better reconstruction was obtained with attenuation than without attenuation, implying that attenuation provides important information for the reconstruction of flow vector fields. This confirms previous work where we showed that knowledge of the attenuation distribution helps in the reconstruction of MRI diffusion tensor fields from fewer than the required measurements. In the application of ultrasound, the attenuation distribution is obtained with pulse wave transmission computed tomography and flow information is obtained with continuous wave Doppler.

  2. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) the development of an iteration procedure that avoids the storage of FCI-size vectors; and (ii) the development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations, which otherwise may be a bottleneck of the procedure, can be skipped. At point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits FCI calculations to be performed on single-node workstations for systems previously accessible only by supercomputers.
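
    The essence of point (ii), keeping all vectors sparse by dropping negligible coefficients after each application of the operator, can be sketched with a generic sparse matrix standing in for the Hamiltonian. This is a schematic of the idea only, not the SFCI code.

```python
# Sparse vector kept sparse by a dropout threshold after each matvec.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm as sparse_norm

n = 10_000
H = sp.random(n, n, density=1e-4, random_state=0, format="csr")
H = 0.5 * (H + H.T)                       # symmetric toy "Hamiltonian"

v = sp.random(n, 1, density=1e-3, random_state=1, format="csr")
for _ in range(10):                       # power-iteration-like sweeps
    w = H @ v                             # sparse-times-sparse product
    w.data[np.abs(w.data) < 1e-8] = 0.0   # dropout threshold
    w.eliminate_zeros()                   # keep the result truly sparse
    nrm = sparse_norm(w)
    if nrm == 0:
        break
    v = w / nrm
print(v.nnz, "nonzeros retained")
```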

  3. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information in the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual in the first place of each generation goes directly into the next generation, and the best individual in the second position participates in the crossover and mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks on the watermarked image (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)), the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence is a successful candidate for a novel image watermarking scheme meeting real-time requirements.

  4. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm.

  5. ALGORITHMS FOR TETRAHEDRAL NETWORK (TEN) GENERATION

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The Tetrahedral Network (TEN) is a powerful 3-D vector structure in GIS, which has many advantages, such as a simple structure, fast topological relation processing and rapid visualization. The difficulty in applying TEN is automatically creating the data structure. Although a raster algorithm has been introduced by some authors, problems in accuracy, memory requirements, speed and integrity remain. In this paper, after a 3-D data model and the structure of TEN have been introduced, the raster algorithm is completed and a vector algorithm is presented. Finally, experiments, conclusions and future work are discussed.

  6. Vectorization of linear discrete filtering algorithms

    Science.gov (United States)

    Schiess, J. R.

    1977-01-01

    Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.

  7. A Turn-Projected State-Based Conflict Resolution Algorithm

    Science.gov (United States)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of the trajectory of an aircraft is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
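
    The non-linear projection can be sketched directly: estimate a turn rate from two successive velocity vectors and project the position along the implied arc rather than a straight line. The geometry below is an illustration of this idea with made-up numbers, not the algorithm defined in the paper.

```python
# Turn projection from state history: arc instead of straight line.
import numpy as np

def project(pos, vel_prev, vel_now, dt_hist, t):
    a0 = np.arctan2(vel_prev[1], vel_prev[0])
    a1 = np.arctan2(vel_now[1], vel_now[0])
    omega = (a1 - a0) / dt_hist                 # estimated turn rate (rad/s)
    speed = np.linalg.norm(vel_now)
    if abs(omega) < 1e-9:                       # straight-line fallback
        return pos + vel_now * t
    heading = a1 + omega * t                    # heading after time t
    r = speed / omega                           # signed turn radius
    return pos + r * np.array([np.sin(heading) - np.sin(a1),
                               np.cos(a1) - np.cos(heading)])

print(project(np.zeros(2), np.array([100.0, 0.0]),
              np.array([99.0, 5.0]), 1.0, 30.0))
```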

  8. Advances in the replacement and enhanced replacement method in QSAR and QSPR theories.

    Science.gov (United States)

    Mercader, Andrew G; Duchowicz, Pablo R; Fernández, Francisco M; Castro, Eduardo A

    2011-07-25

    The selection of an optimal set of molecular descriptors from a much greater pool of such regression variables is a crucial step in the development of QSAR and QSPR models. The aim of this work is to further improve this important selection process. For this reason three different alternatives for the initial steps of our recently developed enhanced replacement method (ERM) and replacement method (RM) are proposed. These approaches had previously proven to yield near-optimal results with a much smaller number of linear regressions than the full search. The algorithms were tested on four different experimental data sets, formed by collections of 116, 200, 78, and 100 experimental records from different compounds and 1268, 1338, 1187, and 1306 molecular descriptors, respectively. The comparisons showed that one of the new alternatives further improves the ERM, which has been shown to be superior to genetic algorithms for the selection of an optimal set of molecular descriptors from a much greater pool. The newly proposed alternative also improves the simpler, computationally cheaper RM.

  9. DNA Minicircle Technology Improves Purity of Adeno-associated Viral Vector Preparations

    Directory of Open Access Journals (Sweden)

    Maria Schnödt

    2016-01-01

    Full Text Available Adeno-associated viral (AAV) vectors are considered one of the most promising delivery systems in human gene therapy. In addition, AAV vectors are frequently applied tools in preclinical and basic research. Despite this success, manufacturing pure AAV vector preparations remains a difficult task. While empty capsids can be removed from vector preparations owing to their lower density, state-of-the-art purification strategies have so far failed to remove antibiotic resistance genes or other plasmid backbone sequences. Here, we report the development of minicircle (MC) constructs to replace AAV vector and helper plasmids for the production of both single-stranded (ss) and self-complementary (sc) AAV vectors. As bacterial backbone sequences are removed during MC production, encapsidation of prokaryotic plasmid backbone sequences is avoided. This is of particular importance for scAAV vector preparations, which contained a disproportionately high amount of plasmid backbone sequences (up to 26.1%, versus up to 2.9% for ssAAV). Replacing standard packaging plasmids by MC constructs not only allowed these contaminations to be reduced below the quantification limit, but in addition improved transduction efficiencies of scAAV preparations up to 30-fold. Thus, MC technology offers an easy-to-implement modification of standard AAV packaging protocols that significantly improves the quality of AAV vector preparations.

  10. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Qi, Yuan

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows prediction accuracy to be smoothly traded off against memory size. The virtual vector machine summarizes the information con...

  11. A study of biorthogonal multiple vector-valued wavelets

    International Nuclear Information System (INIS)

    Han Jincang; Cheng Zhengxing; Chen Qingjiang

    2009-01-01

    The notion of vector-valued multiresolution analysis is introduced, together with the concept of biorthogonal multiple vector-valued wavelets, which are wavelets for vector fields. It is proved that, as in the scalar and multiwavelet cases, the existence of a pair of biorthogonal multiple vector-valued scaling functions guarantees the existence of a pair of biorthogonal multiple vector-valued wavelet functions. An algorithm for constructing a class of compactly supported biorthogonal multiple vector-valued wavelets is presented. Their properties are investigated by means of operator theory, algebra theory and time-frequency analysis methods. Several biorthogonality formulas regarding these wavelet packets are obtained.

  12. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  13. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Isomorphism Theorem on Vector Spaces over a Ring

    Directory of Open Access Journals (Sweden)

    Futa Yuichi

    2017-10-01

    Full Text Available In this article, we formalize in the Mizar system [1, 4] some properties of vector spaces over a ring. We formally prove the first isomorphism theorem of vector spaces over a ring. We also formalize the product space of vector spaces. ℤ-modules are useful for lattice problems such as the LLL (Lenstra, Lenstra and Lovász) [5] basis reduction algorithm and cryptographic systems [6, 2].

  15. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

    International Nuclear Information System (INIS)

    Majumdar, A.; Makowitz, H.

    1987-10-01

    With the development of modern vector/parallel supercomputers and their lower-performance clones it has become possible to increase computational performance by several orders of magnitude compared to the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms with these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation that is optimized for vector/parallel architectures, not for the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of the iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit methods (i.e., marching) where stability permits. We call this approach the ''EPIC'' methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
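
    To make the Hopscotch idea concrete, the sketch below applies the classical odd-even Hopscotch scheme to the 1-D heat equation: on each step, points of one parity are advanced explicitly, after which the other parity's "implicit" update becomes explicit because its neighbours are already known. This is the textbook scheme, offered as an illustration of the report's starting point rather than its final algorithms.

```python
# Odd-even Hopscotch for u_t = alpha * u_xx on a 1-D grid (illustrative).
import numpy as np

nx, nt = 101, 500
alpha, dx, dt = 1.0, 1.0 / (nx - 1), 0.5e-4
r = alpha * dt / dx**2

u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition
idx = np.arange(1, nx - 1)                      # interior points

for n in range(nt):
    for parity in (n % 2, 1 - n % 2):
        pts = idx[idx % 2 == parity]
        if parity == n % 2:
            # Explicit sweep: neighbours still hold old-time values.
            u[pts] += r * (u[pts - 1] - 2 * u[pts] + u[pts + 1])
        else:
            # "Implicit" sweep: neighbours are already advanced, so the
            # implicit formula can be solved point-by-point, explicitly.
            u[pts] = (u[pts] + r * (u[pts - 1] + u[pts + 1])) / (1 + 2 * r)

print(u.max())   # decays roughly like exp(-pi**2 * alpha * nt * dt)
```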

  16. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distances. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
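
    A plain CPU reference implementation makes both the algorithm and its cost visible; each output pixel requires all pairwise distances within its window, which is exactly the arithmetic the GPU version parallelizes. This sketch is a generic reference, not the paper's CUDA code.

```python
# Reference vector median filter for a 3-channel image (CPU, brute force).
import numpy as np

def vector_median_filter(img, n=3):
    pad = n // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + n, j:j + n].reshape(-1, 3).astype(float)
            # Sum of distances from each vector to all others in the window;
            # the vector median minimizes this sum.
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[i, j] = win[d.sum(axis=1).argmin()]
    return out

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
filtered = vector_median_filter(img)
```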

  17. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

    DEFF Research Database (Denmark)

    Padmanaban, Sanjeevikumar; Grandi, Gabriele; Blaabjerg, Frede

    2015-01-01

    This paper considers a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs), with all four dc sources deliberately kept isolated. A modulation scheme based on the nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system, by observing the dynamic behavior under different designed conditions. Set...

  18. Exact Solutions for Internuclear Vectors and Backbone Dihedral Angles from NH Residual Dipolar Couplings in Two Media, and their Application in a Systematic Search Algorithm for Determining Protein Backbone Structure

    International Nuclear Information System (INIS)

    Wang Lincong; Donald, Bruce Randall

    2004-01-01

    We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics

  19. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for a Thin Solenoid with Uniform Current Density

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential Aθ is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside of the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing Bz and Aθ become infinite in this region of parameter space, fields for points with the axial coordinate z outside of the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties that the textbook solutions have for points near the axis, arising from explicit factors of 1/r or 1/r^2 in some of the expressions.
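
    Although the note's cel-based algorithm handles the delicate parameter regions, the basic construction, a radially thin solenoid as a stack of current loops each evaluated with complete elliptic integrals, can be sketched with textbook single-loop formulas. The expressions below are those standard loop formulas, not the note's algorithm, and all dimensions are made up.

```python
# Thin solenoid B_z as a sum of current-loop fields (textbook formulas).
import numpy as np
from scipy.special import ellipe, ellipk

MU0 = 4e-7 * np.pi

def loop_Bz(a, I, r, z):
    # Axial field of one loop (radius a, current I) at cylindrical (r, z).
    m = 4 * a * r / ((a + r)**2 + z**2)       # elliptic parameter m = k^2
    K, E = ellipk(m), ellipe(m)
    pre = MU0 * I / (2 * np.pi * np.sqrt((a + r)**2 + z**2))
    return pre * (K + (a**2 - r**2 - z**2) / ((a - r)**2 + z**2) * E)

def solenoid_Bz(a, length, n_turns, I, r, z, n_loops=400):
    zs = np.linspace(-length / 2, length / 2, n_loops)   # loop positions
    dI = n_turns * I / n_loops                           # current per loop
    return sum(loop_Bz(a, dI, r, z - zc) for zc in zs)

# Centre field of a long solenoid approaches mu0 * (turns/length) * I.
print(solenoid_Bz(a=0.05, length=1.0, n_turns=1000, I=2.0, r=0.0, z=0.0))
print(MU0 * 1000 * 2.0)                                  # ~2.51e-3 T
```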

  20. The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

    Directory of Open Access Journals (Sweden)

    Wu Qidi

    2014-01-01

    Full Text Available Due to the limited accuracy of conventional image restoration methods, this paper proposes a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the performance of restoration results, we propose two schemes for dictionary learning and sparse coding, respectively. In the dictionary learning part, we measure the similarity between patches from the degraded image by constructing a Shearlet feature vector. We then classify the patches into different classes by similarity and train a cluster dictionary for each class; cascading these yields the universal dictionary. In the sparse coding part, we propose a novel objective function with a coding-residual term, which suppresses the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter for the optimization under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure of nonlocal patches and of coding-residual suppression, the proposed method shows an advantage in both visual perception and PSNR compared to conventional methods.

  1. Approach to Accelerating Dissolved Vector Buffer Generation in Distributed In-Memory Cluster Architecture

    Directory of Open Access Journals (Sweden)

    Jinxin Shen

    2018-01-01

    The buffer generation algorithm is a fundamental function in GIS, identifying areas within a given distance of geographic features. Past research largely focused on buffer generation algorithms running in a stand-alone environment. Moreover, dissolved buffer generation is data- and computing-intensive, so improvements in a stand-alone environment are limited when considering large-scale vector data. Recent parallel dissolved vector buffer algorithms, meanwhile, suffer from scalability problems, leaving room for further optimization. At present, the prevailing in-memory cluster-computing framework, Spark, provides promising efficiency for computing-intensive analysis; however, it has seldom been researched for buffer analysis. On this basis, we propose a cluster-computing-oriented parallel dissolved vector buffer generation algorithm, called the HPBM, that contains a Hilbert-space-filling-curve-based data partition method, a data skew and cross-boundary object processing strategy, and a depth-given tree-like merging method. Experiments are conducted in both stand-alone and cluster environments using real-world vector data that include points and roads. Compared with some existing parallel buffer algorithms, as well as various popular GIS software, the HPBM achieves a performance gain of more than 50%.
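    The HPBM's partitioning and tree-like merging are specific to the paper, but the dissolved-buffer operation itself is easy to state. A minimal single-machine sketch in Python using the shapely library (hypothetical input features) shows the buffer-then-union pattern whose union step dominates the cost and motivates the parallel merge:

        from shapely.geometry import LineString, Point
        from shapely.ops import unary_union

        def dissolved_buffer(geometries, distance):
            """Buffer each feature, then dissolve all overlaps
            into a single geometry."""
            return unary_union([g.buffer(distance) for g in geometries])

        features = [Point(0, 0), Point(1.5, 0), LineString([(0, 2), (3, 2)])]
        result = dissolved_buffer(features, 1.0)
        print(result.geom_type, round(result.area, 2))

    A depth-given tree-like merge replaces the single unary_union call with pairwise unions of partition results, which is what makes the operation parallelizable on a cluster.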

  2. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, which are subsequently encoded by the Fisher vector (FV); (ii) to obtain representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling, and dense sampling; (iii) to embed both global and local spatial information into local features, we construct an improved spatial geometry structure that shows good performance; (iv) to reduce the storage and CPU costs of high-dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance-sorting algorithm. We report experimental results on the STL-10 dataset. This simple and efficient framework shows very promising performance compared to conventional methods.

  3. Adaptation of Rejection Algorithms for a Radar Clutter

    Directory of Open Access Journals (Sweden)

    D. Popov

    2017-09-01

    In this paper, algorithms for adaptive rejection of radar clutter are synthesized for the case of a priori unknown spectral-correlation characteristics under wobbulation of the radar signal repetition period. The synthesis of algorithms for a non-recursive adaptive rejection filter (ARF) of a given order reduces to determining the vector of weighting coefficients that realizes the best effectiveness index for extracting the radar signal of moving targets against the background of the received clutter. As the effectiveness criterion, we consider the improvement coefficient for the signal-to-clutter ratio (SCR), averaged over the Doppler signal phase shift. From the extremal properties of the characteristic numbers (eigenvalues) of matrices, the vector optimal according to this criterion is the eigenvector of the clutter correlation matrix corresponding to its minimal eigenvalue. The general form of the vector of optimal ARF weighting coefficients is determined, and specific adaptive algorithms depending on the ARF order are obtained; in special cases these reduce to known algorithms, confirming their validity. A comparative analysis of the synthesized and known algorithms is performed, establishing significant benefits in clutter rejection effectiveness for the proposed processing algorithms compared to the known ones.
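    The core numerical step is small enough to sketch. Assuming a real-valued clutter correlation matrix (a hypothetical exponential model below), the optimal ARF weight vector is simply the eigenvector for the smallest eigenvalue:

        import numpy as np

        def arf_weights(R):
            """Weight vector of the adaptive rejection filter: the
            eigenvector of the clutter correlation matrix R belonging
            to its smallest eigenvalue (eigh sorts eigenvalues in
            ascending order)."""
            _, V = np.linalg.eigh(R)
            return V[:, 0]

        rho = 0.95  # hypothetical one-lag clutter correlation
        R = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
        w = arf_weights(R)
        print(w, w @ R @ w)  # residual clutter power = smallest eigenvalue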

  4. Direct vector controlled six-phase asymmetrical induction motor with power balanced space vector PWM multilevel operation

    DEFF Research Database (Denmark)

    Padmanaban, Sanjeevi Kumar; Grandi, Gabriele; Ojo, Joseph Olorunfemi

    2016-01-01

    In this paper, a six-phase (asymmetrical) machine is investigated, with a 30° phase displacement between its two three-phase stator windings, which are deliberately kept in an open-end configuration. The power supply consists of four classical three-phase voltage source inverters (VSIs), each one connected to the open-winding terminals. An original synchronous field oriented control (FOC) algorithm with three variables as degrees of freedom is proposed, allowing power sharing among the four VSIs in symmetric/asymmetric conditions. A standard three-level space vector pulse width modulation (SVPWM) by nearest three vector (NTV) approach was adopted for each couple of VSIs to operate as multilevel output voltage generators. The proposed power sharing algorithm is verified for the ac drive system by observing the dynamic behaviour under different set conditions through complete simulation modelling in software (Matlab...

  5. ILUCG algorithm which minimizes in the Euclidean norm

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1978-07-01

    An algorithm is presented which solves sparse systems of linear equations of the form Ax = Y, where A is non-symmetric, by the Incomplete LU Decomposition-Conjugate Gradient (ILUCG) method. The algorithm minimizes the error in the Euclidean norm ||x_i - x||_2, where x_i is the solution vector after the i-th iteration and x the exact solution vector. The results of a test on one real problem indicate that the algorithm is likely to be competitive with the best existing algorithms of its type.
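    The original ILUCG variant is not available in standard libraries, but the same ILU-preconditioning idea is easy to demonstrate with SciPy, using GMRES in place of the paper's conjugate gradient variant for a non-symmetric system (a hypothetical 1-D convection-diffusion matrix below):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, gmres, spilu

        n = 200  # hypothetical sparse non-symmetric test system
        A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
        y = np.ones(n)

        ilu = spilu(A)                                # incomplete LU of A
        M = LinearOperator((n, n), matvec=ilu.solve)  # M approximates A^-1

        x, info = gmres(A, y, M=M)
        print(info, np.linalg.norm(A @ x - y))        # info == 0: converged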

  6. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each method has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of various features and infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids randomness in the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.

  7. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

    Directory of Open Access Journals (Sweden)

    P. Kuppusamy

    2014-09-01

    A cooperative caching approach improves data accessibility and reduces query latency in a Mobile Ad hoc Network (MANET). Maintaining the cache is a challenging issue in a large MANET due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate better replaceable data, based on neighbours' interest and the fitness value of cached data, when storing newly arrived data. This work also elects an ideal cluster head (CH) using the metaheuristic-search Ant Colony Optimization algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared to the existing approach as the number of nodes and the speed increase.
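    For reference, the LRU baseline that the memetic replacement policy is compared against can be sketched in a few lines of Python (a generic cache, not the paper's MANET implementation):

        from collections import OrderedDict

        class LRUCache:
            """Least-recently-used replacement: on overflow, evict the
            entry accessed longest ago."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = OrderedDict()

            def get(self, key):
                if key not in self.store:
                    return None
                self.store.move_to_end(key)        # now most recently used
                return self.store[key]

            def put(self, key, value):
                if key in self.store:
                    self.store.move_to_end(key)
                self.store[key] = value
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)  # evict LRU entry

        cache = LRUCache(2)
        cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
        print(list(cache.store))  # ['a', 'c'] -- 'b' was evicted

    The memetic approach replaces this purely recency-based choice with a fitness value computed from neighbours' interest in the cached items.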

  8. A new model of flavonoids affinity towards P-glycoprotein: genetic algorithm-support vector machine with features selected by a modified particle swarm optimization algorithm.

    Science.gov (United States)

    Cui, Ying; Chen, Qinggang; Li, Yaxiao; Tang, Ling

    2017-02-01

    Flavonoids exhibit a high affinity for the purified cytosolic NBD (C-terminal nucleotide-binding domain) of P-glycoprotein (P-gp). To explore the affinity of flavonoids for P-gp, quantitative structure-activity relationship (QSAR) models were developed using support vector machines (SVMs). A novel method coupling a modified particle swarm optimization algorithm with a random mutation strategy and a genetic algorithm, combined with SVM, is proposed to simultaneously optimize the kernel parameters of the SVM and determine the subset of optimized features for the first time. Using DRAGON descriptors to represent the compounds, three subsets (training, prediction and external validation sets) derived from the dataset were employed to investigate the QSAR. After excluding the outlier, the correlation coefficient (R²) of the whole training set (training and prediction) was 0.924, and the R² of the external validation set was 0.941. The root-mean-square error (RMSE) of the whole training set was 0.0588; the RMSE of the cross-validation of the external validation set was 0.0443. The mean Q² value of leave-many-out cross-validation was 0.824. Together with the results of randomization analysis and the applicability domain, these results indicate that the proposed model has good predictive ability and stability.

  9. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n³) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time of certain dynamic programming algorithms by a multiplicative factor after a preprocessing step in which solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n³/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once, prior to the algorithm computation. We call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n²) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n²/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
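    The O(n³) baseline being accelerated is compact enough to show. A plain-Python Nussinov sketch (maximum non-crossing pairings, no minimum loop length enforced) illustrates the triple loop that the two-vector method speeds up:

        def nussinov(seq):
            """Nussinov DP over Watson-Crick and G-U wobble pairs."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            D = [[0] * n for _ in range(n)]
            for span in range(1, n):
                for i in range(n - span):
                    j = i + span
                    best = D[i + 1][j]                    # i unpaired
                    if (seq[i], seq[j]) in pairs:         # i pairs with j
                        inner = D[i + 1][j - 1] if i + 1 <= j - 1 else 0
                        best = max(best, inner + 1)
                    for k in range(i + 1, j):             # bifurcation
                        best = max(best, D[i][k] + D[k + 1][j])
                    D[i][j] = best
            return D[0][n - 1]

        print(nussinov("GGGAAAUCC"))  # 3 pairings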

  10. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper, a ball vector machine (BVM) is used for on-line transient stability assessment of large-scale power systems. To classify the system transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm has a very small training time and space requirement in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine-learning-based algorithms. In addition, the proposed algorithm has fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. One of the main points in applying a machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique is presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for the on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.

  11. Feature Selection and Parameter Optimization of Support Vector Machines Based on Modified Artificial Fish Swarm Algorithms

    Directory of Open Access Journals (Sweden)

    Kuan-Cheng Lin

    2015-01-01

    Rapid advances in information and communication technology have made ubiquitous computing and the Internet of Things popular and practicable. These applications create enormous volumes of data, which are available for analysis and classification as an aid to decision-making. Among the classification methods used to deal with big data, feature selection has proven particularly effective. One common approach involves searching for a subset of features that are the most relevant to the topic or represent the most accurate description of the dataset. Unfortunately, searching through this kind of subset is a combinatorial problem that can be very time consuming. Metaheuristic algorithms are commonly used to facilitate the selection of features. The artificial fish swarm algorithm (AFSA) employs the intelligence underlying fish swarming behavior to solve combinatorial optimization problems. AFSA has proven highly successful in a diversity of applications; however, shortcomings remain, such as the likelihood of falling into a local optimum and a lack of diversity. This study proposes a modified AFSA (MAFSA) to improve feature selection and parameter optimization for support vector machine classifiers. Experimental results demonstrate the superiority of MAFSA in classification accuracy, using subsets with fewer features, for given UCI datasets, compared to the original AFSA.

  12. Conjugate gradient algorithms using multiple recursions

    Energy Technology Data Exchange (ETDEWEB)

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
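    For contrast, the classical single short recursion is visible in a textbook conjugate gradient for symmetric positive definite systems, where each new direction vector is built from the current residual and only the immediately preceding direction:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10):
            """Textbook CG; note the one-term recursion for p."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(len(b)):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p   # single short recursion
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]

    The multiple-recursion algorithms studied here generalize this p-update to recurrences involving several previous direction vectors.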

  13. Performance of the CODEQ Algorithm in Solving the Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Annisa Kesy Garside

    2014-01-01

    Genetic Algorithm, Tabu Search, Simulated Annealing, and Ant Colony Optimization have shown good performance in solving the vehicle routing problem. However, the solutions generated by those algorithms vary with the input parameters of each algorithm. CODEQ is a new, parameter-free meta-heuristic algorithm that has been used successfully to solve constrained optimization problems, integer programming, and feed-forward neural network training. The purpose of this research is to adapt the CODEQ algorithm to the vehicle routing problem and to test the performance of the improved algorithm. The CODEQ algorithm starts with population initialization as the initial solution; in every iteration a mutant vector is generated for each parent, the parent is replaced by the mutant when the mutant's fitness value is better, a new vector is generated based on opposition values or the chaos principle, the worst solution is replaced by the new vector when the new vector's fitness value is better, iteration ceases when the stopping criterion is reached, and sub-tours are determined subject to the vehicle capacity constraint. The results show that the average deviation between the best-known and the best-test values is 6.35%. Therefore, the CODEQ algorithm is good at solving the vehicle routing problem.
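    The mutant-generation and greedy-replacement steps described above follow the differential-evolution pattern. A hedged sketch (hypothetical sphere objective; not the paper's VRP encoding or its exact CODEQ operators):

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(x):                  # hypothetical objective (minimize)
            return float(np.sum(x ** 2))

        def codeq_like_step(pop, F=0.5):
            """One iteration: DE-style mutants with greedy parent
            replacement, then an opposition-based candidate that
            replaces the current worst member if it is better."""
            n = len(pop)
            for i in range(n):
                r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                        size=3, replace=False)
                mutant = pop[r1] + F * (pop[r2] - pop[r3])
                if fitness(mutant) < fitness(pop[i]):
                    pop[i] = mutant
            worst = max(range(n), key=lambda i: fitness(pop[i]))
            opposite = pop.min(0) + pop.max(0) - pop[worst]
            if fitness(opposite) < fitness(pop[worst]):
                pop[worst] = opposite
            return pop

        pop = rng.uniform(-5, 5, size=(20, 4))
        for _ in range(100):
            pop = codeq_like_step(pop)
        print(min(fitness(x) for x in pop))  # near 0 after 100 iterations

    For the VRP itself, each vector is decoded into sub-tours by cutting the visit sequence wherever the vehicle capacity constraint would be violated.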

  14. Vectorization of Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.

    1989-01-01

    This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP

  15. Lyapunov Function Synthesis - Algorithm and Software

    DEFF Research Database (Denmark)

    Leth, Tobias; Sloth, Christoffer; Wisniewski, Rafal

    2016-01-01

    In this paper we introduce an algorithm for the synthesis of polynomial Lyapunov functions for polynomial vector fields. The Lyapunov function is a continuous piecewise polynomial defined on a collection of simplices. The algorithm is elaborated and crucial features are ex...

  16. Some Algorithms for the Conditional Mean Vector and Covariance Matrix

    Directory of Open Access Journals (Sweden)

    John F. Monahan

    2006-08-01

    We consider here the problem of computing the mean vector and covariance matrix of a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
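    The direct (non-sweep) computation is a two-line linear algebra identity; a numpy sketch with a hypothetical 3-variable covariance matrix:

        import numpy as np

        def conditional_normal(mu, Sigma, idx2, x2):
            """Mean and covariance of x1 | x2 for a joint normal:
            mu_c    = mu1 + S12 S22^{-1} (x2 - mu2)
            Sigma_c = S11 - S12 S22^{-1} S21
            """
            idx1 = np.setdiff1d(np.arange(len(mu)), idx2)
            S11 = Sigma[np.ix_(idx1, idx1)]
            S12 = Sigma[np.ix_(idx1, idx2)]
            S22 = Sigma[np.ix_(idx2, idx2)]
            W = np.linalg.solve(S22, S12.T).T       # S12 S22^{-1}
            return mu[idx1] + W @ (x2 - mu[idx2]), S11 - W @ S12.T

        mu = np.array([0.0, 1.0, 2.0])
        Sigma = np.array([[2.0, 0.5, 0.3],
                          [0.5, 1.0, 0.2],
                          [0.3, 0.2, 1.5]])
        print(conditional_normal(mu, Sigma, np.array([2]), np.array([3.0])))

    The sweep operator's appeal is that, when the set of conditioning variables changes one variable at a time, it updates this result in place instead of re-solving from scratch.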

  17. A Hybrid Seasonal Mechanism with a Chaotic Cuckoo Search Algorithm with a Support Vector Regression Model for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yongquan Dong

    2018-04-01

    Providing accurate electric load forecasting results plays a crucial role in the daily energy management of the power supply system. Due to its superior forecasting performance, hybridizing the support vector regression (SVR) model with evolutionary algorithms has received attention and deserves to be explored widely. The cuckoo search (CS) algorithm has the potential to contribute more satisfactory electric load forecasting results. However, the original CS algorithm suffers from inherent drawbacks, such as parameters that require accurate setting, loss of population diversity, and easy trapping in local optima (i.e., premature convergence). Therefore, proposing critical improvement mechanisms and employing an improved CS algorithm to determine suitable parameter combinations for an SVR model is essential. This paper proposes the SVR with chaotic cuckoo search (SVRCCS) model, based on a tent chaotic mapping function to enrich the cuckoo search space and diversify the population to avoid trapping in local optima. In addition, to deal with the cyclic nature of electric loads, a seasonal mechanism is combined with the SVRCCS model, giving a seasonal SVR with chaotic cuckoo search (SSVRCCS) model, to produce more accurate forecasting performance. The numerical results, tested using datasets from the National Electricity Market (NEM, Queensland, Australia) and the New York Independent System Operator (NYISO, NY, USA), show that the proposed SSVRCCS model outperforms the alternative models.
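    The tent chaotic map at the heart of the CCS mechanism is a one-liner; a sketch of how it can replace uniform random draws when diversifying candidate positions:

        def tent_map(x, n):
            """Generate n points of the tent map on (0, 1); successive
            values are deterministic but spread chaotically, which is
            what enriches the cuckoo search space."""
            seq = []
            for _ in range(n):
                x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
                seq.append(x)
            return seq

        print(tent_map(0.37, 5))  # approx. [0.74, 0.52, 0.96, 0.08, 0.16]

    (In floating point the map can collapse onto 0 over long runs, so implementations typically perturb the state or restart the seed.)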

  18. Selection vector filter framework

    Science.gov (United States)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
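    The prototypical member of this class, the vector median, is easy to state: output the input sample with the smallest aggregate distance to all other samples. A numpy sketch on a hypothetical window of RGB pixels:

        import numpy as np

        def vector_median(vectors):
            """Return the sample minimizing the sum of Euclidean
            distances to all other samples (the lowest ranked vector)."""
            X = np.asarray(vectors, dtype=float)
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            return X[np.argmin(D.sum(axis=1))]

        window = [(10, 12, 11), (11, 12, 10), (9, 11, 12), (255, 0, 0),
                  (10, 10, 10), (12, 13, 11), (11, 11, 11), (10, 12, 12),
                  (9, 10, 11)]
        print(vector_median(window))  # the impulsive outlier is rejected

    The weighted variants in the framework scale the rows of the distance matrix D, and the directional filters replace Euclidean distance with the angle between vectors.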

  19. Community detection in complex networks using proximate support vector clustering

    Science.gov (United States)

    Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing

    2018-03-01

    Community structure, one of the most attention-attracting properties of complex networks, has been a cornerstone of advances in various scientific branches. A number of tools have been involved in recent studies concentrating on community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, owing to which the introduced algorithm surpasses the traditional support vector approach in both accuracy and complexity. Results of extensive experiments on computer-generated networks and real-world data sets illustrate competent performance in comparison with the other counterparts.

  20. Optimized support vector regression for drilling rate of penetration estimation

    Science.gov (United States)

    Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa

    2015-12-01

    In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personnel safety, environmental protection, adequate information on penetrated formations and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called 'optimized support vector regression' is employed for building a formulation between the input variables and ROP. The algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization improved the support vector regression performance by selecting proper values for its parameters. In order to evaluate the ability of the optimization algorithms to enhance SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG), which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved further improvement in the prediction accuracy of SVR compared to the GA and HPG. Moreover, the predictive model derived from the back propagation neural network (BPNN), the traditional approach for estimating ROP, was selected for comparison with CSSVR. The comparative results revealed the superiority of CSSVR. This study infers that CSSVR is a viable option for precise estimation of ROP.

  1. Integrating Transgenic Vector Manipulation with Clinical Interventions to Manage Vector-Borne Diseases.

    Directory of Open Access Journals (Sweden)

    Kenichi W Okamoto

    2016-03-01

    Many vector-borne diseases lack effective vaccines and medications, and the limitations of traditional vector control have inspired novel approaches based on using genetic engineering to manipulate vector populations and thereby reduce transmission. Yet both the short- and long-term epidemiological effects of these transgenic strategies are highly uncertain. If neither vaccines, medications, nor transgenic strategies can by themselves suffice for managing vector-borne diseases, integrating these approaches becomes key. Here we develop a framework to evaluate how clinical interventions (i.e., vaccination and medication) can be integrated with transgenic vector manipulation strategies to prevent disease invasion and reduce disease incidence. We show that the ability of clinical interventions to accelerate disease suppression can depend on the nature of the transgenic manipulation deployed (e.g., whether vector population reduction or replacement is attempted). We find that making a specific, individual strategy highly effective may not be necessary for attaining public-health objectives, provided suitable combinations can be adopted. However, we show how combining only partially effective antimicrobial drugs or vaccination with transgenic vector manipulations that merely temporarily lower vector competence can amplify disease resurgence following transient suppression. Thus, transgenic vector manipulation that cannot be sustained can have adverse consequences, consequences which ineffective clinical interventions can at best only mitigate, and at worst temporarily exacerbate. This result, which arises from differences between the time scale on which the interventions affect disease dynamics and the time scale of host population dynamics, highlights the importance of accounting for the potential delay in the effects of deploying public health strategies on long-term disease incidence. We find that for systems at the disease-endemic equilibrium, even...

  2. Short-Term Load Forecasting Based on Wavelet Transform and Least Squares Support Vector Machine Optimized by Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Electric power is a kind of unstorable energy concerning national welfare and people's livelihood, and its stability is attracting more and more attention. Because the short-term power load is always disturbed by various external factors, with characteristics like high volatility and instability, a single model is not suitable for short-term load forecasting due to low accuracy. In order to solve this problem, this paper proposes a new model for short-term load forecasting based on wavelet transform and the least squares support vector machine (LSSVM), optimized by the fruit fly optimization algorithm (FOA). The wavelet transform is used to remove error points and enhance the stability of the data. The fruit fly algorithm is applied to optimize the parameters of the LSSVM, avoiding randomness and inaccuracy in parameter setting. The results of the implementation of short-term load forecasting demonstrate that the hybrid model can be used for short-term forecasting of the power system.

  3. Icing Forecasting of High Voltage Transmission Line Using Weighted Least Square Support Vector Machine with Fireworks Algorithm for Feature Selection

    Directory of Open Access Journals (Sweden)

    Tiannan Ma

    2016-12-01

    Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and the weighted least squares support vector machine (W-LSSVM). The fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the W-LSSVM model is used to train and test the historical dataset with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has higher prediction accuracy and may be a promising alternative for icing thickness forecasting.

  4. Ultrasound Vector Flow Imaging: Part I: Sequential Systems

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

    2016-01-01

    The paper gives a review of the most important methods for blood velocity vector flow imaging (VFI) for conventional, sequential data acquisition. This includes multibeam methods, speckle tracking, transverse oscillation, color flow mapping derived vector flow imaging, directional beamforming, and variants of these. The review covers both 2-D and 3-D velocity estimation and gives a historical perspective on the development, along with a summary of various vector flow visualization algorithms. The current state-of-the-art is explained along with an overview of clinical studies conducted and methods...

  5. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns and, meanwhile, to maximize the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which may be computationally prohibitive for large-scale problems. Using a (1+ε)- and (1−ε)-approximation algorithm, the core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  6. Multi-robot task allocation based on two dimensional artificial fish swarm algorithm

    Science.gov (United States)

    Zheng, Taixiong; Li, Xueqin; Yang, Liangyi

    2007-12-01

    The problem of task allocation for multiple robots is to allocate more relative-tasks to less relative-robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish in which each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed, and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the problem of multi-robot task allocation and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.

  7. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as a reference input, and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that the PSNR can be improved for mobile devices without degrading quality. The proposed algorithm also uses less memory compared to the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in section 6.

  8. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms, including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM.

  9. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms, including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM.

  10. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.

  11. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of the support vector regression. The fusion coefficients of the mixed kernel function, the kernel parameters, and the regression parameters are combined as the state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with single kernel functions, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  12. Application of Hybrid Quantum Tabu Search with Support Vector Regression (SVR for Load Forecasting

    Directory of Open Access Journals (Sweden)

    Cheng-Wen Lee

    2016-10-01

    Hybridizing chaotic evolutionary algorithms with support vector regression (SVR) to improve forecasting accuracy is a hot topic in electricity load forecasting. Trapping at local optima and premature convergence are critical shortcomings of the tabu search (TS) algorithm. This paper investigates potential improvements of the TS algorithm by applying quantum computing mechanics to enhance the search information sharing mechanism (tabu memory) and improve the forecasting accuracy. This article presents an SVR-based load forecasting model that integrates quantum behaviors and the TS algorithm with the support vector regression model (namely, SVRQTS) to obtain a more satisfactory forecasting accuracy. Numerical examples demonstrate that the proposed model outperforms the alternatives.

  13. Improved stability and performance from sigma-delta modulators using 1-bit vector quantization

    DEFF Research Database (Denmark)

    Risbo, Lars

    1993-01-01

    A novel class of sigma-delta modulators is presented. The usual scalar 1-b quantizer in a sigma-delta modulator is replaced by a 1-b vector quantizer with an N-dimensional input state-vector from the linear feedback filter. Generally, the vector quantizer changes the nonlinear dynamics of the modulator, and a proper choice of vector quantizer can improve both system stability and coding performance. It is shown how to construct the vector quantizer in order to limit the excursions in state-space. The proposed method is demonstrated graphically for a simple second-order modulator...

  14. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, a filter and interpolation method, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared to 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve measurement accuracy when imaging flow fields with high velocity gradients.

  15. Identification of human semiochemicals attractive to the major vectors of onchocerciasis.

    Science.gov (United States)

    Young, Ryan M; Burkett-Cadena, Nathan D; McGaha, Tommy W; Rodriguez-Perez, Mario A; Toé, Laurent D; Adeleke, Monsuru A; Sanfo, Moussa; Soungalo, Traore; Katholi, Charles R; Noblet, Raymond; Fadamiro, Henry; Torres-Estrada, Jose L; Salinas-Carmona, Mario C; Baker, Bill; Unnasch, Thomas R; Cupp, Eddie W

    2015-01-01

    Entomological indicators are considered key metrics to document the interruption of transmission of Onchocerca volvulus, the etiological agent of human onchocerciasis. Human landing collection is the standard employed for collecting the vectors of this parasite. Recent studies reported the development of traps that have the potential to replace humans for surveillance of O. volvulus in the vector population. However, the key chemical components of human odor that are attractive to vector black flies had not been identified. Human sweat compounds were analyzed using GC-MS, and compounds common to three individuals were identified. These common compounds, together with others previously identified as attractive to other hematophagous arthropods, were evaluated for their ability to stimulate and attract the major onchocerciasis vectors in Africa (Simulium damnosum sensu lato) and Latin America (Simulium ochraceum s. l.), using electroantennography and a Y-tube binary-choice assay. Medium-chain-length carboxylic acids and aldehydes were neurostimulatory for S. damnosum s.l., while S. ochraceum s.l. was stimulated by short-chain aliphatic alcohols and aldehydes. Both species were attracted to ammonium bicarbonate and acetophenone. The compounds were shown to be attractive to the relevant vector species in field studies when incorporated into a formulation that permitted a continuous release of the compound over time and used together with previously developed trap platforms. The identification of compounds attractive to the major vectors of O. volvulus will permit the development of optimized traps. Such traps may replace the use of human vector collectors for monitoring the effectiveness of onchocerciasis elimination programs and could serve as a component of integrated vector control/drug programs aimed at eliminating river blindness in Africa.

  16. Partial Transmit Sequence Optimization Using Improved Harmony Search Algorithm for PAPR Reduction in OFDM

    Directory of Open Access Journals (Sweden)

    Mangal Singh

    2017-12-01

    This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the Peak-to-Average Power Ratio (PAPR) of an Orthogonal Frequency Division Multiplexing signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that the improved harmony search based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure compared with conventional algorithms.
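    For concreteness, PAPR and the PTS idea can be sketched in a few lines of numpy (hypothetical 256-subcarrier QPSK symbol; real PTS uses more sub-blocks and a richer phase set):

        import numpy as np

        rng = np.random.default_rng(1)

        def papr_db(x):
            """Peak-to-average power ratio of a baseband signal in dB."""
            p = np.abs(x) ** 2
            return 10.0 * np.log10(p.max() / p.mean())

        # OFDM symbol: QPSK subcarriers -> time domain via IFFT
        X = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
        print(f"plain PAPR = {papr_db(np.fft.ifft(X)):.2f} dB")

        # PTS with two sub-blocks and phases {+1, -1}: keep the best rotation
        best = min(papr_db(np.fft.ifft(np.concatenate([X[:128] * p1, X[128:] * p2])))
                   for p1 in (1, -1) for p2 in (1, -1))
        print(f"PTS PAPR   = {best:.2f} dB")

    The search space grows exponentially with the number of sub-blocks and allowed phases, which is exactly what the IHS metaheuristic is used to explore.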

  17. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of the learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust; however, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error, in terms of a convex combination of the initial error and the bound on the noise, is obtained.
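    The Widrow-Hoff update in question is a single line; a numpy sketch with hypothetical noisy linear data, using the constant-gain form analyzed here:

        import numpy as np

        def widrow_hoff(X, d, mu=0.05, epochs=50):
            """LMS adaptation w <- w + mu (d - w.x) x. With a constant
            gain mu the weights stay bounded under bounded noise but
            need not converge; time-decreasing gains restore convergence."""
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                for x, target in zip(X, d):
                    w += mu * (target - w @ x) * x
            return w

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 3))
        d = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
        print(widrow_hoff(X, d))  # close to [1, -2, 0.5]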

  18. The Non–Symmetric s–Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization–Reducing Variants of BiCG and QMR

    Directory of Open Access Journals (Sweden)

    Feuerriegel Stefan

    2015-12-01

    The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.

  19. Phase matching in quantum searching and the improved Grover algorithm

    International Nuclear Information System (INIS)

    Long Guilu; Li Yansong; Xiao Li; Tu Changcun; Sun Yang

    2004-01-01

    We briefly introduce some of our recent work related to the phase matching condition in quantum searching algorithms and the improved Grover algorithm. When one replaces the two phase inversions in the Grover algorithm with arbitrary phase rotations, the modified algorithm usually fails to find the marked state unless a phase matching condition is satisfied between the two phases. Since the Grover algorithm does not have a 100% success rate, an improved Grover algorithm with a zero failure rate is given by replacing the phase inversions with rotations through angles that depend on the size of the database. Other aspects of the Grover algorithm, such as the SO(3) picture of quantum searching and the dominant gate imperfections in the Grover algorithm, are also mentioned. (author)

  20. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    Science.gov (United States)

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2017-01-01

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both the utopian and the nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II on two-, three-, and five-objective problems. The computational results permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
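    The fitness computation itself is a one-liner. A sketch of the Tchebychev-distance ASF with hypothetical objective values and reference points:

        import numpy as np

        def asf(f, ref, w):
            """Achievement scalarizing function: max_i w_i (f_i - ref_i)."""
            return float(np.max(w * (np.asarray(f) - np.asarray(ref))))

        f = np.array([0.4, 0.7])                 # objective vector (minimize)
        utopian, nadir = np.zeros(2), np.ones(2)
        w = np.array([2.0, 1.0])                 # one weight vector of the set

        # Global WASF-GA scores each individual against both reference points
        print(asf(f, utopian, w), asf(f, nadir, w))

    Fronts are then formed by taking, for every weight vector in the set, the individuals with the lowest ASF values with respect to both reference points.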

  1. Control algorithm for the inverter fed induction motor drive with DC current feedback loop based on principles of the vector control

    Energy Technology Data Exchange (ETDEWEB)

    Vuckovic, V.; Vukosavic, S. (Electrical Engineering Inst. Nikola Tesla, Viktora Igoa 3, Belgrade, 11000 (Yugoslavia))

    1992-01-01

    This paper presents a control algorithm for VSI-fed induction motor drives based on the converter DC link current feedback. It is shown that the speed and flux can be controlled quite satisfactorily over a wide speed and load range for simpler drives. The base commands of both the inverter voltage and frequency are proportional to the reference speed, but each of them is further modified by signals derived from the DC current sensor. The algorithm is based on equations well known from vector control theory, and aims to obtain constant rotor flux and proportionality between the electrical torque, the slip frequency and the active component of the stator current. In this way, the problems of slip compensation, Ri compensation and correction of the U/f characteristic are solved at the same time. Analytical considerations and computer simulations of the proposed control structure are in close agreement with the experimental results measured on a prototype drive.

  2. Experimental Evaluation of Integral Transformations for Engineering Drawings Vectorization

    Directory of Open Access Journals (Sweden)

    Vaský Jozef

    2014-12-01

    The concept of digital manufacturing presupposes the application of digital technologies throughout the whole product life cycle. Direct digital manufacturing includes information technology processes where products are manufactured directly from a 3D CAD model. In digital manufacturing, the engineering drawing is replaced by the CAD product model. In contemporary practice, many paper-based engineering drawings are still archived. They can be digitalized by a scanner, stored in a raster graphics format, and then vectorized for interactive editing in a specific technical drawing software system or for archiving in a standard vector graphics file format. The vector format is also suitable for generating 3D models. The article deals with the use of selected integral transformations (Fourier, Hough) in the vectorization phase of digitalized raster engineering drawings.

  3. Extended SVM algorithms for multilevel trans-Z-source inverter

    Directory of Open Access Journals (Sweden)

    Aida Baghbany Oskouei

    2016-03-01

    This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works at high switching frequency and generates the mean value of the desired load voltage in every switching interval. In this topology the output voltage is not limited to the dc voltage source, unlike the traditional cascaded multilevel inverter, and can be increased with trans-Z-network shoot-through state control. Besides, it is more reliable against short circuits, and due to the several dc sources in each phase of this topology, it is possible to use it in hybrid renewable energy systems. The proposed SVM algorithms include a combined modulation algorithm (SVPWM) and a shoot-through implementation in the dwell times of the voltage vectors. These algorithms are compared from the viewpoints of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected performance.

  4. Introduction to Vector Field Visualization

    Science.gov (United States)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications, such as the study of flows around an aircraft, blood flow in heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques, such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization, including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC that support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces, civil engineering and geomechanics of roads and bridges, and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from case studies will be used as part of the motivation.
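
    Particle integration, the first algorithm mentioned in the tutorial, reduces to numerically integrating positions through the vector field. A minimal sketch with classical fourth-order Runge-Kutta on a synthetic circular flow:

```python
import numpy as np

def velocity(p):
    """Example steady 2D vector field: a simple rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def trace_particle(p0, dt=0.01, steps=500):
    """Advect one particle through the field with classical RK4."""
    path = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        path.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

path = trace_particle([1.0, 0.0])   # points lie close to the unit circle
```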

  5. QSPR studies for predicting polarity parameter of organic compounds in methanol using support vector machine and enhanced replacement method.

    Science.gov (United States)

    Golmohammadi, H; Dashtbozorgi, Z

    2016-12-01

    In the present work, the enhanced replacement method (ERM) and support vector machine (SVM) were used for quantitative structure-property relationship (QSPR) studies of the polarity parameter (p) of various organic compounds in methanol in reversed-phase liquid chromatography, based on molecular descriptors calculated from the optimized structures. Diverse kinds of molecular descriptors were calculated to encode the molecular structures of the compounds, such as geometric, thermodynamic, electrostatic and quantum mechanical descriptors. The variable selection method ERM was employed to select an optimum subset of descriptors. The five descriptors selected using ERM were used as inputs of the SVM to predict the polarity parameter of organic compounds in methanol. The coefficients of determination, r², between experimental and predicted polarity parameters for the prediction set by ERM and SVM were 0.952 and 0.982, respectively. These acceptable results indicate that the ERM approach is a very effective method for variable selection and that the predictive ability of the SVM model is superior to that obtained by ERM. The obtained results demonstrate that SVM can be used as an alternative powerful modeling tool for QSPR studies.
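
    A minimal sketch of the SVM regression step on descriptor data, using scikit-learn; the synthetic descriptors, hyperparameters and train/test split are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are compounds, columns are five selected
# molecular descriptors; y stands in for the measured polarity parameter p.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = X @ np.array([0.8, -0.3, 0.5, 0.1, -0.6]) + 0.05 * rng.normal(size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel support vector regression; hyperparameters are illustrative.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
print("r2 on the prediction set:", r2_score(y_te, model.predict(X_te)))
```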

  6. Vectorization of three-dimensional neutron diffusion code CITATION

    International Nuclear Information System (INIS)

    Harada, Hiroo; Ishiguro, Misako

    1985-01-01

    Three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code is expected to run at high speed on recent vector supercomputers when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. In particular, calculation algorithms suited to vectorization of the inner-outer iterative calculations, which consume most of the computing time, are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner iterations given as input data are also investigated, since the computing time depends on these values. (author)
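
    The odd-even (red-black) reordering that makes SOR vectorizable can be sketched on a model 2D problem: after checkerboard colouring, every point of one colour depends only on points of the other colour, so each half-sweep becomes pure array arithmetic. This illustrates the idea only, not the CITATION implementation.

```python
import numpy as np

def sor_red_black(phi, rhs, h, omega=1.8, sweeps=100):
    """Red-black ordered SOR for a 2D Poisson-type model problem."""
    red = [(1, 1), (2, 2)]     # (row, col) offsets of red interior points
    black = [(1, 2), (2, 1)]   # offsets of black interior points
    for _ in range(sweeps):
        for colour in (red, black):
            for r, c in colour:
                # Gauss-Seidel value from the four (other-colour) neighbours,
                # relaxed by omega; the slice update has no recurrence.
                gs = 0.25 * (phi[r - 1:-2:2, c:-1:2] + phi[r + 1::2, c:-1:2]
                             + phi[r:-1:2, c - 1:-2:2] + phi[r:-1:2, c + 1::2]
                             - h * h * rhs[r:-1:2, c:-1:2])
                phi[r:-1:2, c:-1:2] += omega * (gs - phi[r:-1:2, c:-1:2])
    return phi

n = 65
phi = np.zeros((n, n))           # boundary values stay fixed at zero
rhs = np.full((n, n), -1.0)
phi = sor_red_black(phi, rhs, h=1.0 / (n - 1))
```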

  7. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

    Directory of Open Access Journals (Sweden)

    Wang Lily

    2008-07-01

    Full Text Available Abstract Background Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology with several molecular signatures on their way toward clinical deployment. Use of the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in order to develop the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain. Results In the present paper we identify methodological biases of prior work comparing random forests and support vector machines and conduct a new rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underlines the importance of sound research design in benchmarking and comparison of bioinformatics algorithms. Conclusion We found that both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines both in the settings when no gene selection is performed and when several popular gene selection methods are used.

  8. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10³, 10⁴, 10⁴, and 10⁵ better.

  9. Support Vector Regression-Based Adaptive Divided Difference Filter for Nonlinear State Estimation Problems

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2014-01-01

    Full Text Available We present a support vector regression-based adaptive divided difference filter (SVRADDF algorithm for improving the low state estimation accuracy of nonlinear systems, which are typically affected by large initial estimation errors and imprecise prior knowledge of process and measurement noises. The derivative-free SVRADDF algorithm is significantly simpler to compute than other methods and is implemented using only functional evaluations. The SVRADDF algorithm involves the use of the theoretical and actual covariance of the innovation sequence. Support vector regression (SVR is employed to generate the adaptive factor to tune the noise covariance at each sampling instant when the measurement update step executes, which improves the algorithm’s robustness. The performance of the proposed algorithm is evaluated by estimating states for (i an underwater nonmaneuvering target bearing-only tracking system and (ii maneuvering target bearing-only tracking in an air-traffic control system. The simulation results show that the proposed SVRADDF algorithm exhibits better performance when compared with a traditional DDF algorithm.

  10. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    Science.gov (United States)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
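
    The core of the spectral reordering can be sketched in a few lines with SciPy: form the Laplacian of the matrix's adjacency graph, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. The random test matrix is a stand-in, and the graph is assumed connected.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

# Random sparse symmetric 0/1 matrix as a stand-in for the input matrix.
A = sprandom(200, 200, density=0.02, random_state=0).tocsr()
A = A + A.T                # symmetric nonzero pattern
A.data[:] = 1.0            # unweighted adjacency
A.setdiag(0)

L = laplacian(A, normed=False)

# Two smallest eigenpairs of the Laplacian; the eigenvector of the second
# smallest eigenvalue is the Fiedler vector, which solves the continuous
# relaxation of the minimum 2-sum problem mentioned in the abstract.
vals, vecs = eigsh(L, k=2, which="SM")
fiedler = vecs[:, np.argsort(vals)[1]]

# Sorting the components yields the spectral (envelope-reducing) ordering.
perm = np.argsort(fiedler)
A_reordered = A[perm][:, perm]
```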

  11. A quick survey of text categorization algorithms

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to text classification, and surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio's algorithm, k-Nearest Neighbor (kNN), and Support Vector Machines (SVM).

  12. Identification of Human Semiochemicals Attractive to the Major Vectors of Onchocerciasis

    Science.gov (United States)

    Young, Ryan M.; Burkett-Cadena, Nathan D.; McGaha, Tommy W.; Rodriguez-Perez, Mario A.; Toé, Laurent D.; Adeleke, Monsuru A.; Sanfo, Moussa; Soungalo, Traore; Katholi, Charles R.; Noblet, Raymond; Fadamiro, Henry; Torres-Estrada, Jose L.; Salinas-Carmona, Mario C.; Baker, Bill; Unnasch, Thomas R.; Cupp, Eddie W.

    2015-01-01

    Background Entomological indicators are considered key metrics to document the interruption of transmission of Onchocerca volvulus, the etiological agent of human onchocerciasis. Human landing collection is the standard employed for collection of the vectors of this parasite. Recent studies reported the development of traps that have the potential to replace humans for surveillance of O. volvulus in the vector population. However, the key chemical components of human odor that are attractive to vector black flies have not been identified. Methodology/Principal Findings Human sweat compounds were analyzed by GC-MS, and compounds common to three individuals were identified. These common compounds, together with others previously identified as attractive to other hematophagous arthropods, were evaluated for their ability to stimulate and attract the major onchocerciasis vectors in Africa (Simulium damnosum sensu lato) and Latin America (Simulium ochraceum s. l.), using electroantennography and a Y-tube binary choice assay. Medium-chain-length carboxylic acids and aldehydes were neurostimulatory for S. damnosum s.l., while S. ochraceum s.l. was stimulated by short-chain aliphatic alcohols and aldehydes. Both species were attracted to ammonium bicarbonate and acetophenone. The compounds were shown to be attractive to the relevant vector species in field studies when incorporated into a formulation that permitted a continuous release of the compound over time and used in concert with previously developed trap platforms. Conclusions/Significance The identification of compounds attractive to the major vectors of O. volvulus will permit the development of optimized traps. Such traps may replace the use of human vector collectors for monitoring the effectiveness of onchocerciasis elimination programs and could find use as a contributing component in an integrated vector control/drug program aimed at eliminating river blindness in Africa. PMID:25569240

  13. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

    OpenAIRE

    Zuev, Yu. A.

    2003-01-01

    The class of linear decision rules is studied. A new algorithm for weight correction, called the "accelerated perceptron", is proposed. In contrast to Rosenblatt's classical perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing the decision reliability by means of weighted voting. I...
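
    The record contrasts the accelerated rule with Rosenblatt's perceptron, whose mistake-driven update leaves the weight vector unchanged on correctly classified samples. A reference sketch of that classical baseline (the accelerated update itself is not reproduced here):

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Classical Rosenblatt perceptron.

    X : (n_samples, n_features) inputs; y : labels in {-1, +1}.
    Unlike the "accelerated perceptron" of the record, the weight vector is
    modified only when a sample is misclassified.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                  # mistake-driven update
                w += lr * yi * xi
    return w
```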

  14. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    International Nuclear Information System (INIS)

    Zhao Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang Yao

    2012-01-01

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by the volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002), and the V95% for the tumor bed with a 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using an SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine-treated patients, without any relevant degradation of dose homogeneity and coverage in general.

  15. Vector-Sensor MUSIC for Polarized Seismic Sources Localization

    Directory of Open Access Journals (Sweden)

    Jérôme I. Mars

    2005-01-01

    Full Text Available This paper addresses the problem of high-resolution polarized source detection and introduces a new eigenstructure-based algorithm that yields direction of arrival (DOA) and polarization estimates using a vector-sensor (or multicomponent-sensor) array. This method is based on separation of the observation space into signal and noise subspaces using fourth-order tensor decomposition. In geophysics, in particular for reservoir acquisition and monitoring, a set of Nx multicomponent sensors is laid on the ground with constant distance Δx between them. Such a data acquisition scheme has intrinsically three modes: time, distance, and components. The proposed method needs multilinear algebra in order to preserve the data structure and avoid reorganization. The data is thus stored in tridimensional arrays rather than matrices. Higher-order eigenvalue decomposition (HOEVD) for fourth-order tensors is considered to achieve subspace estimation and to compute the eigenelements. We propose a tensorial version of the MUSIC algorithm for a vector-sensor array, allowing joint estimation of DOA and signal polarization. The performance of the proposed algorithm is evaluated.
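
    For orientation, the standard matrix-based MUSIC that the tensor method generalizes can be sketched as follows for a scalar uniform linear array; the HOEVD-based subspace estimation of the paper replaces the plain eigendecomposition used here, and the toy covariance is an assumption.

```python
import numpy as np

def music_spectrum(R, n_sources, scan_angles, d=0.5):
    """Classical MUSIC pseudospectrum for a uniform linear array.

    R           : (n, n) sample covariance matrix of the sensor outputs
    n_sources   : assumed number of sources
    scan_angles : candidate DOAs [rad]
    d           : sensor spacing in wavelengths
    """
    n = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)        # ascending eigenvalues
    En = eigvecs[:, : n - n_sources]            # noise subspace
    spectrum = []
    for th in scan_angles:
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(th))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)                   # peaks at the source DOAs

# Toy example: two unit-power sources at -20 and 30 degrees, 8 sensors.
n, angles = 8, np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n), np.sin(angles)))
R = A @ A.conj().T + 0.01 * np.eye(n)
scan = np.deg2rad(np.linspace(-90, 90, 361))
spec = music_spectrum(R, 2, scan)
print("strongest DOA [deg]:", np.degrees(scan[spec.argmax()]))
```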

  16. Digital video steganalysis using motion vector recovery-based features.

    Science.gov (United States)

    Deng, Yu; Wu, Yunjie; Zhou, Linna

    2012-07-10

    As a novel digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as information carriers to hide secret messages. Existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of the video frames cannot attack MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose calibration-distance-histogram-based statistical features for steganalysis. A support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperforms others, with significant improvements in detection accuracy even at low embedding rates.

  17. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    Full Text Available For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and the economy. China has become the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance and can provide a scientific basis for China to formulate reasonable energy production plans and energy-saving and emissions-reduction policies to boost sustainable development. For forecasting energy consumption in China accurately, and considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. First, considering population, GDP (Gross Domestic Product), industrial structure (the proportion of the secondary industry's added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports, and other influencing factors, the main driving factors of energy consumption are screened as model inputs by ranking their grey relational degrees, realizing feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm) model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an …
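
    The grey relational screening step can be sketched as below: each candidate driving factor is scored by its mean grey relational coefficient against the energy consumption series, and the top-ranked factors become model inputs. The data and the distinguishing coefficient rho are illustrative.

```python
import numpy as np

def grey_relational_degree(reference, factors, rho=0.5):
    """Grey relational degree of each candidate factor vs. the target series.

    reference : (T,) target series; factors : (n, T) candidate series.
    """
    def norm(s):  # rescale to [0, 1] to remove dimension effects
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    x0 = norm(reference)
    xi = np.array([norm(f) for f in factors])
    delta = np.abs(xi - x0)
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)        # one degree per candidate factor

rng = np.random.default_rng(0)
energy = np.linspace(1.0, 3.0, 27) + 0.05 * rng.normal(size=27)
candidates = np.vstack([0.9 * energy + 0.1 * rng.normal(size=27),  # GDP-like
                        rng.normal(size=27),                        # pure noise
                        np.sqrt(energy)])
print(grey_relational_degree(energy, candidates))  # higher = more relevant
```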

  18. CAS algorithm-based optimum design of PID controller in AVR system

    International Nuclear Information System (INIS)

    Zhu Hui; Li Lixiang; Zhao Ying; Guo Yu; Yang Yixian

    2009-01-01

    This paper presents a novel design method for determining the optimal PID controller parameters of an automatic voltage regulator (AVR) system using the chaotic ant swarm (CAS) algorithm. In the tuning process, the CAS algorithm is iterated to give the optimal parameters of the PID controller guided by a fitness function, where the position vector of each ant in the CAS algorithm corresponds to the parameter vector of the PID controller. The proposed CAS-PID controllers ensure better control system performance with respect to the reference input in comparison with GA-PID controllers. Numerical simulations are provided to verify the effectiveness and feasibility of the PID controller based on the CAS algorithm.

  19. Parton-shower matching systematics in vector-boson-fusion WW production

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, Michael [Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Plaetzer, Simon [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-05-15

    We perform a detailed analysis of next-to-leading order plus parton-shower matching in vector-boson-fusion WW production including leptonic decays. The study is performed in the Herwig 7 framework interfaced to VBFNLO 3, using the angular-ordered and dipole-based parton-shower algorithms combined with the subtractive and multiplicative-matching algorithms. (orig.)

  20. Generalized space vector control for current source inverters and rectifiers

    Directory of Open Access Journals (Sweden)

    Roseline J. Anitha

    2016-06-01

    Full Text Available The current source inverter (CSI) is one of the widely used converter topologies in medium voltage drive applications due to its simplicity, motor-friendly waveforms and reliable short circuit protection. Current source inverters are usually fed by controlled current source rectifiers (CSR) with a large inductor to provide a constant supply current. A generalized control applicable to both CSI and CSR and their extension, namely current source multilevel inverters (CSMLI), is dealt with in this paper. As space vector pulse width modulation (SVPWM) features the advantages of flexible control, faster dynamic response, better DC utilization and easy digital implementation, it is considered for this work. This paper generalizes SVPWM so that it can be applied to CSI, CSR and CSMLI. The intense computation involved in framing a generalized space vector control is discussed in detail. The algorithm includes determination of the band, region, subregions and vectors. The algorithm is validated by simulation using MATLAB/SIMULINK for the CSR, for 5-, 7- and 13-level CSMLI, and for a CSR-fed CSI.

  1. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
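
    The event-based idea is that photon history data live in arrays and each event is applied to the whole batch at once. A toy sketch of one free-flight step in NumPy (the geometry and attenuation coefficient are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
mu = 0.15                                   # linear attenuation coeff. [1/cm]
pos = np.zeros((n, 3))                      # one array slot per photon history
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
alive = np.ones(n, dtype=bool)

# One transport event for the whole batch: sample exponential free paths and
# advance every live photon at once; the loop over histories becomes array math.
path = -np.log(1.0 - rng.random(n)) / mu
pos[alive] += path[alive, None] * dirs[alive]

# Terminate photons leaving a 20 cm sphere around the origin, again in bulk.
alive &= np.linalg.norm(pos, axis=1) < 20.0
print("photons still in flight:", alive.sum())
```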

  2. Development of a NEW Vector Magnetograph at Marshall Space Flight Center

    Science.gov (United States)

    West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    This paper describes the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper describes the motivation for developing this magnetograph, compares this instrument with traditional magnetograph designs, and presents a comparison of the data acquired by this instrument and the original MSFC vector magnetograph.

  3. On flexible CAD of adaptive control and identification algorithms

    DEFF Research Database (Denmark)

    Christensen, Anders; Ravn, Ole

    1988-01-01

    SLLAB is a MATLAB-family software package for solving control and identification problems. This paper concerns the planning of a general-purpose subroutine structure for solving identification and adaptive control problems. A general-purpose identification algorithm is suggested, which allows a total redesign of the system within each sample. The necessary design parameters are evaluated and a decision vector is defined, from which the identification algorithm can be generated by the program. Using the decision vector, a decision-node tree structure is built up, where the nodes define …

  4. Vectorization of the KENO V.a criticality safety code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Dodds, H.L.; Petrie, L.M.

    1991-01-01

    The development of the vector processor, which is used in the current generation of supercomputers and is beginning to be used in workstations, provides the potential for dramatic speed-ups for codes that are able to process data as vectors. Unfortunately, the stochastic nature of Monte Carlo codes prevents the old scalar versions of these codes from taking advantage of the vector processors. New Monte Carlo algorithms that process all the histories undergoing the same event as a batch are required. Recently, new vectorized Monte Carlo codes have been developed that show significant speed-ups when compared to their scalar versions or to equivalent codes. This paper discusses the vectorization of an already existing and widely used criticality safety code, KENO V.a. All the changes made to KENO V.a are transparent to the user, making it possible to upgrade from the standard scalar version of KENO V.a to the vectorized version without learning a new code.

  5. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality.
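
    The codebook training and coding steps of vector quantization can be sketched with plain k-means (the paper's modified, energy-based K-means and the quadtree variable block sizes are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical high-frequency subband, split into non-overlapping 4x4 blocks
# that become the training vectors for the codebook.
rng = np.random.default_rng(0)
subband = rng.normal(size=(128, 128))
blocks = (subband.reshape(32, 4, 32, 4)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, 16))

# Plain k-means codebook with 64 codewords.
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(blocks)

# VQ coding: each block is replaced by the index of its nearest codeword;
# only the indices (here 6 bits per block) plus the codebook are stored.
indices = codebook.predict(blocks)
reconstructed = codebook.cluster_centers_[indices]
```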

  6. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    Directory of Open Access Journals (Sweden)

    Chih-Feng Chao

    2015-01-01

    Full Text Available Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images typically present a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suited to measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
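
    For reference, the full-search baseline that the firefly method accelerates evaluates the sum of absolute differences (SAD) over every displacement in a search window; a sketch (the block size and search range are arbitrary):

```python
import numpy as np

def best_match_sad(ref, cur, block_xy, size=16, search=8):
    """Exhaustive block matching with the sum of absolute differences.

    The firefly variant of the record evaluates only a few candidate
    displacements instead of the whole (2*search+1)^2 window used here.
    """
    bx, by = block_xy
    block = cur[by:by + size, bx:bx + size].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue   # candidate block falls outside the frame
            sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best
```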

  7. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  8. Efficient combination of a 3D Quasi-Newton inversion algorithm and a vector dual-primal finite element tearing and interconnecting method

    International Nuclear Information System (INIS)

    Voznyuk, I; Litman, A; Tortel, H

    2015-01-01

    A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted to handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is validated against measurements extracted from the 3D Fresnel database. (paper)

  9. A hybrid frame concealment algorithm for H.264/AVC.

    Science.gov (United States)

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed in order to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
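
    The basic motion vector extrapolation idea underlying HMVE can be sketched as follows: blocks of the last decoded frame are projected along their (forward) motion into the missing frame, which inherits their vectors. This shows the plain extrapolation only; the paper's hybrid reliability handling is not reproduced, and the block bookkeeping here is deliberately simplified.

```python
import numpy as np

def extrapolate_mvs(prev_mvs, block=16):
    """Project forward motion vectors of the last decoded frame into the
    missing frame.

    prev_mvs : (rows, cols, 2) per-block displacements [pixels] from the last
               decoded frame into the missing frame (e.g. negated coded
               vectors under a constant-motion assumption).
    """
    out = np.zeros_like(prev_mvs)
    filled = np.zeros(prev_mvs.shape[:2], dtype=bool)
    rows, cols = prev_mvs.shape[:2]
    for by in range(rows):
        for bx in range(cols):
            dx, dy = prev_mvs[by, bx]
            cy = int(by * block + dy) // block   # block this one lands on
            cx = int(bx * block + dx) // block
            if 0 <= cy < rows and 0 <= cx < cols and not filled[cy, cx]:
                out[cy, cx] = prev_mvs[by, bx]
                filled[cy, cx] = True
    return out   # blocks never hit keep a zero vector
```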

  10. Infinite ensemble of support vector machines for prediction of ...

    African Journals Online (AJOL)

    user

    the support vector machines (SVMs), a machine learning algorithm used ... work designs so that specific, quantitative workplace assessments can be made ... with SVMs can be obtained by embedding the base learners (hypothesis) into a.

  11. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  12. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm

    Directory of Open Access Journals (Sweden)

    Yuichi Sakumura

    2017-02-01

    Full Text Available Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used the support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH3CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer.

  13. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm.

    Science.gov (United States)

    Sakumura, Yuichi; Koyama, Yutaro; Tokutake, Hiroaki; Hida, Toyoaki; Sato, Kazuo; Itoh, Toshio; Akamatsu, Takafumi; Shin, Woosuck

    2017-02-04

    Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH₃CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer.

  14. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of classification with the SVM classifier. The approach is applied to the diagnostics of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.

  15. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  16. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 1

    International Nuclear Information System (INIS)

    Ernenwein, R.; Rohmer, M.M.; Benard, M.

    1990-01-01

    We present a program system for ab initio molecular orbital calculations on vector and parallel computers. The present article is devoted to the computation of one- and two-electron integrals over contracted Gaussian basis sets involving s-, p-, d- and f-type functions. The McMurchie and Davidson (MMD) algorithm has been implemented and parallelized by distributing over a limited number of logical tasks the calculation of the 55 relevant classes of integrals. All sections of the MMD algorithm have been efficiently vectorized, leading to a scalar/vector ratio of 5.8. Different algorithms are proposed and compared for an optimal vectorization of the contraction of the 'intermediate integrals' generated by the MMD formalism. Advantage is taken of the dynamic storage allocation for tuning the length of the vector loops (i.e. the size of the vectorization buffer) as a function of (i) the total memory available for the job, (ii) the number of logical tasks defined by the user (≤13), and (iii) the storage requested by each specific class of integrals. Test calculations carried out on a CRAY-2 computer show that the average number of finite integrals computed over a (s, p, d, f) CGTO basis set is about 1180000 per second and per processor. The combination of vectorization and parallelism on this 4-processor machine reduces the CPU time by a factor larger than 20 with respect to the scalar and sequential performance. (orig.)

  17. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems and structures, and analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar-valued and vector-valued), thus covering scalar fields, vector fields and vector integration.

  18. A Nearest Neighbor Classifier Employing Critical Boundary Vectors for Efficient On-Chip Template Reduction.

    Science.gov (United States)

    Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi

    2016-05-01

    Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm, which is completed within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global characterization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only the critical boundary vectors and the k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.

  19. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  20. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
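
    One of the sparse coding algorithms compared in the brief, orthogonal matching pursuit, is available in scikit-learn; a minimal sketch on a synthetic sparse regression problem (all sizes are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuit

# Regression problem with many redundant features and 8 informative ones.
X, y, true_w = make_regression(n_samples=200, n_features=100, n_informative=8,
                               coef=True, noise=0.5, random_state=0)

# OMP greedily picks the atom most correlated with the current residual.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(X, y)

print("selected features:", np.flatnonzero(omp.coef_))
print("true informative:", np.flatnonzero(true_w))
```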

  1. Phytoplankton global mapping from space with a support vector machine algorithm

    Science.gov (United States)

    de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.

    2014-11-01

    In recent years great progress has been made in the global mapping of phytoplankton from space. Two main trends have emerged: the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size classes (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration and other environmental parameters, such as sea surface temperature and wind speed. Remote sensing pixels are labeled thanks to coincident in-situ pigment data from the GeP&CO, NOMAD and MAREDAT datasets, covering various oceanographic environments. The recognition is made with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially adapted to small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied to daily remote sensing datasets (SeaWiFS, MODISA) and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with knowledge of phytoplankton distribution; (3) classification on MODISA data seems to perform better than on SeaWiFS data.
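
    A schematic of the probability-thresholded SVM classification described above, with made-up pixel features standing in for the reflectance anomalies and environmental variables:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical labeled pixels: four features (e.g. reflectance anomaly,
# chl-a, SST, wind speed) and three dominant-PFT classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)

clf = SVC(kernel="rbf", probability=True, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())

# Class probability estimates allow a minimum-probability threshold:
# pixels below it are left unclassified (-1) to improve robustness.
clf.fit(X, y)
proba = clf.predict_proba(X)
confident = proba.max(axis=1) >= 0.5
labels = np.where(confident, clf.classes_[proba.argmax(axis=1)], -1)
```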

  2. On the existence of polynomial Lyapunov functions for rationally stable vector fields

    DEFF Research Database (Denmark)

    Leth, Tobias; Wisniewski, Rafal; Sloth, Christoffer

    2018-01-01

    This paper proves the existence of polynomial Lyapunov functions for rationally stable vector fields. For practical purposes the existence of polynomial Lyapunov functions plays a significant role, since polynomial Lyapunov functions can be found algorithmically. The paper extends an existing result on exponentially stable vector fields to the case of rational stability. For asymptotically stable vector fields, a known counterexample is investigated to exhibit the mechanisms responsible for the inability to extend the result further.

  3. The Key Role of the Vector Optimization Algorithm and Robust Design Approach for the Design of Polygeneration Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Gimelli

    2018-04-01

    Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenges has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility to achieve a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper is focused on a methodology based on vector optimization algorithms and developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. This paper provides, after a detailed description of the proposed methodology, some specific applications to the study of combined heat and power (CHP) and organic Rankine cycle (ORC) plants, thus highlighting the potential of the proposed techniques and the main results achieved.

  4. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

  5. Support vector machines in analysis of top quark production

    International Nuclear Information System (INIS)

    Vaiciulis, A.

    2003-01-01

    The Support Vector Machine (SVM) learning algorithm is a new alternative to multivariate methods such as neural networks. Potential applications of SVMs in high energy physics include the common classification problem of signal/background discrimination as well as particle identification. A comparison of a conventional method and an SVM algorithm is presented here for the case of identifying top quark events in Run II physics at the CDF experiment

  6. Reconfigurable support vector machine classifier with approximate computing

    NARCIS (Netherlands)

    van Leussen, M.J.; Huisken, J.; Wang, L.; Jiao, H.; De Gyvez, J.P.

    2017-01-01

    Support Vector Machine (SVM) is one of the most popular machine learning algorithms. An energy-efficient SVM classifier is proposed in this paper, where approximate computing is utilized to reduce energy consumption and silicon area. A hardware architecture with reconfigurable kernels and

  7. Brian hears: online auditory processing using vectorization over channels.

    Science.gov (United States)

    Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain

    2011-01-01

    The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
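
    The channel-vectorization strategy can be sketched independently of the Brian Hears API: keep the (unavoidable) loop over time samples, but update all frequency channels in one array operation per sample. The one-pole filter bank below is a toy stand-in for a cochlear filterbank; it is not Brian Hears code.

```python
import numpy as np

def onepole_bank(x, fs, cutoffs):
    """One-pole low-pass filter bank, vectorized over channels.

    x       : (n_samples,) input sound
    cutoffs : per-channel cutoff frequencies [Hz]
    """
    a = np.exp(-2 * np.pi * np.asarray(cutoffs) / fs)   # per-channel pole
    b = 1.0 - a
    y = np.zeros((len(x), len(a)))
    state = np.zeros(len(a))
    for t, sample in enumerate(x):      # sequential in time...
        state = b * sample + a * state  # ...but all channels updated at once
        y[t] = state
    return y

fs = 44100
t = np.arange(4410) / fs                # 0.1 s test tone
tone = np.sin(2 * np.pi * 1000 * t)
out = onepole_bank(tone, fs, cutoffs=np.geomspace(20, 20000, 3000))
```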

  8. Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance

    Science.gov (United States)

    Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi

    2017-11-01

    K-nearest neighbors (KNN) algorithm is a common algorithm used for classification, and also a sub-routine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between the testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
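
    The classical computation that QKNN mirrors is easy to state: Hamming distances from the query to all training vectors, then a majority vote among the k nearest. A NumPy sketch of that classical analog:

```python
import numpy as np

def hamming_knn(train_X, train_y, query, k=5):
    """Classical k-NN with Hamming distance (the quantum circuit of the
    record computes these distances coherently; this is the classical analog).

    train_X : (n, d) binary feature vectors; query : (d,) binary vector.
    """
    dists = np.count_nonzero(train_X != query, axis=1)  # Hamming distances
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # majority vote

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 16))
y = (X.sum(axis=1) > 8).astype(int)
print(hamming_knn(X, y, rng.integers(0, 2, size=16)))
```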

  9. A multistage motion vector processing method for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid selecting an identical unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme achieves better visual quality and is also robust, even in video sequences with complex scenes and fast motion.

  10. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in adaptive and adaptable interactive systems, data mining, and similar applications.

  11. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means of designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms designed specifically for parallel computation are needed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
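For concreteness, the tri-diagonal example above can be contrasted with classical cyclic reduction, the textbook parallel alternative to sequential elimination (a generic sketch, not the ACER algorithm introduced in the paper; it assumes a system size of n = 2^k - 1 and that a[0] and c[-1] are zero):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused/zero),
    b = diagonal, c = super-diagonal (c[-1] unused/zero), d = rhs.
    Assumes n = 2**k - 1."""
    a = a.astype(float).copy(); b = b.astype(float).copy()
    c = c.astype(float).copy(); d = d.astype(float).copy()
    n = len(b)
    k = int(np.log2(n + 1))
    # Forward phase: each sweep halves the number of coupled unknowns;
    # all updates within a sweep are independent (parallel/vector friendly).
    for lvl in range(1, k):
        h = 2 ** lvl
        for i in range(h - 1, n, h):
            im, ip = i - h // 2, i + h // 2
            al = a[i] / b[im]
            ga = c[i] / b[ip] if ip < n else 0.0
            b[i] -= al * c[im] + (ga * a[ip] if ip < n else 0.0)
            d[i] -= al * d[im] + (ga * d[ip] if ip < n else 0.0)
            a[i] = -al * a[im]
            c[i] = -ga * c[ip] if ip < n else 0.0
    x = np.zeros(n)
    mid = n // 2
    x[mid] = d[mid] / b[mid]
    # Backward phase: recover the eliminated unknowns level by level.
    for lvl in range(k - 1, 0, -1):
        h = 2 ** lvl
        for i in range(h // 2 - 1, n, h):
            left = x[i - h // 2] if i - h // 2 >= 0 else 0.0
            right = x[i + h // 2] if i + h // 2 < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Quick check against a direct solve (n = 7):
# rng = np.random.default_rng(0)
# b = 4 + rng.random(7); a = rng.random(7); c = rng.random(7)
# a[0] = c[-1] = 0.0; d = rng.random(7)
# A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
# assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d))
```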

  12. Geometrical Modification of Learning Vector Quantization Method for Solving Classification Problems

    Directory of Open Access Journals (Sweden)

    Korhan GÜNEL

    2016-09-01

Full Text Available In this paper, a geometrical scheme is presented that shows how to overcome a problem arising from the use of the generalized delta learning rule within a competitive learning model. A theoretical methodology is introduced for describing the quantization of data via rotating prototype vectors on hyper-spheres. The proposed learning algorithm is tested and verified on different multidimensional datasets, including a binary-class dataset and two multiclass datasets from the UCI repository, as well as a multiclass dataset constructed by us. The proposed method is compared with some baseline learning vector quantization variants from the literature on all domains. A large number of experiments verify the performance of the proposed algorithm, with acceptable accuracy and macro F1 scores.

  13. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    Science.gov (United States)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is passed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, higher than that of other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
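As an illustration of the feature-building step, a minimal sketch using the PyWavelets package (the wavelet choice, depth, and node ordering are assumptions, not the paper's exact settings) might be:

```python
import numpy as np
import pywt  # PyWavelets

def wp_entropy_vector(signal, wavelet="db4", maxlevel=3):
    """Build a feature vector of Shannon entropies, one per wavelet
    packet node, over all decomposition levels."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=maxlevel)
    feats = []
    for level in range(1, maxlevel + 1):
        for node in wp.get_level(level, order="natural"):
            coeffs = np.asarray(node.data, dtype=float)
            p = coeffs**2 / np.sum(coeffs**2)     # normalized energy distribution
            p = p[p > 0]                          # avoid log(0)
            feats.append(-np.sum(p * np.log(p)))  # Shannon entropy of the node
    return np.array(feats)

# Background subtraction and selection of the most discriminative
# components would then operate on vectors returned by this function.
```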

  14. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1992-01-01

We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.

  15. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)

  16. A new fast algorithm for the evaluation of regions of interest and statistical uncertainty in computed tomography

    International Nuclear Information System (INIS)

    Huesman, R.H.

    1984-01-01

A new algorithm for region of interest evaluation in computed tomography is described. Region of interest evaluation is a technique used to improve quantitation of the tomographic imaging process by summing (or averaging) the reconstructed quantity throughout a volume of particular significance. An important application of this procedure arises in the analysis of dynamic emission computed tomographic data, in which the uptake and clearance of radiotracers are used to determine the blood flow and/or physiological function of tissue within the significant volume. The new algorithm replaces the conventional technique of repeated image reconstructions with one in which projected regions are convolved and then used to form multiple vector inner products with the raw tomographic data sets. Quantitation of regions of interest is made without the need for reconstruction of tomographic images. The computational advantage of the new algorithm over conventional methods is between factors of 20 and 500 for typical applications encountered in medical science studies. The greatest benefit is the ease with which the statistical uncertainty of the result is computed. The entire covariance matrix for the evaluation of regions of interest can be calculated with relatively few operations. (author)
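The core computational trick, evaluating an ROI directly from projection data via an inner product instead of reconstructing an image, can be sketched as follows (the shapes, kernel, and normalization are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np
from scipy.signal import fftconvolve

def roi_from_projections(sinogram, roi_projection, recon_kernel):
    """Evaluate an ROI sum directly from raw projection data.
    sinogram:       measured data, shape (n_angles, n_bins)
    roi_projection: forward projection of the ROI indicator, same shape
    recon_kernel:   1-D reconstruction (ramp) filter"""
    # Convolve the projected ROI with the reconstruction filter once...
    filtered = np.array([fftconvolve(row, recon_kernel, mode="same")
                         for row in roi_projection])
    # ...then each data set needs only a single vector inner product,
    # with no image reconstruction at all.
    return float(np.vdot(filtered, sinogram))
```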

  17. Forecasting of Power Grid Investment in China Based on Support Vector Machine Optimized by Differential Evolution Algorithm and Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

Full Text Available In recent years, the construction of China's power grid has developed rapidly, and its scale is now the largest in the world. Accurate and effective prediction of power grid investment can not only help pool funds and rationally arrange investment in power grid construction, but also reduce capital costs and economic risks, playing a crucial role in power grid investment planning and the construction process. In order to forecast China's power grid investment accurately, this article first constructs a system of influencing factors for power grid investment forecasting, based on an analysis of those factors. Grey relational analysis is used to screen the main influencing factors as the prediction model inputs. Then, a novel power grid investment prediction model based on DE-GWO-SVM (a support vector machine optimized by differential evolution and the grey wolf optimization algorithm) is proposed. Next, two cases are analyzed empirically to show that the DE-GWO-SVM model has strong generalization capacity and achieves a good prediction effect for power grid investment forecasting in China. Finally, the DE-GWO-SVM model is adopted to forecast power grid investment in China from 2018 to 2022.
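A minimal sketch of the tuning idea, using only differential evolution from SciPy to select the hyperparameters of a support vector regressor (the grey wolf half of DE-GWO, the bounds, and the scoring choice are not reproduced here, and the data names are placeholders):

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def fit_de_svr(X, y):
    """Pick (C, gamma) for an RBF SVR by minimizing cross-validated MAE."""
    def objective(params):
        C, gamma = params
        scores = cross_val_score(SVR(C=C, gamma=gamma), X, y,
                                 cv=3, scoring="neg_mean_absolute_error")
        return -scores.mean()            # differential_evolution minimizes

    result = differential_evolution(objective,
                                    bounds=[(1e-2, 1e3), (1e-4, 1e1)],
                                    maxiter=30, seed=0)
    C, gamma = result.x
    return SVR(C=C, gamma=gamma).fit(X, y)

# model = fit_de_svr(X_train, y_train)   # X_train, y_train are placeholders
# forecast = model.predict(X_future)
```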

  18. Covariant Lyapunov vectors

    International Nuclear Information System (INIS)

    Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto

    2013-01-01

    Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)

  19. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm, the gas emission is taken as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.

  20. MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS

    Directory of Open Access Journals (Sweden)

    A. A. Zolotin

    2015-07-01

Full Text Available Posteriori inference is one of the three kinds of probabilistic-logic inference in probabilistic graphical model theory and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the task of describing local posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the use of the tensor product of matrices, the Kronecker degree and the Hadamard product. Matrix equations for calculating posteriori probability vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Similar equations of the same type have already been discussed within the confines of the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in knowledge patterns on the ideals of conjuncts. During the synthesis and development of matrix-vector equations on quanta proposition probability vectors, a number of earlier results concerning normalizing factors in posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence - deterministic, stochastic and inaccurate - combined with scalar and interval estimation of the probability truth of propositional formulas in the knowledge patterns. Linear programming problems are formed. Their solution gives the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This sort of description of posteriori inference makes it possible to extend the set of knowledge pattern types usable in local and global posteriori inference, as well as to simplify complex software implementation by use of existing third-party libraries, effectively supporting submission and processing of matrices and vectors when

  1. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing each feature. Finally, the degree of distinctiveness of each feature is evaluated according to this result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, which reduces the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
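A rough sketch of the remove-one-feature idea, using k-means++ initialization from scikit-learn and the silhouette score as a stand-in for the paper's clustering-performance measure (the threshold is arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def effective_features(X, n_clusters=2, threshold=0.05):
    """Keep the features whose removal degrades the clustering most."""
    def quality(data):
        labels = KMeans(n_clusters=n_clusters, init="k-means++",
                        n_init=10, random_state=0).fit_predict(data)
        return silhouette_score(data, labels)

    base = quality(X)
    # Variation of clustering performance after removing each feature.
    drops = np.array([base - quality(np.delete(X, j, axis=1))
                      for j in range(X.shape[1])])
    return np.where(drops > threshold)[0]   # indices of distinctive features
```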

  2. THE REPLACEMENT-RENEWAL OF INDUSTRIAL EQUIPMENTS. THE MAPI FORMULAS

    Directory of Open Access Journals (Sweden)

    Meo Colombo Carlotta

    2010-07-01

Full Text Available Since production was found to be an economical means of satisfying human wants, it has required a complex industrial organization together with a large investment in equipment, plants and productive systems. These productive systems are employed to alter the physical environment and create consumer goods. As a result, they are consumed or become obsolete, inadequate, or otherwise candidates for replacement. When replacement is being considered, two assets must be evaluated: the present asset, the defender, and its potential replacement, the challenger. Since the success of an industrial organization depends upon profit, replacement should generally occur if an economic advantage will result. Whatever the reason leading to the consideration of replacement, the analysis and decisions must be based upon estimates of what will occur in the future. In this paper we present the MAPI algorithm as a procedure for evaluating investments and for analyzing replacement opportunities.

  3. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and migrates old samples between the SV set and the non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not clear beforehand which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.

  4. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

The paper analyzes algorithms for selecting keypoints in an image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine. The combination of these methods allows successful detection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
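OpenCV ships a HOG descriptor with a pre-trained linear SVM people detector, which gives a compact illustration of the HOG-plus-SVM combination such systems build on (the file names and detection parameters are placeholders):

```python
import cv2

# HOG descriptor with OpenCV's built-in pedestrian-detecting linear SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")                      # placeholder input image
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:                           # one box per detected person
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", frame)
```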

  5. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    These proceedings contain the articles presented at the named conference. These concern hardware and software for vector and parallel processors, numerical methods and algorithms for the computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

  6. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the visual quality of the image.

  7. Parallel field line and stream line tracing algorithms for space physics applications

    Science.gov (United States)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

Field line and stream line tracing is required in various space physics applications, such as the coupling of global magnetosphere and inner magnetosphere models, the coupling of solar energetic particle and heliosphere models, or the modeling of comets, where multispecies chemical equations are solved along the stream lines of a steady-state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field are distributed over a large number of processors. We designed algorithms for the various applications that scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, the position and other information are stored in a buffer. Periodically the processors exchange the buffers and continue integration of the field lines until they reach a boundary. At that point the results are sent back to the originating processor. Efficiency is achieved by careful phasing of computation and communication. In the third algorithm the results of a steady-state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all the grid and vector field data, and the stream lines are integrated in parallel. If a stream line enters a block that has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be

  8. Vector-Parallel processing of the successive overrelaxation method

    International Nuclear Information System (INIS)

    Yokokawa, Mitsuo

    1988-02-01

The successive overrelaxation method, called the SOR method, is one of the iterative methods for solving linear systems of equations, and it has been computed serially with a natural ordering in many nuclear codes. After the appearance of vector processors, this natural SOR method was replaced by parallel algorithms such as the hyperplane or red-black methods, in which the calculation order is modified. These methods are suitable for vector processors, and higher calculation speed can be obtained compared with the natural SOR method on vector processors. In this report, a new scheme named the 4-colors SOR method is proposed. We find that the 4-colors SOR method can be executed on vector-parallel processors and that it gives the highest calculation speed among all SOR methods, according to results of vector-parallel execution on the Alliant FX/8 multiprocessor system. It is also shown that the theoretical optimal acceleration parameters are equal among five different ordering SOR methods, and the differences between the convergence rates of these SOR methods are examined. (author)
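A small sketch of the red-black ordering for a 2-D Poisson model problem illustrates why such reorderings vectorize: all points of one colour are updated independently (plain NumPy; the model problem and parameters are illustrative, and the 4-colors scheme itself is not reproduced):

```python
import numpy as np

def sor_red_black(u, f, h, omega=1.8, sweeps=200):
    """Red-black SOR for the 2-D Poisson equation -lap(u) = f on a
    uniform grid with spacing h; boundary values of u are held fixed."""
    n, m = u.shape
    for _ in range(sweeps):
        for parity in (0, 1):              # update red points, then black points
            for i in range(1, n - 1):
                j = np.arange(1 + (i + parity) % 2, m - 1, 2)
                # Every point of one colour depends only on the other colour,
                # so this whole row-slice updates as one vector operation.
                u[i, j] = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                    + h * h * f[i, j])
    return u
```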

  9. Efficient Multiplicative Updates for Support Vector Machines

    DEFF Research Database (Denmark)

    Potluru, Vamsi K.; Plis, Sergie N; Mørup, Morten

    2009-01-01

The dual formulation of the support vector machine (SVM) objective function is an instance of a nonnegative quadratic programming problem. We reformulate the SVM objective function as a matrix factorization problem, which establishes a connection with the regularized nonnegative matrix factorization (NMF) problem. This allows us to derive a novel multiplicative algorithm for solving hard and soft margin SVM. The algorithm follows as a natural extension of the updates for NMF and semi-NMF. No additional parameter setting, such as choosing a learning rate, is required. Exploiting the connection between SVM and the NMF formulation, we show how NMF algorithms can be applied to the SVM problem. The multiplicative updates that we derive for the SVM problem also represent novel updates for semi-NMF. Further, this unified view yields algorithmic insights in both directions: we demonstrate that the Kernel Adatron

  10. A vector modulated three-phase four-quadrant rectifier - Application to a dc motor drive

    Energy Technology Data Exchange (ETDEWEB)

    Jussila, Matti; Salo, Mika; Kaehkoenen, Lauri; Tuusa, Heikki

    2004-07-01

This paper introduces a theory for the space vector modulation of a three-phase four-quadrant PWM rectifier (FQR). The presented vector modulation method is simple to realize with a microcontroller, and it replaces the conventional modulation methods based on analog technology. The FQR may be used to supply a dc load directly, e.g. a dc machine. The vector-modulated FQR is tested in simulations supplying a 4.5 kW dc motor. The simulations show the benefits of the vector-modulated FQR over thyristor converters: the supply currents are sinusoidal and the displacement power factor of the supply can be controlled. Furthermore, the load current is smooth. (author)

  11. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for the classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships as graphs with subjects denoted by nodes and relationships denoted by edges, and kernel-based algorithms, which generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measures of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms. Graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  12. The combination of a histogram-based clustering algorithm and support vector machine for the diagnosis of osteoporosis

    International Nuclear Information System (INIS)

    Heo, Min Suk; Kavitha, Muthu Subash; Asano, Akira; Taguchi, Akira

    2013-01-01

    To prevent low bone mineral density (BMD), that is, osteoporosis, in postmenopausal women, it is essential to diagnose osteoporosis more precisely. This study presented an automatic approach utilizing a histogram-based automatic clustering (HAC) algorithm with a support vector machine (SVM) to analyse dental panoramic radiographs (DPRs) and thus improve diagnostic accuracy by identifying postmenopausal women with low BMD or osteoporosis. We integrated our newly-proposed histogram-based automatic clustering (HAC) algorithm with our previously-designed computer-aided diagnosis system. The extracted moment-based features (mean, variance, skewness, and kurtosis) of the mandibular cortical width for the radial basis function (RBF) SVM classifier were employed. We also compared the diagnostic efficacy of the SVM model with the back propagation (BP) neural network model. In this study, DPRs and BMD measurements of 100 postmenopausal women patients (aged >50 years), with no previous record of osteoporosis, were randomly selected for inclusion. The accuracy, sensitivity, and specificity of the BMD measurements using our HAC-SVM model to identify women with low BMD were 93.0% (88.0%-98.0%), 95.8% (91.9%-99.7%) and 86.6% (79.9%-93.3%), respectively, at the lumbar spine; and 89.0% (82.9%-95.1%), 96.0% (92.2%-99.8%) and 84.0% (76.8%-91.2%), respectively, at the femoral neck. Our experimental results predict that the proposed HAC-SVM model combination applied on DPRs could be useful to assist dentists in early diagnosis and help to reduce the morbidity and mortality associated with low BMD and osteoporosis.

  13. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

  14. Null vectors in superconformal quantum field theory

    International Nuclear Information System (INIS)

    Huang Chaoshang

    1993-01-01

    The superspace formulation of the N=1 superconformal field theory and superconformal Ward identities are used to give a precise definition of fusion. Using the fusion procedure, superconformally covariant differential equations are derived and consequently a complete and straightforward algorithm for finding null vectors in Verma modules of the Neveu-Schwarz algebra is given. (orig.)

  15. Document Organization Using Kohonen's Algorithm.

    Science.gov (United States)

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  16. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

This paper summarizes two improvements to a real production code obtained by using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in neutron transport problems, the authors briefly describe the work done in order to obtain a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say, a minimum size for a task. With the second example the authors propose a method that is very X-MP oriented, in order to obtain the best speedup factor on such a computer. In conclusion they show that Monte Carlo algorithms are very well suited to future vector and parallel computers

  17. Desingularization strategies for three-dimensional vector fields

    CERN Document Server

    Torres, Felipe Cano

    1987-01-01

For a vector field D = A1 ∂/∂X1 + A2 ∂/∂X2 + A3 ∂/∂X3, where the Ai are series in X, the algebraic multiplicity measures the singularity at the origin. In this research monograph several strategies are given to make the algebraic multiplicity of a three-dimensional vector field decrease by means of permissible blowing-ups of the ambient space, i.e. transformations of the type Xi = X'i X1, 2 ≤ i ≤ s. A logarithmic point of view is taken, marking the exceptional divisor of each blowing-up and considering only the vector fields which are tangent to this divisor, instead of the whole tangent sheaf. The first part of the book is devoted to the logarithmic background and to the permissible blowing-ups. The main part corresponds to the control of the algorithms for the desingularization strategies by means of numerical invariants inspired by Hironaka's characteristic polygon. Only basic knowledge of local algebra and algebraic geometry is assumed of the reader. The pathologies we find in the reduction of vector fields are analogous to pathologies in the pro...

  18. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  19. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner

  20. Evaluation of the impacts of climate change on disease vectors through ecological niche modelling.

    Science.gov (United States)

    Carvalho, B M; Rangel, E F; Vale, M M

    2017-08-01

Vector-borne diseases are exceptionally sensitive to climate change. Predicting vector occurrence in specific regions is a challenge that disease control programs must meet in order to plan and execute control interventions and climate change adaptation measures. Recently, an increasing number of scientific articles have applied ecological niche modelling (ENM) to study medically important insects and ticks. With a myriad of available methods, it is challenging to interpret their results. Here we review the future projections of disease vectors produced by ENM and assess their trends and limitations. Tropical regions are currently occupied by many vector species, but future projections indicate poleward expansions of climates suitable for their occurrence; therefore, entomological surveillance must be carried out continuously in areas projected to become suitable. The most commonly applied methods were the maximum entropy algorithm, generalized linear models, the genetic algorithm for rule set prediction, and discriminant analysis. Lack of consideration of the full known current distribution of the target species in models with future projections has led to questionable predictions. We conclude that there is no ideal 'gold standard' method for modelling vector distributions; researchers are encouraged to test different methods on the same data. Such practice is becoming common in the field of ENM, but still lags behind in studies of disease vectors.

  1. A Biometric Face Recognition System Using an Algorithm Based on the Principal Component Analysis Technique

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-06-01

Full Text Available This article deals with a recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces, considered as features of the face. The PCA technique is based on matching the facial test image against the training prototype vectors: the matching score between the facial test image and each training prototype is calculated between their coefficient vectors, and the highest matching score indicates the best recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.
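A compact sketch of the eigenface pipeline (the paper's implementation is in MATLAB; this Python version, the component count, and nearest-neighbour matching on coefficient vectors are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(faces, n_components=50):
    """faces: (n_samples, n_pixels) flattened grayscale training images.
    Returns the eigenface basis and the training coefficient vectors."""
    pca = PCA(n_components=n_components).fit(faces)
    return pca, pca.transform(faces)

def recognize(pca, train_coeffs, labels, test_face):
    """Match a test image via its coefficient vector: the nearest
    training prototype gives the recognized identity."""
    c = pca.transform(test_face.reshape(1, -1))
    dists = np.linalg.norm(train_coeffs - c, axis=1)
    return labels[int(np.argmin(dists))]
```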

  2. A nowcasting technique based on application of the particle filter blending algorithm

    Science.gov (United States)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed using the radar mosaic at an altitude of 2.5 km obtained from the images of 12 S-band radars in Guangdong Province, China. A bilateral filter was first applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motion; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method proves superior to the traditional forecasting methods and can be used to enhance nowcasting in operational weather forecasts.
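The echo-tracking step (Harris corners plus pyramidal Lucas-Kanade) maps directly onto standard OpenCV calls; a sketch with placeholder file names and parameters, covering only the tracking stage and not the particle filter blending itself:

```python
import cv2
import numpy as np

prev_img = cv2.imread("mosaic_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
next_img = cv2.imread("mosaic_t1.png", cv2.IMREAD_GRAYSCALE)

# Harris corners on the earlier radar mosaic serve as features to track.
pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=500, qualityLevel=0.01,
                              minDistance=7, useHarrisDetector=True)

# Pyramidal Lucas-Kanade yields one motion vector per tracked corner;
# these are the vectors that would then be blended by the particle filter.
new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)
ok = status.flatten() == 1
motion_vectors = (new_pts - pts)[ok]
```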

  3. Pattern recognition with vector hits

    International Nuclear Information System (INIS)

    Frühwirth, R

    2012-01-01

    Trackers at the future high-luminosity LHC, designed to have triggering capability, will feature layers of stacked modules with a small stack separation. This will allow the reconstruction of track stubs or vector hits with position and direction information, but lacking precise curvature information. This opens up new possibilities for track finding, online and offline. Two track finding methods, the Kalman filter and the convergent Hough transform are studied in this context. Results from a simplified fast simulation are presented. It is shown that the performance of the methods depends to a large extent on the size of the stack separation. We conclude that the detector design and the choice of the track finding algorithm(s) are strongly coupled and should proceed conjointly.

  4. Vector control of three-phase AC/DC front-end converter

    Indian Academy of Sciences (India)

    directional power flow capability. A design procedure for selection of control parameters is discussed. A simple algorithm for unit-vector generation is presented. Starting current transients are studied with particular emphasis on high-power ...

  5. A Replacement Algorithm for Capital Items that Depreciate with Time

    International Nuclear Information System (INIS)

    Wweru, R.M

    1999-01-01

The replacement algorithm is centred on the prediction of the replacement cost and the determination of the most economical replacement policy. For items whose efficiency depreciates over their life spans, e.g. machine tools and vehicles, the prediction of costs involves those factors which contribute to increased operating cost, forced idle time, increased scrap, increased repair cost, etc. The alternative to the increased cost of operating aging equipment is the cost of replacing the old equipment with a new one. There is some age at which replacement of the old equipment is more economical than continuing to operate it at the increased cost (Johnson R D, Siskin B R, 1989). This algorithm uses certain cost relationships that are vital in the minimization of total costs, and it is focused on capital equipment that depreciates with time, as opposed to items with a probabilistic life span.
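A bare-bones sketch of the underlying economics, choosing the replacement age that minimizes the average annual cost (discounting and the full set of adjustments used in formal replacement models are deliberately omitted; the inputs are illustrative):

```python
def best_replacement_age(price, operating_cost, salvage):
    """price: purchase cost of the equipment.
    operating_cost[t]: operating cost during year t+1 (rising with age).
    salvage[t]: resale value if the equipment is replaced after year t+1."""
    best_age, best_eac = None, float("inf")
    for t in range(1, len(operating_cost) + 1):
        total = price - salvage[t - 1] + sum(operating_cost[:t])
        eac = total / t                    # average annual cost up to age t
        if eac < best_eac:
            best_age, best_eac = t, eac
    return best_age, best_eac

# Example: operating costs rise and salvage value falls with age;
# the optimum balances the two (here, replace after year 3).
age, cost = best_replacement_age(
    price=10000,
    operating_cost=[1000, 1500, 2200, 3100, 4200],
    salvage=[7000, 5000, 3500, 2500, 1800])
```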

  6. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    Directory of Open Access Journals (Sweden)

    Ortega Antonio

    2005-01-01

Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to yield near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared with the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared with the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.

  7. Evaluating automatically parallelized versions of the support vector machine

    NARCIS (Netherlands)

    Codreanu, Valeriu; Droge, Bob; Williams, David; Yasar, Burhan; Yang, Fo; Liu, Baoquan; Dong, Feng; Surinta, Olarik; Schomaker, Lambertus; Roerdink, Jos; Wiering, Marco

    2014-01-01

    The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the

  8. Evaluating automatically parallelized versions of the support vector machine

    NARCIS (Netherlands)

    Codreanu, V.; Dröge, B.; Williams, D.; Yasar, B.; Yang, P.; Liu, B.; Dong, F.; Surinta, O.; Schomaker, L.R.B.; Roerdink, J.B.T.M.; Wiering, M.A.

    2016-01-01

    The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the

  9. Time Series Analysis and Forecasting for Wind Speeds Using Support Vector Regression Coupled with Artificial Intelligent Algorithms

    Directory of Open Access Journals (Sweden)

    Ping Jiang

    2015-01-01

Full Text Available Wind speed/power has received increasing attention around the world due to its renewable nature and environmental friendliness. With the globally installed wind power capacity rapidly increasing, the wind industry is growing into a large-scale business. Reliable short-term wind speed forecasts play a practical and crucial role in wind energy conversion systems, such as the dynamic control of wind turbines and power system scheduling. In this paper, an intelligent hybrid model for short-term wind speed prediction is examined; the model is based on cross correlation (CC) analysis and a support vector regression (SVR) model coupled with the brainstorm optimization (BSO) and cuckoo search (CS) algorithms, which are successfully utilized for parameter determination. The proposed hybrid models were used to forecast short-term wind speeds collected from four wind turbines located on a wind farm in China. The forecasting results demonstrate that the intelligent hybrid models outperform single models for short-term wind speed forecasting, which mainly results from the superiority of BSO and CS for parameter optimization.

  10. T-wave end detection using neural networks and Support Vector Machines.

    Science.gov (United States)

    Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román

    2018-05-01

In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using neural networks and Support Vector Machines. Both Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set, such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy, were evaluated. Individual parameters were tuned for each method during training, and the results are given for the evaluation set. A comparison between the MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

Until relatively recently almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode; (ii) how close to optimal such adapted algorithms can be and, where relevant, what the convergence criteria are; (iii) how we can design new algorithms specifically for parallel systems; (iv) for multi-processor systems, how we can handle the software aspects of the interprocessor communications. Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  12. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K.; Siegel, Andrew R.

    2017-04-16

The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insight into the factors that may be limiting in a real implementation.

  13. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  14. Vectors expressing chimeric Japanese encephalitis dengue 2 viruses.

    Science.gov (United States)

    Wei, Y; Wang, S; Wang, X

    2014-01-01

Vectors based on self-replicating RNAs (replicons) of flaviviruses are becoming a powerful tool for the expression of heterologous genes in mammalian cells and the development of novel antiviral and anticancer vaccines. We constructed two vectors expressing chimeric viruses consisting of the attenuated SA14-14-2 strain of Japanese encephalitis virus (JEV) in which the PrM/M-E genes were replaced fully or partially with those of dengue 2 virus (DENV-2). These vectors, named pJED2 and pJED2-1770, were transfected into BHK-21 cells and produced the chimeric viruses JED2V and JED2-1770V, respectively. The chimeric viruses could be passaged in C6/36 but not BHK-21 cells. The chimeric viruses produced CPE in C6/36 cells 4-5 days after infection, and RT-PCR, sequencing, immunofluorescence assay (IFA) and Western blot analysis confirmed the chimeric nature of the produced viruses. The immunogenicity of the chimeric viruses in mice was proved by detecting DENV-2 E protein-specific serum IgG antibodies with a neutralization titer of 10. The successful preparation of infectious clones of chimeric JEV-DENV-2 viruses shows that JEV-based expression vectors are fully functional.

  15. Amino acid "little Big Bang": representing amino acid substitution matrices as dot products of Euclidian vectors.

    Science.gov (United States)

    Zimmermann, Karel; Gibrat, Jean-François

    2010-01-04

Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with the substitution matrices. Such a numerical description of the amino acids is useful when intrinsic properties of amino acids are needed, for instance for building sequence profiles or finding consensus sequences with machine learning algorithms such as Support Vector Machines and Neural Networks.
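Since a substitution matrix is symmetric, the decomposition described above reduces to an eigendecomposition; a minimal sketch in NumPy (the input `S` is assumed to be a 20x20 array of substitution scores, e.g. BLOSUM62, supplied by the caller):

```python
import numpy as np

def amino_acid_vectors(S):
    """Factor a symmetric substitution matrix S so that S[i, j] ~ v_i . v_j.
    Returns one Euclidian vector per amino acid (one per row)."""
    w, V = np.linalg.eigh(np.asarray(S, dtype=float))
    # Components with negative eigenvalues cannot be represented by any
    # real dot product; keeping only the positive ones gives the closest
    # Euclidian embedding of the matrix.
    keep = w > 0
    return V[:, keep] * np.sqrt(w[keep])

# vecs = amino_acid_vectors(S)    # vecs[i] @ vecs[j] approximates S[i, j]
```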

  16. Amino acid "little Big Bang": Representing amino acid substitution matrices as dot products of Euclidian vectors

    Directory of Open Access Journals (Sweden)

    Zimmermann Karel

    2010-01-01

Full Text Available Abstract Background Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. Results We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. Conclusions This vector encoding introduces a Euclidian metric in the amino acid space, consistent with the substitution matrices. Such a numerical description of the amino acids is useful when intrinsic properties of amino acids are needed, for instance for building sequence profiles or finding consensus sequences with machine learning algorithms such as Support Vector Machines and Neural Networks.

  17. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

    2011-01-01

The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. As a method rooted in small-sample statistical learning theory, the SVM avoids the issues that appear in artificial neural network methods, such as the difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to realize effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.

  18. A vectorized Poisson solver over a spherical shell and its application to the quasi-geostrophic omega-equation

    Science.gov (United States)

    Mullenmeister, Paul

    1988-01-01

    The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.

  19. Dynamic optimization of maintenance and improvement planning for water main system: Periodic replacement approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Woo; Choi, Go Bong; Lee, Jong Min [Seoul National University, Seoul (Korea, Republic of); Suh, Jung Chul [Samchully Corporation, Seoul (Korea, Republic of)

    2016-01-15

    This paper proposes a Markov decision process (MDP) based approach to derive an optimal schedule of maintenance, rehabilitation and replacement of the water main system. The scheduling problem utilizes auxiliary information of a pipe such as the current state, cost, and deterioration model. The objective function and detailed algorithm of dynamic programming are modified to solve the periodic replacement problem. The optimal policy evaluated by the proposed algorithm is compared to several existing policies via Monte Carlo simulations. The proposed decision framework provides a systematic way to obtain an optimal policy.
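
    A minimal sketch of the decision framework this record describes, under assumed numbers: a four-state pipe deterioration chain with hypothetical transition probabilities and costs, solved by standard discounted value iteration rather than the authors' modified periodic-replacement dynamic program.

```python
import numpy as np

# Hypothetical 4-state deterioration model (0 = new ... 3 = failed).
P_keep = np.array([[0.8, 0.2, 0.0, 0.0],
                   [0.0, 0.7, 0.3, 0.0],
                   [0.0, 0.0, 0.6, 0.4],
                   [0.0, 0.0, 0.0, 1.0]])
P_replace = np.tile([1.0, 0.0, 0.0, 0.0], (4, 1))  # replacement renews the pipe
cost_keep = np.array([0.0, 1.0, 4.0, 20.0])        # rising breakage/repair cost
cost_replace = np.full(4, 8.0)                     # fixed replacement cost

discount = 0.95
V = np.zeros(4)
for _ in range(500):        # value iteration to a fixed point
    Q = np.stack([cost_keep + discount * P_keep @ V,
                  cost_replace + discount * P_replace @ V])
    V_new = Q.min(axis=0)
    if np.abs(V_new - V).max() < 1e-9:
        break
    V = V_new
policy = Q.argmin(axis=0)   # 0 = keep, 1 = replace, per deterioration state
print("optimal action per state:", policy)
```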

  20. Iris recognition using image moments and k-means algorithm.

    Science.gov (United States)

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into a rectangular form. Moments are then extracted from the grayscale image, yielding a feature vector containing scale-, rotation-, and translation-invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is nearest to its feature vector in terms of computed Euclidean distance. The described model exhibits an accuracy of 98.5%.
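
    The classification step of this record reduces to k-means clustering plus nearest-centroid assignment; a minimal sketch follows, with random vectors standing in for the invariant-moment features produced by the segmentation and moment-extraction stages.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    # Plain Lloyd iterations: assign to nearest centroid, then recompute means.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 7))   # e.g. 7 invariant moments per iris image
centroids, _ = kmeans(features, k=10)

# An arbitrary image belongs to the cluster with the nearest centroid.
query = rng.normal(size=7)
cluster = np.linalg.norm(centroids - query, axis=1).argmin()
print("query assigned to cluster", cluster)
```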

  1. 3D Model Retrieval Based on Vector Quantisation Index Histograms

    International Nuclear Information System (INIS)

    Lu, Z M; Luo, H; Pan, J S

    2006-01-01

    This paper proposes a novel technique to retrieve 3D mesh models using vector quantisation index histograms. Firstly, points are sampled uniformly on the mesh surface. Secondly, for each point five features representing global and local properties are extracted, so that feature vectors of the points are obtained. Thirdly, we select several models from each class and employ their feature vectors as a training set. After training using the LBG algorithm, a public codebook is constructed. Next, codeword index histograms of the query model and of the models in the database are computed. The last step is to compute the distance between the histogram of the query and those of the models in the database. Experimental results show the effectiveness of our method
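
    A hedged sketch of the retrieval pipeline: plain k-means stands in for LBG codebook training, each model is reduced to a normalized histogram of codeword indices, and models are ranked by L1 histogram distance. All data shapes here are illustrative stand-ins.

```python
import numpy as np

def train_codebook(features, size, iters=50, seed=0):
    # k-means as a stand-in for the LBG splitting procedure.
    rng = np.random.default_rng(seed)
    code = features[rng.choice(len(features), size, replace=False)]
    for _ in range(iters):
        idx = np.linalg.norm(features[:, None] - code[None], axis=2).argmin(1)
        for j in range(size):
            if (idx == j).any():
                code[j] = features[idx == j].mean(0)
    return code

def index_histogram(features, code):
    # Quantize every per-point feature vector, then count codeword usage.
    idx = np.linalg.norm(features[:, None] - code[None], axis=2).argmin(1)
    hist = np.bincount(idx, minlength=len(code)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
training = rng.normal(size=(2000, 5))   # five features per sampled surface point
codebook = train_codebook(training, size=64)

db_models = [rng.normal(size=(500, 5)) for _ in range(10)]
db_hists = [index_histogram(m, codebook) for m in db_models]
query_hist = index_histogram(rng.normal(size=(500, 5)), codebook)
ranking = np.argsort([np.abs(query_hist - h).sum() for h in db_hists])
print("models ranked by L1 histogram distance:", ranking)
```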

  2. Support vector machine for diagnosis cancer disease: A comparative study

    Directory of Open Access Journals (Sweden)

    Nasser H. Sweilam

    2010-12-01

    Full Text Available The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem. Traditional optimization methods cannot be directly applied due to memory restrictions. Up to now, several approaches for circumventing the above shortcomings exist and work well. Here, another learning algorithm, quantum-behaved particle swarm optimization for training the SVM, is introduced. A further approach, named the least square support vector machine (LSSVM), with an active set strategy is also introduced. The results obtained by these methods are tested on a breast cancer dataset and compared with the exact solution of the model problem.

  3. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.

  4. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    Science.gov (United States)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, track the progression over time, and test the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm that classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than on individual images.

  5. Curved manifolds with conserved Runge-Lenz vectors

    International Nuclear Information System (INIS)

    Ngome, J.-P.

    2009-01-01

    van Holten's algorithm is used to construct Runge-Lenz-type conserved quantities, induced by Killing tensors, on curved manifolds. For the generalized Taub-Newman-Unti-Tamburino metric, the most general external potential such that the combined system admits a conserved Runge-Lenz-type vector is found. In the multicenter case, the subclass of two-center metric exhibits a conserved Runge-Lenz-type scalar.

  6. Infinite ensemble of support vector machines for prediction of ...

    African Journals Online (AJOL)

    Many researchers have demonstrated the use of artificial neural networks (ANNs) to predict musculoskeletal disorders risk associated with occupational exposures. In order to improve the accuracy of LBDs risk classification, this paper proposes to use the support vector machines (SVMs), a machine learning algorithm used ...

  7. Accurate Prediction of Coronary Artery Disease Using Bioinformatics Algorithms

    Directory of Open Access Journals (Sweden)

    Hajar Shafiee

    2016-06-01

    Full Text Available Background and Objectives: Cardiovascular disease is one of the main causes of death in developed and Third World countries. According to the World Health Organization, deaths due to heart disease are predicted to rise to 23 million by 2030. According to the latest statistics reported by Iran’s Ministry of Health, 3.39% of all deaths are attributed to cardiovascular diseases and 19.5% are related to myocardial infarction. The aim of this study was to predict coronary artery disease using data mining algorithms. Methods: In this study, various bioinformatics algorithms, such as decision trees, neural networks, support vector machines, clustering, etc., were used to predict coronary heart disease. The data used in this study were taken from several valid databases (including 14 data). Results: In this research, data mining techniques were shown to be effective for diagnosing different diseases, including coronary artery disease. Also, for the first time, a prediction system based on a support vector machine with the best possible accuracy was introduced. Conclusion: The results showed that, among the features, the thallium scan variable is the most important feature in the diagnosis of heart disease. Machine prediction models, such as the support vector machine learning algorithm, can differentiate between sick and healthy individuals with 100% accuracy.

  8. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining structured representative model of the documents in a corpus by means of extracting and selecting the features, characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at syntactical and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space as well as similarity calculation are presented.
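
    The LSA mechanics described here fit in a few lines: build a term-document matrix, take a truncated SVD, and compare documents by cosine similarity in the reduced space. The toy corpus and the rank k = 2 below are arbitrary choices for illustration.

```python
import numpy as np

docs = ["neural networks learn representations",
        "deep neural networks learn features",
        "rivers carry water to the sea"]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix (rows = terms, columns = documents).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # reduced number of latent concepts
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("sim(doc0, doc1) =", round(cos(doc_vecs[0], doc_vecs[1]), 3))  # high
print("sim(doc0, doc2) =", round(cos(doc_vecs[0], doc_vecs[2]), 3))  # low
```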

  9. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

    This paper summarizes two improvements of a real production code obtained by using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work we have done in order to get a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first one is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say a minimum size for a task. With the second example we propose a method that is very X-MP oriented, in order to get the best speedup factor on such a computer. In conclusion we show that Monte Carlo algorithms are very well suited to future vector and parallel computers. (orig.)

  10. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
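
    The sketch below is not the paper's optimal O(n + k) algorithm; it is only a brute-force reference that makes the problem statement concrete, enumerating all sub-vector sums via prefix sums and keeping the k largest in a bounded min-heap.

```python
import heapq
from itertools import accumulate

def k_maximal_sums_bruteforce(a, k):
    prefix = [0] + list(accumulate(a))   # prefix[j] - prefix[i] = sum(a[i:j])
    heap = []                            # min-heap of the k best sums seen
    for j in range(1, len(prefix)):
        for i in range(j):
            s = prefix[j] - prefix[i]
            if len(heap) < k:
                heapq.heappush(heap, s)
            elif s > heap[0]:
                heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)

print(k_maximal_sums_bruteforce([2, -1, 3, -4, 5], k=3))   # [5, 5, 4]
```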

  11. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  12. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
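
    The modification itself is a few lines of linear algebra; a minimal sketch follows, assuming a symmetric Hessian supplied by the caller (the function name and guard value are illustrative, not from the paper).

```python
import numpy as np

def modified_newton_direction(hessian, gradient, eps=1e-8):
    # Eigendecompose the (symmetric) Hessian, replace negative eigenvalues by
    # their absolute values, rebuild an SPD matrix, and take the Newton step.
    w, Q = np.linalg.eigh(hessian)
    w = np.maximum(np.abs(w), eps)            # |lambda_i|, guarded away from 0
    H_mod = Q @ np.diag(w) @ Q.T
    return -np.linalg.solve(H_mod, gradient)  # guaranteed descent direction

H = np.array([[2.0, 0.0], [0.0, -1.0]])       # indefinite Hessian
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)
print(d, "descent:", g @ d < 0)               # g.d < 0 confirms descent
```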

  13. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    Science.gov (United States)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

    An efficient method to design the broadband gain-flattened Raman fiber amplifier with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equation. The proposed approach contains two stages: offline training stage and online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be directly and accurately obtained when inputting any combination of the pump wavelength and power to the well-trained model. During the online stage, we incorporate the LS-SVR model into the particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of the pump parameter optimization for Raman fiber amplifier design.

  14. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368

  15. Support vector machine: a tool for mapping mineral prospectivity

    NARCIS (Netherlands)

    Zuo, R.; Carranza, E.J.M

    2011-01-01

    In this contribution, we describe an application of the support vector machine (SVM), a supervised learning algorithm, to mineral prospectivity mapping. The free R package e1071 is used to construct an SVM with a sigmoid kernel function to map prospectivity for Au deposits in the western Meguma Terrain of Nova Scotia.

  16. Metrics for vector quantization-based parametric speech enhancement and separation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    Speech enhancement and separation algorithms sometimes employ a two-stage processing scheme, wherein the signal is first mapped to an intermediate low-dimensional parametric description after which the parameters are mapped to vectors in codebooks trained on, for example, individual noise

  17. An efficient hybrid evolutionary algorithm based on PSO and HBMO algorithms for multi-objective Distribution Feeder Reconfiguration

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Engineering Department, Shiraz University of Technology, Shiraz (Iran)

    2009-08-15

    This paper introduces a robust searching hybrid evolutionary algorithm to solve the multi-objective Distribution Feeder Reconfiguration (DFR) problem. The main objectives of the DFR are to minimize the real power loss, the deviation of the nodes' voltages and the number of switching operations, and to balance the loads on the feeders. Because the objectives are different and not commensurable, it is difficult to solve the problem by conventional approaches that optimize a single objective. This paper presents a new approach based on norm3 for the DFR problem. In the proposed method, the objective functions are considered as a vector and the aim is to maximize the distance (norm2) between the objective function vector and the worst objective function vector while the constraints are met. Since the proposed DFR is a multi-objective and non-differentiable optimization problem, a new hybrid evolutionary algorithm (EA) based on the combination of the Honey Bee Mating Optimization (HBMO) and the Discrete Particle Swarm Optimization (DPSO), called DPSO-HBMO, is applied to solve it. The results of the proposed reconfiguration method are compared with the solutions obtained by other approaches, the original DPSO and HBMO, over different distribution test systems. (author)

  18. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

    Science.gov (United States)

    In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

  19. Supercomputer implementation of finite element algorithms for high speed compressible flows. Progress report, period ending 30 June 1986

    International Nuclear Information System (INIS)

    Thornton, E.A.; Ramakrishnan, R.

    1986-06-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models are compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes

  20. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
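
    A toy illustration (not the authors' SX-4 code) of the jagged-diagonal idea for the sparse matrix-vector product: rows are permuted by decreasing nonzero count so that each jagged diagonal yields one long, unit-stride update, which is what makes the format vectorize well.

```python
import numpy as np

def to_jad(A):
    # Build jagged-diagonal (JAD) storage from a dense matrix.
    nnz_per_row = (A != 0).sum(axis=1)
    perm = np.argsort(-nnz_per_row)              # longest rows first
    cols, vals = [], []
    for d in range(nnz_per_row.max()):           # d-th nonzero of each row
        c, v = [], []
        for r in perm:
            nz = np.flatnonzero(A[r])
            if d < len(nz):
                c.append(nz[d]); v.append(A[r, nz[d]])
        cols.append(np.array(c)); vals.append(np.array(v))
    return perm, cols, vals

def jad_matvec(perm, cols, vals, x, n):
    y = np.zeros(n)
    for c, v in zip(cols, vals):     # one long, stride-1 update per diagonal
        y[:len(c)] += v * x[c]
    out = np.zeros(n)
    out[perm] = y                    # undo the row permutation
    return out

A = np.array([[4., 0., 1.], [0., 2., 0.], [3., 5., 6.]])
x = np.array([1., 2., 3.])
perm, cols, vals = to_jad(A)
print(jad_matvec(perm, cols, vals, x, 3), A @ x)   # the two should match
```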

  1. Parallel Kalman filter track fit based on vector classes

    Energy Technology Data Exchange (ETDEWEB)

    Kisel, Ivan [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Kretz, Matthias [Kirchhoff-Institut fuer Physik, Ruprecht-Karls Universitaet, Heidelberg (Germany); Kulakov, Igor [Goethe-Universitaet, Frankfurt am Main (Germany); National Taras Shevchenko University, Kyiv (Ukraine)

    2010-07-01

    Modern high energy physics experiments have to process terabytes of input data produced in particle collisions. The core of the data reconstruction in high energy physics is the Kalman filter. Therefore, developing a fast Kalman filter algorithm that uses the maximum available power of modern processors is important, in particular for the initial selection of events interesting for new physics. One of the processor features that can speed up the algorithm is the SIMD instruction set, which allows several data items to be packed in one register and operated on in one go, thus achieving more operations per clock cycle. Therefore a flexible and useful interface, which uses the SIMD instruction set on different CPU and GPU processor architectures, has been realized as a vector classes library. The Kalman filter based track fitting algorithm has been implemented using the vector classes. Fitting quality tests show good results, with residuals equal to 49 μm and 44 μm for the x and y track parameters and a relative momentum resolution of 0.7%. A fitting time of 0.053 μs per track has been achieved on an Intel Xeon X5550 with 8 cores at 2.6 GHz, additionally using Intel Threading Building Blocks.

  2. Support Vector Machines: Relevance Feedback and Information Retrieval.

    Science.gov (United States)

    Drucker, Harris; Shahrary, Behzad; Gibbon, David C.

    2002-01-01

    Compares support vector machines (SVMs) to Rocchio, Ide regular and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevancy feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…

  3. Mineral Replacement Reactions as a Precursor to Strain Localisation: an (HR-)EBSD approach

    Science.gov (United States)

    Gardner, J.; Wheeler, J.; Wallis, D.; Hansen, L. N.; Mariani, E.

    2017-12-01

    Much remains to be learned about the links between metamorphism and deformation. Our work investigates the behaviour of fluid-mediated mineral replacement reaction products when exposed to subsequent shear stresses. We focus on albite from a metagabbro that has experienced metamorphism and subsequent deformation at greenschist facies, resulting in a reduction in grain size and associated strain localisation. EBSD maps show that prior to grain size reduction, product grains are highly distorted, yet they formed, and subsequently deformed, at temperatures at which extensive dislocation creep is unlikely. The Weighted Burgers Vector can be used to quantitatively describe the types of Burgers vectors present in geometrically necessary dislocation (GND) populations derived from 2-D EBSD map data. Application of this technique to the distorted product grains reveals the prominence of, among others, dislocations with apparent [010] Burgers vectors. This supports (with some caveats) the idea that dislocation creep is not responsible for the observed lattice distortion, as there are no known slip systems in plagioclase with a [010] Burgers vector. Distortion in a replacement microstructure has also been attributed to the presence of nanoscale product grains, which share very similar, but not identical, orientations due to topotactic nucleation from adjacent sites on the same substrate. As a precipitate, the product grains should be expected to be largely free of elastic strain. However, high angular resolution EBSD results demonstrate that product grains contain both elastic strains (> 10-3) and residual stresses (several hundred MPa), as well as GND densities on the order of 1014-1015 m-2. Thus we suggest the observed distortion (elastic strain plus rotations) in the lattice is produced during the mineral replacement reaction by a lattice mismatch and volume change between parent and product. Stored strain energy then provides a driving force for recovery and

  4. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how the most efficient algorithms compare across computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  5. Vectorization of nuclear codes on FACOM 230-75 APU computer

    International Nuclear Information System (INIS)

    Harada, Hiroo; Higuchi, Kenji; Ishiguro, Misako; Tsutsui, Tsuneo; Fujii, Minoru

    1983-02-01

    In preparation for the future use of supercomputers, we have investigated the vector processing efficiency of nuclear codes in use at JAERI. The investigation was performed on the FACOM 230-75 APU computer. The codes are CITATION (3D neutron diffusion), SAP5 (structural analysis), CASCMARL (irradiation damage simulation), FEM-BABEL (3D neutron diffusion by FEM), GMSCOPE (microscope simulation) and DWBA (cross section calculation for molecular collisions). A new type of cell density calculation for the particle-in-cell method is also investigated. For each code we have obtained a significant speedup, ranging from 1.8 (CASCMARL) to 7.5 (GMSCOPE). We describe in this report the running-time dynamic profile analysis of the codes, the numerical algorithms used, the program restructuring for vectorization, numerical experiments on the iterative process, vectorization ratios, speedup ratios on the FACOM 230-75 APU computer, and some views on vectorization. (author)

  6. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    Science.gov (United States)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical predication approaches.

  7. Counting Subspaces of a Finite Vector Space – 1

    Indian Academy of Sciences (India)

    ply refer the reader to [2], where an exposition of Gauss's proof (among ... obtained. The above process can be easily reversed: let e1, ..., ek denote the k coordinate vectors in F^n, written as columns. Starting with a Ferrers diagram λ in a k×(n−k) grid, replace ... consists of n segments of unit length, of which k are vertical and ...

  8. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    International Nuclear Information System (INIS)

    Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Novak, M; Wenzel, S; Bandieramonte, M; Bitzes, G; Canal, P; Elvira, V D; Jun, S Y; Lima, G; Licht, J C De Fine; Duhem, L; Sehgal, R; Shadura, O

    2015-01-01

    The GeantV project is focused on the R and D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of applications. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching between vector to single track mode when vectorization causes only overhead. This work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting, presenting here its initial results. (paper)

  9. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.

  10. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements, involves a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm with the definition of the adjacency region of the point cloud and a calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large amounts of point cloud data.
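
    The fine-registration stage reduces to the classic point-to-point ICP loop; a bare-bones NumPy sketch follows, with brute-force nearest-neighbour matching and a Kabsch/SVD rigid-transform fit. The FPFH-based coarse alignment is assumed to have been done already, and all names and data here are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    # Least-squares rotation/translation mapping P onto Q (Kabsch/SVD).
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=50, tol=1e-8):
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, fine for a sketch).
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matches = dst[d.argmin(1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        err = np.linalg.norm(cur - matches, axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur

rng = np.random.default_rng(3)
dst = rng.normal(size=(100, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.2, -0.1, 0.05])   # rotated, shifted copy
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```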

  11. Elements of mathematics topological vector spaces

    CERN Document Server

    Bourbaki, Nicolas

    2003-01-01

    This is a softcover reprint of the English translation of 1987 of the second edition of Bourbaki's Espaces Vectoriels Topologiques (1981). This second edition is a brand new book and completely supersedes the original version of nearly 30 years ago. But a lot of the material has been rearranged, rewritten, or replaced by a more up-to-date exposition, and a good deal of new material has been incorporated in this book, all reflecting the progress made in the field during the last three decades. Table of Contents. Chapter I: Topological vector spaces over a valued field. Chapter II: Convex sets and locally convex spaces. Chapter III: Spaces of continuous linear mappings. Chapter IV: Duality in topological vector spaces. Chapter V: Hilbert spaces (elementary theory). Finally, there are the usual "historical note", bibliography, index of notation, index of terminology, and a list of some important properties of Banach spaces. (Based on Math Reviews, 1983).

  12. A new hybrid imperialist competitive algorithm on data clustering

    Indian Academy of Sciences (India)

    Modified imperialist competitive algorithm; simulated annealing; ... Clustering is one of the unsupervised learning branches where a set of patterns, usually vectors ..... machine classification is based on design, operation, and/or purpose.

  13. Multi-kilobase homozygous targeted gene replacement in human induced pluripotent stem cells.

    Science.gov (United States)

    Byrne, Susan M; Ortiz, Luis; Mali, Prashant; Aach, John; Church, George M

    2015-02-18

    Sequence-specific nucleases such as TALEN and the CRISPR/Cas9 system have so far been used to disrupt, correct or insert transgenes at precise locations in mammalian genomes. We demonstrate efficient 'knock-in' targeted replacement of multi-kilobase genes in human induced pluripotent stem cells (iPSC). Using a model system replacing endogenous human genes with their mouse counterpart, we performed a comprehensive study of targeting vector design parameters for homologous recombination. A 2.7 kilobase (kb) homozygous gene replacement was achieved in up to 11% of iPSC without selection. The optimal homology arm length was around 2 kb, with homology length being especially critical on the arm not adjacent to the cut site. Homologous sequence inside the cut sites was detrimental to targeting efficiency, consistent with a synthesis-dependent strand annealing (SDSA) mechanism. Using two nuclease sites, we observed a high degree of gene excisions and inversions, which sometimes occurred more frequently than indel mutations. While homozygous deletions of 86 kb were achieved with up to 8% frequency, deletion frequencies were not solely a function of nuclease activity and deletion size. Our results analyzing the optimal parameters for targeting vector design will inform future gene targeting efforts involving multi-kilobase gene segments, particularly in human iPSC. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Involutive distributions of operator-valued evolutionary vector fields and their affine geometry

    NARCIS (Netherlands)

    Kiselev, A.V.; van de Leur, J.W.

    2010-01-01

    We generalize the notion of a Lie algebroid over infinite jet bundle by replacing the variational anchor with an N-tuple of differential operators whose images in the Lie algebra of evolutionary vector fields of the jet space are subject to collective commutation closure. The linear space of such

  15. Numerical solution of integral equations, describing mass spectrum of vector mesons

    International Nuclear Information System (INIS)

    Zhidkov, E.P.; Nikonov, E.G.; Sidorov, A.V.; Skachkov, N.B.; Khoromskij, B.N.

    1988-01-01

    A description of the numerical algorithm for solving the quasipotential integral equation in momentum space is presented. The results of numerical computations of the vector meson mass spectrum and the leptonic decay widths are given in comparison with the experimental data

  16. Testing the statistical isotropy of large scale structure with multipole vectors

    International Nuclear Information System (INIS)

    Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.

    2011-01-01

    A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.

  17. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm used is based on the classification of vectors of topological features of the original image. We use an image processing scheme which includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of the descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. We give a detailed algorithm for the construction of the topological descriptors and for the classifier creation procedure, which combines the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  18. Classification Formula and Generation Algorithm of Cycle Decomposition Expression for Dihedral Groups

    Directory of Open Access Journals (Sweden)

    Dakun Zhang

    2013-01-01

    Full Text Available The necessity of classification research on common formulae for the cycle decomposition expressions of groups (dihedral groups) is illustrated. Considering the reflection and rotation conversions, six common formulae for the cycle decomposition expressions of the group are derived; a generation algorithm for the cycle decomposition expressions of the group, based on the method of replacement conversion and the classification formulae, is designed; algorithm analysis and the results of the process show that the generation algorithm based on the classification formulae outperforms the general algorithm based on replacement conversion. This is of great significance for solving the enumeration of necklace combinational schemes, especially the structural problems of such schemes, by using group theory and computers.

  19. Aging Detection of Electrical Point Machines Based on Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Jaewon Sa

    2017-11-01

    Full Text Available Electrical point machines (EPM) must be replaced at an appropriate time to prevent the occurrence of operational safety or stability problems in trains resulting from aging or budget constraints. However, it is difficult to replace EPMs effectively because the aging condition of an EPM depends on its operating environment, and thus a fixed guideline is typically not suitable for replacing EPMs at the most timely moment. In this study, we propose a classification method for the detection of an aging effect to facilitate the timely replacement of EPMs. We employ support vector data description to segregate data of “aged” and “not-yet-aged” equipment by analyzing the subtle differences in normalized electrical signals resulting from aging. Based on the before- and after-replacement data obtained from experimental studies conducted on EPMs, we confirmed that the proposed method is capable of classifying machines based on exhibited aging effects with adequate accuracy.
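
    A hedged sketch of the classification idea: scikit-learn ships no SVDD class, but its One-Class SVM with an RBF kernel is the standard stand-in (the two formulations are equivalent for that kernel). The "aged" and "not-yet-aged" signals below are synthetic placeholders, not EPM data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
normal_sigs = rng.normal(0.0, 1.0, (200, 16))   # normalized "not-yet-aged" signals
aged_sigs = rng.normal(0.8, 1.3, (20, 16))      # subtly shifted "aged" signals

# Learn a boundary around the normal data only; nu bounds the outlier fraction.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(normal_sigs)

pred = model.predict(aged_sigs)                 # +1 inside boundary, -1 outside
print("flagged as aged:", int((pred == -1).sum()), "of", len(aged_sigs))
```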

  20. Lost-in-Space Star Identification Using Planar Triangle Principal Component Analysis Algorithm

    Directory of Open Access Journals (Sweden)

    Fuqiang Zhou

    2015-01-01

    Full Text Available It is a challenging task for a star sensor to implement star identification and determine the attitude of a spacecraft in the lost-in-space mode. Several algorithms based on the triangle method have been proposed for star identification in this mode. However, these methods suffer from long run times and large guide star catalog memory requirements, and their star identification performance requires improvement. To address these problems, a star identification algorithm using planar triangle principal component analysis is presented here. A star pattern is generated based on the planar triangle created by stars within the field of view of a star sensor and the projection of the triangle. Since a projection can determine an index for a unique triangle in the catalog, the adoption of the k-vector range search technique makes this algorithm very fast. In addition, a sharing star validation method is constructed to verify the identification results. Simulation results show that the proposed algorithm is more robust than the planar triangle and P-vector algorithms under the same conditions.
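
    The k-vector range search amounts to a precomputed index over a sorted feature catalog; as a stand-in, the sketch below answers the same kind of range query with plain binary search, which returns the identical candidate set (only the per-query cost differs). The catalog values and tolerance are made-up numbers.

```python
import bisect
import numpy as np

rng = np.random.default_rng(5)
# Sorted scalar features of catalog triangles (e.g. the triangle projections).
catalog_features = sorted(rng.uniform(0.0, 1.0, 5000).tolist())

def range_query(sorted_vals, lo, hi):
    # Indices of all catalog entries with lo <= value <= hi.
    i = bisect.bisect_left(sorted_vals, lo)
    j = bisect.bisect_right(sorted_vals, hi)
    return range(i, j)

measured = 0.42731   # feature computed from the observed star triangle
tol = 1e-3           # measurement uncertainty
candidates = range_query(catalog_features, measured - tol, measured + tol)
print("candidate catalog triangles:", len(candidates))
```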

  1. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for high dimensions is increasing considerably. To transmit or to store such images (more than 6000 by 6000 pixels), we need to reduce their data volume and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favour of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme, based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among all of the 3x3x2 systems available. Because, for technological reasons, real time is not always reached (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the architecture best suited to real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr

  2. Marginalized particle filter for spacecraft attitude estimation from vector measurements

    Institute of Scientific and Technical Information of China (English)

    Yaqiu LIU; Xueyuan JIANG; Guangfu MA

    2007-01-01

    An algorithm based on the marginalized particle filter (MPF) is given in detail in this paper to solve the spacecraft attitude estimation problem: attitude and gyro bias estimation using biased gyro and vector observations. In this algorithm, by marginalizing out the state appearing linearly in the spacecraft model, a Kalman filter is associated with each particle in order to reduce the size of the state space and the computational burden. The distribution of the attitude vector is approximated by a set of particles and estimated using the particle filter, while the estimate of the gyro bias is obtained for each of the attitude particles by applying the Kalman filter. The efficiency of this modified MPF estimator is verified through numerical simulation of a fully actuated rigid body. For comparison, the unscented Kalman filter (UKF) is also used to gauge the performance of the MPF. The results presented in this paper clearly demonstrate that the MPF is superior to the UKF in coping with the nonlinear model.

  3. Solving large sets of coupled equations iteratively by vector processing on the CYBER 205 computer

    International Nuclear Information System (INIS)

    Tolsma, L.D.

    1985-01-01

    The set of coupled linear second-order differential equations which has to be solved for the quantum-mechanical description of inelastic scattering of atomic and nuclear particles can be rewritten as an equivalent set of coupled integral equations. When some type of functions is used as piecewise analytic reference solutions, the integrals that arise in this set can be evaluated analytically. The set of integral equations can be solved iteratively. For the results mentioned an inward-outward iteration scheme has been applied. A concept of vectorization of coupled-channel Fortran programs, based on this integral method, is presented for the use on the Cyber 205 computer. It turns out that, for two heavy ion nuclear scattering test cases, this vector algorithm gives an overall speed-up of about a factor of 2 to 3 compared to a highly optimized scalar algorithm for a one vector pipeline computer

  4. Development of precursors recognition methods in vector signals

    Science.gov (United States)

    Kapralov, V. G.; Elagin, V. V.; Kaveeva, E. G.; Stankevich, L. A.; Dremin, M. M.; Krylov, S. V.; Borovov, A. E.; Harfush, H. A.; Sedov, K. S.

    2017-10-01

    Precursor recognition methods in vector signals of plasma diagnostics are presented. Their requirements and possible options for their development are considered. In particular, the variants of using symbolic regression for building a plasma disruption prediction system are discussed. The initial data preparation using correlation analysis and symbolic regression is discussed. Special attention is paid to the possibility of using algorithms in real time.

  5. An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-12-01

    Full Text Available This paper deals with an algorithm based on Self-Organizing Map (SOM) networks which classifies facial features. The proposed algorithm can categorize the facial features defined by the input variables (eyebrow, mouth, eyelids) into a map of their groupings. The group map is based on calculating the distance between each input vector and each neuron of the output layer, the neuron with the minimum distance being declared the winner neuron. The network structure consists of two levels: the first level contains three input vectors, each having forty-one values, while the second level contains the SOM competitive network, which consists of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed SOM-based algorithm.
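
    A small sketch of the competitive layer described above, with random stand-ins for the facial feature vectors: each input is compared with every output neuron's weight vector, the minimum-distance neuron wins, and the winner and its grid neighbours are pulled toward the input. Only the vector length (41) and neuron count (100) follow the record; everything else is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
inputs = rng.normal(size=(300, 41))            # feature vectors of length 41
grid = np.array([(i, j) for i in range(10) for j in range(10)])  # 10x10 map
W = rng.normal(size=(100, 41))                 # one weight vector per neuron

lr, radius = 0.5, 3.0
for epoch in range(20):
    for x in inputs:
        winner = np.linalg.norm(W - x, axis=1).argmin()   # minimum distance
        d2 = ((grid - grid[winner]) ** 2).sum(1)          # map-grid distances
        h = np.exp(-d2 / (2 * radius ** 2))               # neighbourhood kernel
        W += lr * h[:, None] * (x - W)                    # pull toward input
    lr *= 0.9; radius *= 0.9                              # decay schedules

# Classification: map a new feature vector to its winning neuron (group).
x_new = rng.normal(size=41)
print("group of new vector:", np.linalg.norm(W - x_new, axis=1).argmin())
```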

  6. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up of a factor of 1.5 over the conventional flight analysis. The code also adopted a multigroup cross section constants library of the Bondarenko type with 190 groups: 132 for the fast and epithermal regions and 58 for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)

  7. Remote sensing estimates of stand-replacement fires in Russia, 2002–2011

    International Nuclear Information System (INIS)

    Krylov, Alexander; Potapov, Peter; Loboda, Tatiana; Tyukavina, Alexandra; Turubanova, Svetlana; Hansen, Matthew C; McCarty, Jessica L

    2014-01-01

    The presented study quantifies the proportion of stand-replacement fires in Russian forests through the integrated analysis of Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) data products. We employed 30 m Landsat Enhanced Thematic Mapper Plus derived tree canopy cover and decadal (2001–2012) forest cover loss (Hansen et al 2013 High-resolution global maps of 21st-century forest cover change Science 342 850–53) to identify forest extent and disturbance. These data were overlaid with 1 km MODIS active fire (earthdata.nasa.gov/data/near-real-time-data/firms) and 500 m regional burned area data (Loboda et al 2007 Regionally adaptable dNBR-based algorithm for burned area mapping from MODIS data Remote Sens. Environ. 109 429–42 and Loboda et al 2011 Mapping burned area in Alaska using MODIS data: a data limitations-driven modification to the regional burned area algorithm Int. J. Wildl. Fire 20 487–96) to differentiate stand-replacement disturbances due to fire versus other causes. Total stand replacement forest fire area within the Russian Federation from 2002 to 2011 was estimated to be 17.6 million ha (Mha). The smallest stand-replacement fire loss occurred in 2004 (0.4 Mha) and the largest annual loss in 2003 (3.3 Mha). Of total burned area within forests, 33.6% resulted in stand-replacement. Light conifer stands comprised 65% of all non-stand-replacement and 79% of all stand-replacement fire in Russia. Stand-replacement area for the study period is estimated to be two times higher than the reported logging area. Results of this analysis can be used with historical fire regime estimations to develop effective fire management policy, increase accuracy of carbon calculations, and improve fire behavior and climate change modeling efforts. (paper)

  8. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    Science.gov (United States)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution is replaced with the multivariate Gaussian distribution for the vector data and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the newly created variable. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting the dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.

  9. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh sizes, accurate calculation is possible by the use of coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs the newly developed computation algorithm 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free systems like Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information on the usage of this code, including input data instructions and sample input data. (author)

  10. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th-order polynomial nodal expansion method (NEM). As the 4th-order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', suited to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speedup factor compared to scalar calculation is 20 to 40 for a PWR core calculation. Considering the combined effects of vectorization and the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers running UNIX or similar operating systems (e.g. free systems like Linux). Users can easily install it with the help of the interactive installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed usage information, including input data instructions and sample input data. (author)

  11. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    International Nuclear Information System (INIS)

    Sun Li-Sha; Kang Xiao-Yun; Zhang Qiong; Lin Lan-Xin

    2011-01-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge to their initial values under sufficient backward iteration of the symbolic vectors, depending on global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation under different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems. (general)
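
    The system under study has a standard form that is easy to state in code. The sketch below (our own minimal version; the symbolic-dynamics recovery procedure itself is not reproduced) iterates a globally coupled logistic map lattice forward from an initial vector of the kind the paper estimates.

```python
# Forward iteration of a globally coupled map lattice in the standard form
#   x_{n+1}(i) = (1 - eps) * f(x_n(i)) + (eps / N) * sum_j f(x_n(j)).
import numpy as np

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

def gcm_step(x, eps=0.1):
    fx = logistic(x)
    return (1.0 - eps) * fx + eps * fx.mean()   # global (mean-field) coupling

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 0.9, size=8)    # the unknown initial vector
orbit = [x]
for _ in range(50):                  # forward iterations observed at the receiver
    orbit.append(gcm_step(orbit[-1]))
# The cited method works backward from symbolic sequences of such orbits,
# relying on whether the inverse map contracts (the paper's CD property).
print(orbit[-1])
```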

  12. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    Science.gov (United States)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge to their initial values under sufficient backward iteration of the symbolic vectors, depending on global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation under different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.

  13. Limits on the efficiency of event-based algorithms for Monte Carlo neutron transport

    Directory of Open Access Journals (Sweden)

    Paul K. Romano

    2017-09-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width to achieve a vector efficiency above 90%. When the execution times for events are allowed to vary, the vector speedup is also limited by differences in the execution time for events being carried out in a single event-iteration.
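
    The efficiency argument can be illustrated with a back-of-envelope model. The sketch below makes assumptions of our own, not the paper's exact model: each event-iteration processes the surviving particles in padded SIMD chunks of width W, a fixed fraction of histories terminates per iteration, and efficiency is useful lanes divided by issued lanes.

```python
# Toy vector-efficiency model for event-based transport (our assumptions).
import math

def vector_efficiency(bank_size, width, survival=0.9):
    active, useful, issued = bank_size, 0, 0
    while active >= 1:
        useful += active
        issued += math.ceil(active / width) * width  # padded SIMD chunks
        active = int(active * survival)              # some histories finish
    return useful / issued

for bank in (512, 4096, 32768):
    print(bank, round(vector_efficiency(bank, width=16), 3))
# Efficiency climbs toward 1 as the bank grows relative to the vector width,
# qualitatively matching the ~20x bank/width rule of thumb quoted above.
```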

  14. Contact replacement for NMR resonance assignment.

    Science.gov (United States)

    Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris

    2008-07-01

    Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 Å), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy: above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.

  15. Efficient immunoglobulin gene disruption and targeted replacement in rabbit using zinc finger nucleases.

    Directory of Open Access Journals (Sweden)

    Tatiana Flisikowska

    Rabbits are widely used in biomedical research, yet techniques for their precise genetic modification are lacking. We demonstrate that zinc finger nucleases (ZFNs) introduced into fertilized oocytes can inactivate a chosen gene by mutagenesis and also mediate precise homologous recombination with a DNA gene-targeting vector to achieve the first gene knockout and targeted sequence replacement in rabbits. Two ZFN pairs were designed that target the rabbit immunoglobulin M (IgM) locus within exons 1 and 2. ZFN mRNAs were microinjected into pronuclear stage fertilized oocytes. Founder animals carrying distinct mutated IgM alleles were identified and bred to produce offspring. Functional knockout of the immunoglobulin heavy chain locus was confirmed by serum IgM and IgG deficiency and lack of IgM+ and IgG+ B lymphocytes. We then tested whether ZFN expression would enable efficient targeted sequence replacement in rabbit oocytes. ZFN mRNA was co-injected with a linear DNA vector designed to replace exon 1 of the IgM locus with ∼1.9 kb of novel sequence. Double strand break induced targeted replacement occurred in up to 17% of embryos and in 18% of fetuses analyzed. Two major goals have been achieved: first, inactivation of the endogenous IgM locus, which is an essential step for the production of therapeutic human polyclonal antibodies in the rabbit; second, establishing efficient targeted gene manipulation and homologous recombination in a refractory animal species. ZFN-mediated genetic engineering in the rabbit and other mammals opens new avenues of experimentation in immunology and many other research fields.

  16. Variation in efficiency of parallel algorithms [for study of stiffness matrices in planar trusses]

    Science.gov (United States)

    Hayashi, A.; Melosh, R. J.; Utku, S.; Salama, M.

    1985-01-01

    The objective of the present study is to investigate several iterative parallel-processor linear equation solving algorithms with respect to their efficiency for analyses of typical linear engineering systems. Attention is given to a set of n linear equations, Ku = p, where K is an n x n positive definite, sparsely populated, symmetric matrix, u is an n x 1 vector of unknown responses, and p is an n x 1 vector of prescribed constants. This study is concerned with a hybrid method in which iteration is used to solve the problem, while a direct method is used on the local processor level. Variations in the efficiency of parallel algorithms are explored. Measures of the efficiency are based on computer experiments with the algorithms. For all the algorithms, the wall clock time is found to decrease as the number of processors increases.
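
    The hybrid scheme described, global iteration with a direct method at the local processor level, can be sketched as a block-Jacobi solver in which each "processor" owns one diagonal block of K and solves it directly. The block size and test matrix below are our own illustrative choices.

```python
# Block-Jacobi hybrid: Jacobi-style global iteration, direct local solves.
import numpy as np

def block_jacobi(K, p, block, iters=200):
    n = K.shape[0]
    u = np.zeros(n)
    slices = [slice(i, min(i + block, n)) for i in range(0, n, block)]
    # Pre-factor each local diagonal block (the per-processor direct method).
    local = [np.linalg.inv(K[s, s]) for s in slices]
    for _ in range(iters):
        r = p - K @ u                     # global residual (communication step)
        for s, Kinv in zip(slices, local):
            u[s] += Kinv @ r[s]           # independent local direct solves
    return u

n = 64
K = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))      # SPD stiffness-like test matrix
p = np.ones(n)
u = block_jacobi(K, p, block=8)
print(np.linalg.norm(K @ u - p))          # residual shrinks with iterations
```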

  17. Distribution agnostic structured sparsity recovery algorithms

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2013-05-01

    We present an algorithm and its variants for sparse signal recovery from a small number of measurements in a distribution-agnostic manner. The proposed algorithm finds a Bayesian estimate of the sparse signal to be recovered while remaining indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm also has the capability of refining the estimates of the signal and required parameters in the absence of the exact parameter values. This inherent distribution-agnostic feature grants the algorithm the flexibility to adapt itself to several related problems. Specifically, we present two important extensions to this algorithm: one handles the problem of recovering sparse signals having block structures, while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants have superior performance to most state-of-the-art algorithms, and at low computational expense.

  18. Resolving the 180-degree ambiguity in vector magnetic field measurements: The 'minimum' energy solution

    Science.gov (United States)

    Metcalf, Thomas R.

    1994-01-01

    I present a robust algorithm that resolves the 180-deg ambiguity in measurements of the solar vector magnetic field. The technique simultaneously minimizes both the divergence of the magnetic field and the electric current density using a simulated annealing algorithm. This results in the field orientation with approximately minimum free energy. The technique is well-founded physically and is simple to implement.
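
    A toy 2-D analogue conveys the idea (our simplification, not the paper's solar-physics formulation): each pixel's transverse field is known only up to a 180-degree flip, and simulated annealing chooses the flips that minimize a finite-difference squared-divergence proxy for the energy (the real method also penalizes the electric current density).

```python
# Toy simulated annealing for the 180-degree ambiguity (2-D divergence proxy).
import numpy as np

rng = np.random.default_rng(2)
n = 24
yy, xx = np.mgrid[0:n, 0:n] / n
theta = 2.0 * np.pi * (xx + 0.3 * yy)          # smooth "true" azimuth
Bx, By = np.cos(theta), np.sin(theta)
s = rng.choice([-1.0, 1.0], size=(n, n))       # random 180-degree ambiguity

def energy(sign):
    # Full recompute for clarity; real codes update the energy locally.
    div = np.gradient(sign * Bx, axis=1) + np.gradient(sign * By, axis=0)
    return np.sum(div ** 2)

E, T = energy(s), 2.0
for step in range(30000):
    i, j = rng.integers(n), rng.integers(n)
    s[i, j] *= -1.0                             # trial flip of one pixel
    E_new = energy(s)
    if E_new < E or rng.random() < np.exp((E - E_new) / T):
        E = E_new                               # accept (Metropolis rule)
    else:
        s[i, j] *= -1.0                         # reject, undo the flip
    T = max(1e-3, T * 0.9995)                   # geometric cooling schedule
# A global sign flip is also a minimum, so report the dominant orientation.
print("fraction aligned:", max((s > 0).mean(), (s < 0).mean()))
```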

  19. Application of Improved APO Algorithm in Vulnerability Assessment and Reconstruction of Microgrid

    Science.gov (United States)

    Xie, Jili; Ma, Hailing

    2018-01-01

    Artificial Physics Optimization (APO) has good global search ability, avoids the premature convergence phenomenon seen in the PSO algorithm, and offers stable, fast convergence and good robustness. On the basis of the vector-model APO, a reactive power optimization algorithm based on an improved APO algorithm is proposed for the static structure and dynamic operation characteristics of a microgrid. A simulation test is carried out on the IEEE 30-bus system, and the results show that the algorithm has better efficiency and accuracy than other optimization algorithms.

  20. Heterologous protein secretion in Lactobacilli with modified pSIP vectors.

    Directory of Open Access Journals (Sweden)

    Ingrid Lea Karlskås

    We describe new variants of the modular pSIP vectors for inducible gene expression and protein secretion in lactobacilli. The basic functionality of the pSIP system was tested in Lactobacillus strains representing 14 species using pSIP411, which harbors the broad-host-range Lactococcus lactis SH71rep replicon and a β-glucuronidase-encoding reporter gene. In 10 species, the inducible gene expression system was functional. Based on these results, three pSIP vectors with different signal peptides were modified by replacing their narrow-host-range L. plantarum 256rep replicon with SH71rep and transformed into strains of five different species of Lactobacillus. All recombinant strains secreted the target protein NucA, albeit with varying production levels and secretion efficiencies. The Lp_3050-derived signal peptide generally resulted in the highest levels of secreted NucA. These modified pSIP vectors are useful tools for engineering a wide variety of Lactobacillus species.

  1. A Structurally Simplified Hybrid Model of Genetic Algorithm and Support Vector Machine for Prediction of Chlorophyll a in Reservoirs

    Directory of Open Access Journals (Sweden)

    Jieqiong Su

    2015-04-01

    With decreasing water availability as a result of climate change and human activities, analysis of the influential factors and variation trends of chlorophyll a has become important to prevent reservoir eutrophication and ensure water supply safety. In this paper, a structurally simplified hybrid model of the genetic algorithm (GA) and the support vector machine (SVM) was developed for the prediction of the monthly concentration of chlorophyll a in the Miyun Reservoir of northern China over the period from 2000 to 2010. Based on the influence factor analysis, the four most relevant influence factors of chlorophyll a (i.e., total phosphorus, total nitrogen, permanganate index, and reservoir storage) were extracted using the method of feature selection with the GA, which simplified the model structure, making it more practical and efficient for environmental management. The results showed that the developed simplified GA-SVM model could solve nonlinear problems of a complex system, and was suitable for the simulation and prediction of chlorophyll a, with better performance in accuracy and efficiency, in the Miyun Reservoir.
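
    The wrapper idea, a GA searching over binary feature masks with cross-validated SVM performance as fitness, can be sketched compactly in scikit-learn. The data below are synthetic (the paper's four water-quality predictors and the Miyun Reservoir records are not used), and the GA settings are illustrative.

```python
# GA feature selection wrapped around an SVR: chromosomes are binary masks.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X, y = make_regression(n_samples=120, n_features=10, n_informative=4,
                       noise=5.0, random_state=3)

def fitness(mask):
    if not mask.any():
        return -np.inf                            # empty mask is invalid
    return cross_val_score(SVR(C=10.0), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(X.shape[1]) < 0.05))  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```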

  2. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  3. Density Based Support Vector Machines for Classification

    OpenAIRE

    Zahra Nazari; Dongshik Kang

    2015-01-01

    Support Vector Machines (SVM) is the most successful algorithm for classification problems. SVM learns the decision boundary from two classes (for binary classification) of training points. However, sometimes there are less meaningful samples among the training points, corrupted by noise or misplaced on the wrong side, called outliers. These outliers affect the margin and the classification performance, and the machine should preferably discard them. SVM as a popular and widely used cl...

  4. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    Science.gov (United States)

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2016-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591
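
    Morphological reconstruction, one of the use cases above, shows the wavefront pattern well: active elements leave a queue, raise their neighbors, and re-activate them. The plain-Python FIFO sketch below is our own minimal scalar baseline, the kind of loop the SIMD-restructured algorithm accelerates.

```python
# Serial queue-based morphological reconstruction by dilation (IWPP use case).
import numpy as np
from collections import deque

def morph_reconstruct(marker, mask):
    """Grow marker under mask via a FIFO wavefront until stable."""
    out = np.minimum(marker, mask).astype(float)
    h, w = out.shape
    queue = deque((i, j) for i in range(h) for j in range(w))
    while queue:                                  # irregular wavefront
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                v = min(out[i, j], mask[ni, nj])
                if v > out[ni, nj]:               # neighbor can still grow
                    out[ni, nj] = v
                    queue.append((ni, nj))        # re-activate the element
    return out

mask = np.array([[0, 2, 2, 0],
                 [0, 3, 4, 0],
                 [0, 2, 2, 0],
                 [0, 0, 0, 0]], float)
marker = np.zeros_like(mask); marker[1, 2] = 4.0
print(morph_reconstruct(marker, mask))
```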

  5. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™.

    Science.gov (United States)

    Gomes, Jeremias M; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H

    2015-10-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP's irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations.

  6. Some uses of the symmetric Lanczos algorithm - and why it works!

    Energy Technology Data Exchange (ETDEWEB)

    Druskin, V.L. [Schlumberger-Doll Research, Ridgefield, CT (United States); Greenbaum, A. [Courant Institute of Mathematical Sciences, New York, NY (United States); Knizhnerman, L.A. [Central Geophysical Expedition, Moscow (Russian Federation)

    1996-12-31

    The Lanczos algorithm uses a three-term recurrence to construct an orthonormal basis for the Krylov space corresponding to a symmetric matrix A and a starting vector q1. The vectors and recurrence coefficients produced by this algorithm can be used for a number of purposes, including solving linear systems Au = φ and computing the matrix exponential e^(-tA)φ. Although the vectors produced in finite precision arithmetic are not orthogonal, we show why they can still be used effectively for these purposes. The reason is that the 2-norm of the residual is essentially determined by the tridiagonal matrix and the next recurrence coefficient produced by the finite precision Lanczos computation. It follows that if the same tridiagonal matrix and recurrence coefficient are produced by the exact Lanczos algorithm applied to some other problem, then exact arithmetic bounds on the residual for that problem will hold for the finite precision computation. In order to establish exact arithmetic bounds for the different problem, it is necessary to have some information about the eigenvalues of the new coefficient matrix. Here we make use of information already established in the literature, and we also prove a new result for indefinite matrices.
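
    The three-term recurrence itself is short enough to state in full. The sketch below is the textbook exact-arithmetic form; the loss of orthogonality it suffers in finite precision is exactly what the analysis above addresses.

```python
# Symmetric Lanczos: builds an orthonormal Krylov basis Q and tridiagonal T.
import numpy as np

def lanczos(A, q1, k):
    n = len(q1)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q, q_prev, b = q1 / np.linalg.norm(q1), np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b * q_prev            # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            b = np.linalg.norm(w)         # (breakdown b == 0 ignored here)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(4)
M = rng.normal(size=(60, 60)); A = M + M.T     # symmetric test matrix
Q, T = lanczos(A, rng.normal(size=60), k=20)
# Extreme Ritz values (eigenvalues of T) already approximate those of A.
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```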

  7. Lagrangian analysis of vector and tensor fields: Algorithmic foundations and applications in medical imaging and computational fluid dynamics

    OpenAIRE

    Ding, Zi'ang

    2016-01-01

    Both vector and tensor fields are important mathematical tools used to describe the physics of many phenomena in science and engineering. Effective vector and tensor field visualization techniques are therefore needed to interpret and analyze the corresponding data and achieve new insight into the considered problem. This dissertation is concerned with the extraction of important structural properties from vector and tensor datasets. Specifically, we present a unified approach for the charact...

  8. Surveillance of arthropod vector-borne infectious diseases using remote sensing techniques: a review.

    Directory of Open Access Journals (Sweden)

    Satya Kalluri

    2007-10-01

    Epidemiologists are adopting new remote sensing techniques to study a variety of vector-borne diseases. Associations between satellite-derived environmental variables such as temperature, humidity, and land cover type and vector density are used to identify and characterize vector habitats. The convergence of factors such as the availability of multi-temporal satellite data and georeferenced epidemiological data, collaboration between remote sensing scientists and biologists, and the availability of sophisticated, statistical geographic information system and image processing algorithms in a desktop environment creates a fertile research environment. The use of remote sensing techniques to map vector-borne diseases has evolved significantly over the past 25 years. In this paper, we review the status of remote sensing studies of arthropod vector-borne diseases due to mosquitoes, ticks, blackflies, tsetse flies, and sandflies, which are responsible for the majority of vector-borne diseases in the world. Examples of simple image classification techniques that associate land use and land cover types with vector habitats, as well as complex statistical models that link satellite-derived multi-temporal meteorological observations with vector biology and abundance, are discussed here. Future improvements in remote sensing applications in epidemiology are also discussed.

  9. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
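
    The efficient serial algorithm that the MIMD versions parallelize rests on LRU stack distances: a reference hits in every fully associative LRU cache whose size is at least its stack distance, so one pass over a trace yields hit ratios for all cache sizes at once. A minimal serial version:

```python
# Single-pass LRU stack-distance computation (serial baseline).
def stack_distances(trace):
    stack, dists = [], []
    for addr in trace:
        if addr in stack:                 # linear scan; trees make this O(log n)
            d = len(stack) - stack.index(addr)
            stack.remove(addr)
        else:
            d = float("inf")              # cold (compulsory) miss
        dists.append(d)
        stack.append(addr)                # most recently used goes on top
    return dists

trace = ["a", "b", "c", "a", "b", "b", "d", "a"]
print(stack_distances(trace))   # [inf, inf, inf, 3, 3, 1, inf, 3]
```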

  10. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  11. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  12. On the Efficiency of Algorithms for Solving Hartree–Fock and Kohn–Sham Response Equations

    DEFF Research Database (Denmark)

    Kauczor, Joanna; Jørgensen, Poul; Norman, Patrick

    2011-01-01

    The response equations occurring in Hartree–Fock, multiconfigurational self-consistent field, and Kohn–Sham density functional theory have identical matrix structures. The algorithms used for solving these equations are discussed, and new algorithms are proposed in which trial vectors ...

  13. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or between processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative, linear algebra, attaining new communication lower bounds and obtaining large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g., A(i), B(i, j+k, k+3*m-7), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with the nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a ...
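
    A much-simplified sketch of order-independent summation (inspired by, but far cruder than, the reproducible-sum algorithms referenced above): pre-round every addend to one shared bit boundary, after which the additions are exact and the result cannot depend on the summation order. The keep_bits parameter and the single shared bin are our simplifications; the scheme trades away accuracy in the smallest addends.

```python
# Pre-rounding reproducible sum: bitwise identical under any permutation.
import math
import random

def reproducible_sum(xs, keep_bits=40):
    """Round every addend to a shared bit boundary, then sum exactly."""
    m = max((math.frexp(abs(x))[1] for x in xs if x != 0.0), default=0)
    q = math.ldexp(1.0, m - keep_bits)          # shared quantum 2^(m - keep_bits)
    total = sum(round(x / q) for x in xs)       # exact integer arithmetic
    return total * q

random.seed(5)
data = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8) for _ in range(10000)]
s1 = reproducible_sum(data)
random.shuffle(data)                            # a different summation order
s2 = reproducible_sum(data)
print(s1 == s2, s1)                             # True: bitwise identical
```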

  14. Analysis and Speed Ripple Mitigation of a Space Vector Pulse Width Modulation-Based Permanent Magnet Synchronous Motor with a Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2016-11-01

    A method is proposed for reducing the speed ripple of permanent magnet synchronous motors (PMSMs) controlled by space vector pulse width modulation (SVPWM). A flux graph and mathematical analysis are used to study the speed ripple characteristics of the PMSM. The analysis indicates that the 6P (where P is the number of pole pairs) time harmonic of the rotor mechanical speed is the main harmonic component in the SVPWM-controlled PMSM system. To reduce PMSM speed ripple, harmonics are superposed on the SVPWM reference signal. A particle swarm optimization (PSO) algorithm is proposed to determine the optimal phase and multiplier coefficient of the superposed harmonics. The results of a Fourier decomposition and an optimized simulation model verified the accuracy of the analysis and the effectiveness of the speed ripple reduction methods, respectively.
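
    A generic global-best PSO is easy to sketch. The objective below is our own toy stand-in for the motor model: find the amplitude and phase of an injected 6th harmonic that cancels a given ripple component, minimizing the RMS of the residual.

```python
# Global-best PSO on a toy harmonic-cancellation objective (not the PMSM model).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 2.0 * np.pi, 512)
A0, phi0 = 0.8, 1.1                            # "unknown" ripple parameters

def ripple_rms(p):
    amp, phase = p
    residual = A0 * np.sin(6 * t + phi0) + amp * np.sin(6 * t + phase)
    return np.sqrt(np.mean(residual ** 2))

n, dim = 30, 2
pos = rng.uniform([-2, -np.pi], [2, np.pi], size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([ripple_rms(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for it in range(120):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Inertia plus cognitive (pbest) and social (gbest) pulls.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([ripple_rms(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print("amp, phase:", gbest, "rms:", ripple_rms(gbest))
# Expected optimum: |amp| ~ A0 with a phase giving destructive injection.
```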

  15. Knee replacement and Diagnosis-Related Groups (DRGs): patient classification and hospital reimbursement in 11 European countries.

    Science.gov (United States)

    Tan, Siok Swan; Chiarello, Pietro; Quentin, Wilm

    2013-11-01

    Researchers from 11 countries (Austria, England, Estonia, Finland, France, Germany, Ireland, Netherlands, Poland, Spain, and Sweden) compared how their Diagnosis-Related Group (DRG) systems deal with knee replacement cases. The study aims to assist knee surgeons and national authorities to optimize the grouping algorithm of their DRG systems. National or regional databases were used to identify hospital cases treated with a procedure of knee replacement. DRG classification algorithms and indicators of resource consumption were compared for those DRGs that together comprised at least 97% of cases. Five standardized case scenarios were defined and quasi-prices according to national DRG-based hospital payment systems ascertained. Grouping algorithms for knee replacement vary widely across countries: they classify cases according to different variables (between one and five classification variables) into diverging numbers of DRGs (between one and five DRGs). Even the most expensive DRGs generally have a cost index below 2.00, implying that grouping algorithms do not adequately account for cases that are more than twice as costly as the index DRG. Quasi-prices for the most complex case vary between €4,920 in Estonia and €14,081 in Spain. Most European DRG systems were observed to insufficiently consider the most important determinants of resource consumption. Several countries' DRG systems might be improved through the introduction of classification variables for revision of knee replacement or for the presence of complications or comorbidities. Ultimately, this would contribute to assuring adequate performance comparisons and fair hospital reimbursement on the basis of DRGs.

  16. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    The article considers the problem of data association in simultaneous localization and mapping (SLAM) for determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but are mainly controlled by a remote operator; an urgent task is to develop a control system that allows for autonomous flight. The SLAM algorithm, which predicts the location, speed, flight parameters, and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem using an improved ant algorithm. Data association for SLAM is meant to establish a matching set between observed landmarks and landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps avoid local optima, and setting limits on the pheromone along a route can increase the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase computation speed, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks that may be associated, using the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and ...

  17. Adeno-associated virus vectors can be efficiently produced without helper virus.

    Science.gov (United States)

    Matsushita, T; Elliger, S; Elliger, C; Podsakoff, G; Villarreal, L; Kurtzman, G J; Iwaki, Y; Colosi, P

    1998-07-01

    The purpose of this work was to develop an efficient method for the production of adeno-associated virus (AAV) vectors in the absence of helper virus. The adenovirus regions that mediate AAV vector replication were identified and assembled into a helper plasmid. These included the VA, E2A and E4 regions. When this helper plasmid was cotransfected into 293 cells, along with plasmids encoding the AAV vector and the rep and cap genes, AAV vector was produced as efficiently as when using adenovirus infection as a source of help. CMV-driven constructs expressing the E4orf6 and the 72-Mr E2A proteins were able to functionally replace the E4 and E2A regions, respectively. Therefore the minimum set of genes required to produce AAV helper activity equivalent to that provided by adenovirus infection consists of, or is a subset of, the following genes: the E4orf6 gene, the 72-Mr E2A protein gene, the VA RNA genes and the E1 region. AAV vector preparations made with adenovirus and by the helper virus-free method were essentially indistinguishable with respect to particle density, particle-to-infectivity ratio, capsomer ratio and efficiency of muscle transduction in vivo. Only AAV vector preparations made by the helper virus-free method were not reactive with anti-adenovirus sera.

  18. Comparison of Support Vector Machine (SVM) and Neural Network Models to Determine the Highest Stock Price Prediction Accuracy

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2017-09-01

    There are many types of investment for making money, one of which is shares. Shares are securities traded by companies on the global capital markets; a stock exchange, also called a stock market, hosts the buying and selling of these investments. To avoid losses when investing, a predictive analysis model with high accuracy, supported by large and accurate data sets, is needed; correct analysis techniques can reduce the risk for investors. Many models are used to analyze and predict stock price movements; in this study the researchers used a neural network (NN) model and a support vector machine (SVM) model. Based on this background, the problem can be formulated as follows: an algorithm is needed that can predict stock prices with a high accuracy rate, supported by added data sets for the prediction, and the two algorithms are to be investigated so that it can be concluded which yields the highest, most accurate prediction rate. The purpose of this study was therefore to compare the Neural Network algorithm and the Support Vector Machine algorithm to determine which has the highest stock price prediction accuracy, judged by the RMSE error value. The models were applied to 729 data sets of share values from the Hong Kong stock index, recorded at 5-minute intervals from July 20, 2016 at 16:26 until September 15, 2016 at 17:40, through a process of training, learning, and then testing. The result is that the neural network model achieved a prediction accuracy of 0.503 +/- 0.009 (micro 0.503), while using the support vector machine model ...

  19. Killing tensors and conformal Killing tensors from conformal Killing vectors

    International Nuclear Information System (INIS)

    Rani, Raffaele; Edgar, S Brian; Barnes, Alan

    2003-01-01

    Koutras has proposed some methods to construct reducible proper conformal Killing tensors and Killing tensors (which are, in general, irreducible) when a pair of orthogonal conformal Killing vectors exist in a given space. We give the completely general result demonstrating that this severe restriction of orthogonality is unnecessary. In addition, we correct and extend some results concerning Killing tensors constructed from a single conformal Killing vector. A number of examples demonstrate that it is possible to construct a much larger class of reducible proper conformal Killing tensors and Killing tensors than permitted by the Koutras algorithms. In particular, by showing that all conformal Killing tensors are reducible in conformally flat spaces, we have a method of constructing all conformal Killing tensors, and hence all the Killing tensors (which will in general be irreducible) of conformally flat spaces using their conformal Killing vectors

  20. A program for computing cohomology of Lie superalgebras of vector fields

    International Nuclear Information System (INIS)

    Kornyak, V.V.

    1998-01-01

    An algorithm and its C implementation for computing the cohomology of Lie algebras and superalgebras is described. When elaborating the algorithm we paid primary attention to cohomology in trivial, adjoint and coadjoint modules for Lie algebras and superalgebras of the formal vector fields. These algebras have found many applications to modern supersymmetric models of theoretical and mathematical physics. As an example, we present 3- and 5-cocycles from the cohomology in the trivial module for the Poisson algebra Po (2), as found by computer

  1. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...

  2. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    Science.gov (United States)

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than either the single SVM prediction model or the EMD-SVM prediction model without optimization.
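
    The decompose-predict-reconstruct pipeline can be sketched with scikit-learn. Below, a moving-average trend/residual split stands in for EMD (a true EMD would supply several IMFs), the SVR hyperparameters are fixed by hand rather than ABC-optimized, and the data are synthetic.

```python
# Decompose the series, fit one SVR per component, sum the component forecasts.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
t = np.arange(400, dtype=float)
power = np.clip(np.sin(2 * np.pi * t / 96), 0, None) + 0.1 * rng.normal(size=400)

# 1) Decompose: slow trend + fast residual (stand-in for EMD's IMFs + Res).
trend = np.convolve(power, np.ones(25) / 25, mode="same")
residual = power - trend

def lagged(series, lags=8):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# 2) Fit one SVR per component on lagged values, predict the last 50 points.
preds = []
for comp in (trend, residual):
    X, y = lagged(comp)
    model = SVR(C=10.0, gamma="scale").fit(X[:-50], y[:-50])
    preds.append(model.predict(X[-50:]))

# 3) Reconstruct: the forecast is the sum of the component forecasts.
forecast = preds[0] + preds[1]
print("RMSE:", np.sqrt(np.mean((forecast - power[-50:]) ** 2)))
```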

  3. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    Directory of Open Access Journals (Sweden)

    Xiang-ming Gao

    2017-01-01

    To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than either the single SVM prediction model or the EMD-SVM prediction model without optimization.

  4. A Novel Support Vector Machine with Globality-Locality Preserving

    Directory of Open Access Journals (Sweden)

    Cheng-Long Ma

    2014-01-01

    Support vector machine (SVM) is regarded as a powerful method for pattern classification. However, the solution of the primal optimization model of SVM is sensitive to the class distribution and may result in a non-robust solution. In order to overcome this shortcoming, an improved model, the support vector machine with globality-locality preserving (GLPSVM), is proposed. It introduces globality-locality preserving into the standard SVM, which can preserve the manifold structure of the data space. We present extensive experiments on the UCI machine learning data sets. The results validate the effectiveness of the proposed model, especially on the Wine and Iris databases, where the recognition rate is above 97% and outperforms all the algorithms that were developed from SVM.

  5. No evidence for the use of DIR, D-D fusions, chromosome 15 open reading frames or VH replacement in the peripheral repertoire was found on application of an improved algorithm, JointML, to 6329 human immunoglobulin H rearrangements

    DEFF Research Database (Denmark)

    Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R

    2006-01-01

    ... gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum ...

  6. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    This paper proposes an improved face recognition algorithm to identify mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy a similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithms also consider the plot of the summed similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm, with an average accuracy improvement of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real probe-to-gallery identification applications. Moreover, this proposed method can also be applied to other recognition systems, where it additionally improves recognition scores.

  7. Using Geometrical Properties for Fast Indexation of Gaussian Vector Quantizers

    Directory of Open Access Journals (Sweden)

    Vassilieva EA

    2007-01-01

    Vector quantization is a classical method used in mobile communications. Each sequence of samples of the discretized vocal signal is associated with the closest codevector of a given set called the codebook. Only the binary indices of these codevectors (the codewords) are transmitted over the channel. Since channels are generally noisy, the codewords received are often slightly different from the codewords sent. In order to minimize the distortion of the original signal due to this noisy transmission, codevectors indexed by codewords that differ in one bit should have a small mutual Euclidean distance. This paper is devoted to this problem of index assignment of binary codewords to the codevectors. When the vector quantizer has a Gaussian structure, we show that a fast index assignment algorithm based on simple geometrical and combinatorial considerations can improve the SNR at the receiver by 5 dB with respect to a purely random assignment. We also show that in the Gaussian case this algorithm outperforms the classical combinatorial approach in the field.

  8. Subspace identification of Hammerstein models using support vector machines

    International Nuclear Information System (INIS)

    Al-Dhaifallah, Mujahed

    2011-01-01

    System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off as it can represent some dynamic nonlinear systems very accurately, but is nonetheless quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a-priori structural information. Furthermore, there are well established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace algorithms for Hammerstein systems based on SVM regression.

  9. Multiplex protein pattern unmixing using a non-linear variable-weighted support vector machine as optimized by a particle swarm optimization algorithm.

    Science.gov (United States)

    Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin

    2016-01-15

    Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrated robust modeling technique with flexible and rational variable selection. Optimization by a global stochastic technique, the particle swarm optimization (PSO) algorithm, makes VW-SVM an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods.

  10. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity and prevalence.
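
    A compact version of such a comparison (synthetic data, default hyperparameters, our own train/test split) scores the five classifiers on a held-out test set:

```python
# Score SVM, DT, KNN, NB and RF on one binary classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "DT": DecisionTreeClassifier(random_state=0),
          "KNN": KNeighborsClassifier(), "NB": GaussianNB(),
          "RF": RandomForestClassifier(random_state=0)}
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(f"{name:3s} acc={accuracy_score(yte, pred):.3f} "
          f"prec={precision_score(yte, pred):.3f} "
          f"rec={recall_score(yte, pred):.3f} f1={f1_score(yte, pred):.3f}")
# False/true positive rates, specificity, etc. follow from the confusion matrix.
```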

  11. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D ...

  12. Efficient Vector-Based Forwarding for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Xie

    2010-01-01

    Underwater Sensor Networks (UWSNs) are significantly different from terrestrial sensor networks in the following aspects: low bandwidth, high latency, node mobility, high error probability, and 3-dimensional space. These new features bring many challenges to the network protocol design of UWSNs. In this paper, we tackle one fundamental problem in UWSNs: robust, scalable, and energy efficient routing. We propose vector-based forwarding (VBF), a geographic routing protocol. In VBF, the forwarding path is guided by a vector from the source to the target, no state information is required on the sensor nodes, and only a small fraction of the nodes are involved in routing. To improve robustness, packets are forwarded along redundant and interleaved paths. Further, a localized and distributed self-adaptation algorithm allows the nodes to reduce energy consumption by discarding redundant packets. VBF performs well in dense networks. For sparse networks, we propose a hop-by-hop vector-based forwarding (HH-VBF) protocol, which adapts the vector-based approach at every hop. We evaluate the performance of VBF and HH-VBF through extensive simulations. The simulation results show that VBF achieves high packet delivery ratio and energy efficiency in dense networks, and HH-VBF has a high packet delivery ratio even in sparse networks.
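
    The core geometric test in VBF-style forwarding is easy to state: a node participates in routing only if it lies within a "routing pipe" of some radius around the source-to-target vector. The sketch below is ours; the parameter names are illustrative, not the protocol's.

```python
# Is a node inside the routing pipe around the source->target vector?
import numpy as np

def in_routing_pipe(node, source, target, radius):
    node, source, target = map(np.asarray, (node, source, target))
    seg = target - source
    # Project the node onto the source->target segment (clamped to its ends).
    s = np.clip(np.dot(node - source, seg) / np.dot(seg, seg), 0.0, 1.0)
    closest = source + s * seg
    return np.linalg.norm(node - closest) <= radius

source, target = (0, 0, 0), (100, 0, -50)     # 3-D positions (UWSN space)
print(in_routing_pipe((40, 8, -20), source, target, radius=15))   # True
print(in_routing_pipe((40, 60, -20), source, target, radius=15))  # False
```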

  13. Optimization of Support Vector Machine (SVM) for Object Classification

    Science.gov (United States)

    Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into species. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single kernel SVM known as SVMlight, and a modified version known as a SVM with K-Means Clustering were used. These SVM algorithms were tested as classifiers under varying conditions. Image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.

  14. Predicting Solar Flares Using SDO /HMI Vector Magnetic Data Products and the Random Forest Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang; Deng, Na; Wang, Haimin [Space Weather Research Laboratory, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States); Wang, Jason T. L., E-mail: chang.liu@njit.edu, E-mail: na.deng@njit.edu, E-mail: haimin.wang@njit.edu, E-mail: jason.t.wang@njit.edu [Department of Computer Science, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States)

    2017-07-10

    Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interests. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
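
    A hedged sketch of the pipeline described above, i.e. a multiclass random forest evaluated with 10-fold cross-validation; the feature matrix and labels below are random placeholders for the SHARP parameters and B/C/M/X classes, not the authors' database.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 13))    # stand-in for SHARP magnetic parameters
        y = rng.integers(0, 4, size=500)  # stand-in labels: B, C, M, X -> 0..3

        clf = RandomForestClassifier(n_estimators=500, random_state=1)
        print("mean 10-fold CV accuracy:",
              cross_val_score(clf, X, y, cv=10).mean())

        # Feature importances rank which parameters best separate the classes.
        print(clf.fit(X, y).feature_importances_)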

  15. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following was found: 1) there is little difference in computation speed; 2) the ISS method shows faster convergence; and 3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of the outer iterations compared with free iteration. (author)

  16. A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Chun Wang

    2017-01-01

    Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. Firstly, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Secondly, the multiobjective memetic algorithm based on decomposition is presented by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of a local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms or at least has comparable performance to the other algorithms.
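
    For reference, the Tchebycheff scalarization used by MOEA/D turns the objective vector f(x) into a single value against a weight vector lambda and an ideal point z*; the sketch and the sample numbers below are illustrative only.

        import numpy as np

        def tchebycheff(f, lam, z_star):
            """g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|.

            `f` holds the three objectives of a schedule (makespan, total
            workload, critical workload); minimizing g over schedules solves
            one single-objective subproblem of the decomposition."""
            return np.max(np.asarray(lam) *
                          np.abs(np.asarray(f) - np.asarray(z_star)))

        print(tchebycheff(f=[120.0, 300.0, 95.0],
                          lam=[0.5, 0.2, 0.3],
                          z_star=[100.0, 280.0, 90.0]))  # -> 10.0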

  17. Frequency-Dependent FDTD Algorithm Using Newmark’s Method

    Directory of Open Access Journals (Sweden)

    Bing Wei

    2014-01-01

    Full Text Available According to the characteristics of the frequency-domain polarizability of three common models of dispersive media, the relation between the polarization vector and the electric field intensity is converted into a second-order time-domain differential equation in the polarization vector, using the conversion from the frequency to the time domain. The Newmark βγ difference method is employed to solve this equation. The recursion from electric field intensity to polarization is derived, and the recursion from electric flux to electric field intensity is obtained from the constitutive relation. The FDTD time-domain iteration of the electric and magnetic field components in the dispersive medium is then completed. By analyzing the stability of the solution of the above differential equation under the central difference method, it is shown that the present method has more latitude in the selection of the time step. Theoretical analyses and numerical results demonstrate that this method is a general algorithm with higher accuracy and stability than algorithms based on the central difference method.
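
    As one concrete instance (an illustrative assumption; the abstract does not name the three dispersion models), a Lorentz medium with susceptibility chi(omega) = omega_p^2 / (omega_0^2 + 2j*delta*omega - omega^2) turns, under the substitution j*omega -> d/dt, into exactly the kind of second-order time-domain equation that Newmark's method then discretizes:

        \frac{\partial^2 \mathbf{P}}{\partial t^2}
          + 2\delta\,\frac{\partial \mathbf{P}}{\partial t}
          + \omega_0^2\,\mathbf{P}
          = \varepsilon_0\,\omega_p^2\,\mathbf{E},
        \qquad
        \mathbf{D} = \varepsilon_0\varepsilon_\infty\,\mathbf{E} + \mathbf{P}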

  18. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.

  19. A NEW IMAGE RETRIEVAL ALGORITHM BASED ON VECTOR QUANTIZATION

    Institute of Scientific and Technical Information of China (English)

    冀鑫; 冀小平

    2016-01-01

    We propose a new colour feature extraction algorithm to address the shortcomings of current colour-based image retrieval algorithms in colour feature extraction. The algorithm first uses the LBG algorithm to vector-quantize the colour information in HSI space, and then counts the frequency of each codeword in the image to form a colour histogram, so that the distortion of the original image features introduced during colour feature extraction is kept as low as possible. Meanwhile, by setting a threshold value and comparing recall and precision over repeated experiments, a satisfactory threshold is found, making the retrieval algorithm more complete. Experimental results show that the algorithm can effectively improve the accuracy of image retrieval.
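
    The codebook-plus-histogram step is essentially generalized Lloyd (LBG-style) clustering followed by a frequency count; a minimal sketch using k-means as a stand-in for LBG, with an assumed codebook size of 64 (not a value from the paper):

        import numpy as np
        from sklearn.cluster import KMeans

        def colour_histogram(hsi_pixels, codebook_size=64, seed=0):
            """Quantize HSI colour vectors with a k-means (LBG-style) codebook
            and return the normalized codeword frequencies as the feature."""
            km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed)
            codes = km.fit_predict(hsi_pixels)        # nearest codeword per pixel
            hist = np.bincount(codes, minlength=codebook_size).astype(float)
            return hist / hist.sum()                  # colour histogram feature

        pixels = np.random.rand(10000, 3)             # stand-in for HSI pixels
        print(colour_histogram(pixels)[:8])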

  20. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    Science.gov (United States)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphological black top hat; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.

  1. Detection of cracks in shafts with the Approximated Entropy algorithm

    Science.gov (United States)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by fracture mechanics. The vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
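
    For orientation, the standard Approximate Entropy computation, written with the two intrinsic parameters named above, can be sketched as follows (a textbook implementation, not the authors' code):

        import numpy as np

        def approximate_entropy(u, p=2, f=0.2):
            """ApEn of a 1D signal: p is the number of points per sample vector
            and f the fraction of the standard deviation defining the tolerance
            r (the minimum distance between two sample vectors)."""
            u = np.asarray(u, dtype=float)
            r = f * u.std()

            def phi(m):
                n = len(u) - m + 1
                vecs = np.array([u[i:i + m] for i in range(n)])
                # Chebyshev distance between all pairs of sample vectors.
                dist = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
                return np.log((dist <= r).sum(axis=1) / n).mean()

            return phi(p) - phi(p + 1)

        t = np.linspace(0.0, 10.0, 1000)
        signal = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
        print(approximate_entropy(signal, p=2, f=0.2))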

  2. Linearized vector radiative transfer model MCC++ for a spherical atmosphere

    International Nuclear Information System (INIS)

    Postylyakov, O.V.

    2004-01-01

    Application of radiative transfer models has shown that optical remote sensing requires extra characteristics of the radiance field in addition to the radiance intensity itself. Simulation of spectral measurements, analysis of retrieval errors and development of retrieval algorithms require derivatives of the radiance with respect to the atmospheric constituents under investigation. The presented vector spherical radiative transfer model MCC++ was linearized, which allows the calculation of derivatives of all elements of the Stokes vector with respect to the volume absorption coefficient simultaneously with the radiance calculation. The model MCC++ employs a Monte Carlo algorithm for radiative transfer simulation and takes into account aerosol and molecular scattering, gas and aerosol absorption, and Lambertian surface albedo. The model treats a spherically symmetrical atmosphere. The relation of the estimated derivatives to other forms of radiance derivatives, namely the weighting functions used in gas retrieval and the air mass factors used in DOAS retrieval algorithms, is obtained. Validation of the model against other radiative models is overviewed. The computing time of the intensity for the MCC++ model is comparable to that of radiative models treating the sphericity of the atmosphere approximately, and is significantly shorter than that of the full spherical models used in the comparisons. The simultaneous calculation of all derivatives (i.e. with respect to absorption in all model atmosphere layers) and the intensity is only 1.2-2 times longer than the calculation of the intensity alone.

  3. Integration of irradiation with cytoplasmic incompatibility to facilitate a lymphatic filariasis vector elimination approach

    Directory of Open Access Journals (Sweden)

    Dobson Stephen L

    2009-08-01

    Full Text Available Abstract Background Mass drug administration (MDA) is the emphasis of an ongoing global lymphatic filariasis (LF) elimination program by the World Health Organization, in which the entire 'at risk' human population is treated annually with anti-filarial drugs. However, there is evidence that the MDA strategy may not be equally appropriate in all areas of LF transmission, leading to calls for the augmentation of MDA with anti-vector interventions. One potential augmentative intervention is the elimination of vectors via repeated inundative releases of male mosquitoes made cytoplasmically incompatible via an infection with Wolbachia bacteria. However, with a reduction in the vector population size, there is the risk that an accidental female release would permit the establishment of the incompatible Wolbachia infection type, resulting in population replacement instead of population elimination. To avoid the release of fertile females, we propose the exposure of release individuals to low doses of radiation to sterilize any accidentally released females, reducing the risk of population replacement. Results Aedes polynesiensis pupae of differing ages were irradiated to determine a radiation dose that results in sterility but that does not affect the survival and competitiveness of males. Laboratory assays demonstrate that males irradiated at a female-sterilizing dosage of 40 Gy are equally competitive with un-irradiated males. No effect of irradiation on the ability of Wolbachia to affect egg hatch was observed. Conclusion An irradiation dose of 40 Gy is sufficient to cause female sterility, but has no observed negative effect on male fitness. The results support further development of this approach as a preventative measure against accidental population replacement.

  4. VECTOR TOMOGRAPHY FOR THE CORONAL MAGNETIC FIELD. II. HANLE EFFECT MEASUREMENTS

    International Nuclear Information System (INIS)

    Kramar, M.; Inhester, B.; Lin, H.; Davila, J.

    2013-01-01

    In this paper, we investigate the feasibility of saturated coronal Hanle effect vector tomography, i.e. the application of vector tomographic inversion techniques to reconstruct the three-dimensional magnetic field configuration of the solar corona using linear polarization measurements of coronal emission lines. We applied Hanle effect vector tomographic inversion to artificial data produced from analytical coronal magnetic field models with equatorial and meridional currents and global coronal magnetic field models constructed by extrapolation of real photospheric magnetic field measurements. We tested tomographic inversion with only Stokes Q, U, electron density, and temperature inputs to simulate observations over large limb distances where the Stokes I parameters are difficult to obtain with ground-based coronagraphs. We synthesized the coronal linear polarization maps by inputting realistic noise appropriate for ground-based observations over a period of two weeks into the inversion algorithm. We found that our Hanle effect vector tomographic inversion can partially recover the coronal field with a poloidal field configuration, but that it is insensitive to a corona with a toroidal field. This result demonstrates that Hanle effect vector tomography is an effective tool for studying the solar corona and that it is complementary to Zeeman effect vector tomography for the reconstruction of the coronal magnetic field.

  5. Monte Carlo simulation of Ising models by multispin coding on a vector computer

    Science.gov (United States)

    Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus

    1984-11-01

    Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.

  6. Novel method of finding extreme edges in a convex set of N-dimension vectors

    Science.gov (United States)

    Hu, Chia-Lun J.

    2001-11-01

    As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um mapped to Vm, m=1 to M}, where Um is an N-dimension analog (pattern) vector and Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Ymi, m=1 to M} (where Ymi = VmiUm and Vmi = +1 or -1 is the i-th bit of Vm; i = 1 to P, so there are P sets included here) is POSITIVELY, LINEARLY, INDEPENDENT or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of ND analog vectors.

  7. Design and implementation of predictive current control of three-phase PWM rectifier using space-vector modulation (SVM)

    International Nuclear Information System (INIS)

    Bouafia, Abdelouahab; Gaubert, Jean-Paul; Krim, Fateh

    2010-01-01

    This paper is concerned with the design and implementation of current control of a three-phase PWM rectifier based on a predictive control strategy. The proposed predictive current control technique operates with constant switching frequency, using space-vector modulation (SVM). The main goal of the designed current control scheme is to maintain the dc-bus voltage at the required level and to achieve unity power factor (UPF) operation of the converter. For this purpose, two predictive current control algorithms, in the sense of deadbeat control, are developed for directly controlling the input current vector of the converter in the stationary α-β and rotating d-q reference frames, respectively. For both predictive current control algorithms, at the beginning of each switching period, the rectifier average voltage vector required to cancel both tracking errors of the current vector components by the end of the switching period is computed and applied over the switching period by means of SVM. The main advantages of the proposed predictive current control are that there is no need for hysteresis comparators or PI controllers in the current control loops, and that the switching frequency is constant. Finally, the developed predictive current control algorithms were tested both in simulations and experimentally, and illustrative results are presented here. The results show excellent performance in steady and transient states, and verify the validity of the proposed predictive current control, which is compared to other control strategies.

  8. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  9. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  10. An algorithmic characterization of P-matricity

    OpenAIRE

    Ben Gharbia, Ibtihel; Gilbert, Jean Charles

    2013-01-01

    International audience; It is shown that a matrix M is a P-matrix if and only if, whatever the vector q, the Newton-min algorithm does not cycle between two points when it is used to solve the linear complementarity problem 0 ≤ x ⊥ (Mx+q) ≥ 0.

  11. Short-Term Wind Speed Forecasting Using the Data Processing Approach and the Support Vector Machine Model Optimized by the Improved Cuckoo Search Parameter Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Chen Wang

    2016-01-01

    Full Text Available Power systems could be at risk when a power-grid collapse accident occurs. As a clean and renewable resource, wind energy plays an increasingly vital role in reducing air pollution, and wind power generation has become an important way to produce electrical power. Accurate wind power and wind speed forecasting are therefore needed. In this research, a novel short-term wind speed forecasting portfolio is proposed using the following three procedures: (I) data preprocessing: apart from the regular normalization preprocessing, the data are preprocessed through empirical mode decomposition (EMD), which reduces the effect of noise on the wind speed data; (II) artificially intelligent parameter optimization: the unknown parameters in the support vector machine (SVM) model are optimized by the cuckoo search (CS) algorithm; (III) parameter optimization approach modification: an improved parameter optimization approach, called the SDCS model, based on the CS algorithm and the steepest descent (SD) method is proposed. The comparison results show that the simple and effective portfolio EMD-SDCS-SVM produces promising predictions and has better performance than the individual forecasting components, with very small root mean squared errors and mean absolute percentage errors.

  12. Influence of the velocity vector base relocation to the center of mass of the interrogation area on PIV accuracy

    Directory of Open Access Journals (Sweden)

    Kouba Jan

    2014-03-01

    Full Text Available This paper is aimed at modifying the calculation algorithm used in data processing for the PIV (Particle Image Velocimetry) method. The modification of the standard multi-step correlation algorithm is based on using the centre of mass of the interrogation area, instead of its geometrical centre, to define the initial point of the respective vector. This paper describes the principle of initial point-vector assignment and the corresponding data processing methodology, including a test track analysis. The accuracy of the two approaches is compared in the conclusion; the accuracy test is performed using synthetic and real data.
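
    The modification amounts to anchoring each displacement vector at the intensity-weighted centre of mass of its interrogation window rather than at the geometric centre; a minimal numpy sketch of that computation (the function name and window contents are illustrative assumptions):

        import numpy as np

        def vector_base(window):
            """Return the (row, col) base point of the velocity vector for one
            interrogation area: the intensity-weighted centre of mass of the
            window instead of its geometric centre."""
            window = np.asarray(window, dtype=float)
            total = window.sum()
            if total == 0.0:               # empty window: fall back to the centre
                return ((window.shape[0] - 1) / 2.0, (window.shape[1] - 1) / 2.0)
            rows, cols = np.indices(window.shape)
            return ((rows * window).sum() / total, (cols * window).sum() / total)

        w = np.zeros((32, 32))
        w[20:24, 6:10] = 1.0               # particle images in one corner
        print(vector_base(w))              # ~(21.5, 7.5) rather than (15.5, 15.5)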

  13. Vector Radix 2 × 2 Sliding Fast Fourier Transform

    Directory of Open Access Journals (Sweden)

    Keun-Yung Byun

    2016-01-01

    Full Text Available The two-dimensional (2D) discrete Fourier transform (DFT) in the sliding window scenario has been successfully used for numerous applications requiring consecutive spectrum analysis of input signals. However, the results of conventional sliding DFT algorithms are potentially unstable because of the accumulated numerical errors caused by the recursive strategy. In this letter, a stable 2D sliding fast Fourier transform (FFT) algorithm based on the vector radix (VR) 2 × 2 FFT is presented. In the VR-2 × 2 FFT algorithm, each 2D DFT bin is hierarchically decomposed into four sub-DFT bins until the size of the sub-DFT bins is reduced to 2 × 2; the output DFT bins are calculated using linear combinations of the sub-DFT bins. Because the sub-DFT bins for the input signals overlapping between the previous and current windows are the same, the proposed algorithm reduces the computational complexity of the VR-2 × 2 FFT algorithm by reusing previously calculated sub-DFT bins in the sliding window scenario. Moreover, because the resultant DFT bins are identical to those of the VR-2 × 2 FFT algorithm, numerical errors do not arise; therefore, unconditional stability is guaranteed. Theoretical analysis shows that the proposed algorithm has the lowest computational requirements among the existing stable sliding DFT algorithms.

  14. Accuracy Analysis of Lunar Lander Terminal Guidance Algorithm

    Directory of Open Access Journals (Sweden)

    E. K. Li

    2017-01-01

    Full Text Available This article studies a proposed analytical algorithm of terminal guidance for a lunar lander. The analytical solution, which forms the basis of the algorithm, was obtained for a constant-acceleration trajectory and thrust vector orientation programs that are essentially linear in time. The main feature of the proposed algorithm is a completely analytical solution that provides the lander terminal guidance to the desired spot in 3D space when landing on an atmosphereless body, with no numerical procedures. To meet the 6 terminal conditions (the components of the position and velocity vectors at the final time), 6 guidance law parameters are used, namely the time-to-go, the desired value of braking deceleration, the initial values of the pitch and yaw angles, and their rates of change. In accordance with the principle of flexible trajectories, this algorithm assumes the implementation of a regularly updated control program that ensures reaching the terminal conditions from the current state corresponding to the control program update time. The guidance law parameters, which ensure that the terminal conditions are reached, are generated as a function of the current phase coordinates of the lander. The article examines the accuracy and reliability of the proposed analytical algorithm through mathematical modeling of the lander guidance from the circumlunar pre-landing orbit to the desired spot near the lunar surface. The desired terminal position of the lunar lander is specified by the selenographic latitude, longitude and altitude above the lunar surface. The impact of variations in orbital parameters on the terminal guidance accuracy has been studied. By varying the five initial orbit parameters (obliquity, ascending node longitude, argument of periapsis, periapsis height, apoapsis height) when the terminal spot is fixed, the statistical characteristics of the terminal guidance algorithm error according to the terminal

  15. Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.

    Science.gov (United States)

    Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio

    2016-02-29

    Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.

  16. An algorithm for constructing Lyapunov functions

    Directory of Open Access Journals (Sweden)

    Sigurdur Freyr Hafstein

    2007-08-01

    Full Text Available In this monograph we develop an algorithm for constructing Lyapunov functions for arbitrary switched dynamical systems $\dot{\mathbf{x}} = \mathbf{f}_\sigma(t,\mathbf{x})$, possessing a uniformly asymptotically stable equilibrium. Let $\dot{\mathbf{x}} = \mathbf{f}_p(t,\mathbf{x})$, $p \in \mathcal{P}$, be the collection of the ODEs to which the switched system corresponds. The number of the vector fields $\mathbf{f}_p$ on the right-hand side of the differential equation is assumed to be finite, and we assume that their components $f_{p,i}$ are $\mathcal{C}^2$ functions and that we can give some bounds, not necessarily close, on their second-order partial derivatives. The inputs of the algorithm are solely a finite number of the function values of the vector fields $\mathbf{f}_p$ and these bounds. The domain of the Lyapunov function constructed by the algorithm is only limited by the size of the equilibrium's region of attraction. Note that the concept of a Lyapunov function for the arbitrary switched system $\dot{\mathbf{x}} = \mathbf{f}_\sigma(t,\mathbf{x})$ is equivalent to the concept of a common Lyapunov function for the systems $\dot{\mathbf{x}} = \mathbf{f}_p(t,\mathbf{x})$, $p \in \mathcal{P}$, and that if $\mathcal{P}$ contains exactly one element, then the switched system is just a usual ODE $\dot{\mathbf{x}} = \mathbf{f}(t,\mathbf{x})$. We give numerous examples of Lyapunov functions constructed by our method at the end of this monograph.

  17. Consistences for introducing more vector potentials in the same group, by BRST algorithm

    International Nuclear Information System (INIS)

    Doria, R.; Carvalho, F.A.R. de

    1989-01-01

    The BRST formalism for the quantum formulation of gauge theories is analysed and applied to extended models. The quantum effective gauge Lagrangian, invariant under s and s̄, is established for a system with vector potentials belonging to a single Abelian gauge group. The BRST charge associated with the system is calculated. (M.C.K.)

  18. Support vector regression model based predictive control of water level of U-tube steam generators

    Energy Technology Data Exchange (ETDEWEB)

    Kavaklioglu, Kadir, E-mail: kadir.kavaklioglu@pau.edu.tr

    2014-10-15

    Highlights: • Water level of U-tube steam generators was controlled in a model predictive fashion. • Models for steam generator water level were built using support vector regression. • Cost function minimization for future optimal controls was performed by using the steepest descent method. • The results indicated the feasibility of the proposed method. - Abstract: A predictive control algorithm using support vector regression based models was proposed for controlling the water level of U-tube steam generators of pressurized water reactors. Steam generator data were obtained using a transfer function model of U-tube steam generators. Support vector regression based models were built using a time-series type model structure for five different operating powers. Feedwater flow controls were calculated by minimizing a cost function that includes the level error, the feedwater change and the mismatch between feedwater and steam flow rates. The proposed algorithm was applied to a scenario consisting of a level setpoint change and a steam flow disturbance. The results showed that the steam generator level can be controlled effectively at all powers by the proposed method.
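
    A hedged sketch of the described model structure, i.e. support vector regression trained on lagged water-level samples to predict the next sample; the lag depth, kernel settings, and synthetic data are placeholders, not the paper's choices:

        import numpy as np
        from sklearn.svm import SVR

        def make_lagged(series, lags=3):
            """Time-series regression set: predict the next water level from
            the previous `lags` samples."""
            X = np.array([series[i - lags:i] for i in range(lags, len(series))])
            return X, series[lags:]

        level = np.sin(np.linspace(0, 20, 400)) + 0.01 * np.random.randn(400)
        X, y = make_lagged(level, lags=3)
        model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:300], y[:300])
        rmse = np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2))
        print("held-out RMSE:", rmse)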

  19. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    OpenAIRE

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic...

  20. TEACHING ALGORITHMIZATION AND PROGRAMMING USING PYTHON LANGUAGE

    Directory of Open Access Journals (Sweden)

    M. Lvov

    2014-07-01

    Full Text Available The article describes the requirements for educational programming languages and considers the use of Python as a first programming language. The issues of introducing this programming language into teaching and of replacing Pascal with Python are examined, and the advantages of such an approach are considered. Popular programming languages are compared from the point of view of their convenience for teaching algorithmization and programming. Python supports many programming paradigms: structural, object-oriented, functional, imperative and aspect-oriented, and learning can be started without any preparation. There is one more advantage of the language: all algorithms are written easily and structurally in Python. Therefore, due to all of the above, Python can be regarded as a worthy replacement for the educational programming language Pascal, both at school and in the first years of higher education.

  1. Alignment of Custom Standards by Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Adela Sirbu

    2010-09-01

    Full Text Available Building an efficient model for automatic alignment of terminologies would bring a significant improvement to the information retrieval process. We have developed and compared two machine learning based algorithms whose aim is to align two custom standards built on a three-level taxonomy, using kNN and SVM classifiers that work on a vector representation consisting of several similarity measures. The weights utilized by the kNN classifier were optimized with an evolutionary algorithm, while the SVM classifier's hyper-parameters were optimized with a grid search algorithm. The training database was obtained semi-automatically using the Coma++ tool. The performance of our aligners is shown by the results obtained on the test set.

  2. A Cooperative Harmony Search Algorithm for Function Optimization

    Directory of Open Access Journals (Sweden)

    Gang Li

    2014-01-01

    Full Text Available The harmony search algorithm (HS) is a metaheuristic algorithm inspired by the process of musical improvisation. HS is a stochastic optimization technique similar to genetic algorithms (GAs) and particle swarm optimizers (PSOs). It has been widely applied to solve many complex optimization problems, both continuous and discrete, such as structure design and function optimization. A cooperative harmony search algorithm (CHS) is developed in this paper, with cooperative behavior being employed as a significant improvement to the performance of the original algorithm. Standard HS uses just one harmony memory, and all the variables of the objective function are improvised within that harmony memory, while the proposed CHS algorithm uses multiple harmony memories, so that each harmony memory can optimize different components of the solution vector. The CHS was then applied to function optimization problems. The results of the experiment show that CHS is capable of finding better solutions than HS and a number of other algorithms, especially on high-dimensional problems.

  3. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.

  4. Representation and display of vector field topology in fluid flow data sets

    Science.gov (United States)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
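
    The first step the paragraph describes, locating zeros of the velocity field and classifying them, reduces at each critical point to inspecting the eigenvalues of the velocity-gradient (Jacobian) matrix; a simplified 2D sketch of that classification (not the authors' implementation):

        import numpy as np

        def classify_critical_point(jacobian):
            """Classify a 2D critical point (where the velocity vanishes) by
            the eigenvalues of the velocity-gradient (Jacobian) matrix."""
            eig = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
            re = eig.real
            if np.all(np.abs(re) < 1e-12):
                return "center"                  # purely imaginary eigenvalues
            if re[0] * re[1] < 0:
                return "saddle"                  # real parts of opposite sign
            kind = "repelling node" if re.min() > 0 else "attracting node"
            spiral = np.any(np.abs(eig.imag) > 1e-12)
            return ("spiral " + kind) if spiral else kind

        print(classify_critical_point([[1.0, 0.0], [0.0, -1.0]]))    # saddle
        print(classify_critical_point([[-0.2, 1.0], [-1.0, -0.2]]))  # spiral attracting node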

  5. The aspect of vector control using the asynchronous traction motor in locomotives

    Directory of Open Access Journals (Sweden)

    L. Liudvinavičius

    2009-12-01

    Full Text Available The article examines the characteristic curves used to control asynchronous traction motors, which are increasingly used in locomotive electric drives. The main task is to create the tractive effort-speed curve of an ideal locomotive, Fk = f(v), including a hyperbolic area, such that the energy created by the diesel engine of diesel locomotives (or, in the case of electric locomotives and electric trains, the electricity taken from the contact network) is turned into efficient work over the entire range of locomotive speed. The mechanical power on the wheel sets is constant, Pk = Fk·v = const, and the power of the diesel engine is fully used over the entire speed range. The tractive effort-speed curve Fk(v) shows the dependency of the locomotive traction force Fk on the movement speed v. The article presents theoretical and practical aspects relevant to creating the structure of a locomotive electric drive and selecting optimal control, which is especially relevant for drives using an asynchronous traction motor (ATM); ATMs are gaining popularity in traction rolling stock, replacing DC traction motors of low reliability. The frequency modes of asynchronous motor speed regulation are examined. To control the ATM, the authors suggest the method of vector control, presenting the structural schemes of a locomotive with an ATM and the control algorithm.

  6. ALGORITHMIZATION OF PROBLEMS FOR OPTIMAL LOCATION OF TRANSFORMERS IN SUBSTATIONS OF DISTRIBUTED NETWORKS

    Directory of Open Access Journals (Sweden)

    M. I. Fursanov

    2014-01-01

    Full Text Available This article reflects the algorithmization of search methods for the effective replacement of consumer transformers in distributed electrical networks. Like any electrical equipment in power systems, power transformers have a limited service life, determined by the natural degradation of materials and also by unexpected wear under various conditions of overload and overvoltage. According to the standards adopted in the Republic of Belarus, the rated service life of power transformers is 25 years. But there can be situations in which it is economically efficient to replace a transformer before that time. The possibility of such replacement is considered with the aim of increasing the operating efficiency of an electrical network subject to physical wear and aging. The article discusses the shortcomings of earlier mathematical models of transformer replacement, in which transformers removed from service were not reused; in practice, a transformer replaced at one substation can be successfully used at other substations, especially when financial resources are limited and the replacement needs a more detailed technical and economic basis. During the research, the authors developed an efficient algorithm for determining the optimal location of transformers at substations of distributed electrical networks, based on a search for the best solution among all sets of displacements in an oriented graph. The suggested algorithm considerably reduces the design time for the optimal placement of transformers by using a set of simplifications. The result of the algorithm's work is a sequence of transformer displacements in the network, which yields a large economic effect in comparison with the replacement of a single transformer.

  7. BONDI-97 A novel neutron energy spectrum unfolding tool using a genetic algorithm

    CERN Document Server

    Mukherjee, B

    1999-01-01

    The neutron spectrum unfolding procedure using the count rate data obtained from a set of Bonner sphere neutron detectors requires the solution of the Fredholm integral equation of the first kind by complex mathematical methods. This paper reports a new approach for the unfolding of neutron spectra using the Genetic Algorithm tool BONDI-97 (BOnner sphere Neutron DIfferentiation). The BONDI-97 was used as the input for the Genetic Algorithm engine EVOLVER to search for a globally optimised solution vector from a population of randomly generated solutions. This solution vector corresponds to the unfolded neutron energy spectrum. The Genetic Algorithm engine emulates the Darwinian 'Survival of the Fittest' strategy, the key ingredient of the 'Theory of Evolution'. The spectra of 241Am/Be (alpha,n) and 239Pu/Be (alpha,n) neutron sources were unfolded using the BONDI-97 tool. (author)

  8. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  9. Traditional vectors as an introduction to geometric algebra

    International Nuclear Information System (INIS)

    Carroll, J E

    2003-01-01

    The 2002 Oersted Medal Lecture by David Hestenes concerns the many advantages for education in physics if geometric algebra were to replace standard vector algebra. However, such a change presents difficulties for those who have been taught traditionally. A new way of introducing geometric algebra is presented here using a four-element array composed of traditional vector and scalar products. This leads to an explicit 4 x 4 matrix representation which contains the key requirements for three-dimensional geometric algebra. The work can be extended to include Maxwell's equations, where it is found that curl and divergence appear naturally together. However, to obtain an explicit representation of space-time algebra with the correct behaviour under Lorentz transformations, an 8 x 8 matrix representation has to be formed. This leads to a Dirac representation of Maxwell's equations, showing that space-time algebra has hidden within its formalism the symmetry of 'parity, charge conjugation and time reversal'.

  10. Emerging Vector-Borne Diseases - Incidence through Vectors.

    Science.gov (United States)

    Savić, Sara; Vidić, Branka; Grgić, Zivoslav; Potkonjak, Aleksandar; Spasojevic, Ljubica

    2014-01-01

    Vector-borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays, intercontinental countries struggle with emerging diseases that have found their way in through vectors. Vector-borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens, and a susceptible human population exist at the same time, in the same place. Global climate change is predicted to lead to an increase in vector-borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for the diseases that are most likely to emerge is required. Canine vector-borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, ehrlichiosis, and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs, and some have zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role, primarily in the prevention and then in the treatment of vector-borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a 4-year period, from 2009 to 2013, a total number of 551 dog samples were analyzed for vector-borne diseases (borreliosis, babesiosis, ehrlichiosis, anaplasmosis, dirofilariosis, and leishmaniasis) in routine laboratory work. The analysis was done by serological tests - ELISA for borreliosis, dirofilariosis, and leishmaniasis, the modified Knott test for dirofilariosis, and blood smears for babesiosis, ehrlichiosis, and anaplasmosis. This number of samples represented 75% of the total number of samples sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  11. Overcoming artificial spatial correlations in simulations of superstructure domain growth with parallel Monte Carlo algorithms

    International Nuclear Information System (INIS)

    Schleier, W.; Besold, G.; Heinz, K.

    1992-01-01

    The authors study the applicability of parallelized/vectorized Monte Carlo (MC) algorithms to the simulation of domain growth in two-dimensional lattice gas models undergoing an ordering process after a rapid quench below an order-disorder transition temperature. As examples they consider models with 2 x 1 and c(2 x 2) equilibrium superstructures on the square and rectangular lattices, respectively. They also study the case of phase separation ('1 x 1' islands) on the square lattice. A generalized parallel checkerboard algorithm for Kawasaki dynamics is shown to give rise to artificial spatial correlations in all three models. However, only if superstructure domains evolve do these correlations modify the kinetics by influencing the nucleation process and result in a reduced growth exponent compared to the value from the conventional heat bath algorithm with random single-site updates. In order to overcome these artificial modifications, two MC algorithms with a reduced degree of parallelism ('hybrid' and 'mask' algorithms, respectively) are presented and applied. As the results indicate, these algorithms are suitable for the simulation of superstructure domain growth on parallel/vector computers. 60 refs., 10 figs., 1 tab

  12. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    Science.gov (United States)

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  13. Application of XGBoost algorithm in hourly PM2.5 concentration prediction

    Science.gov (United States)

    Pan, Bingyue

    2018-02-01

    In view of existing techniques for predicting hourly PM2.5 concentration in China, this paper applies the XGBoost (Extreme Gradient Boosting) algorithm to predict hourly PM2.5 concentration. Air quality monitoring data from the city of Tianjin were analyzed using the XGBoost algorithm. The prediction performance of the XGBoost method is evaluated by comparing observed and predicted PM2.5 concentrations using three measures of forecast accuracy. The XGBoost method is also compared with the random forest algorithm, multiple linear regression, decision tree regression and support vector machine regression models. The results demonstrate that the XGBoost algorithm outperforms these other data mining methods.
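
    A minimal sketch of such a gradient-boosting regressor using the xgboost scikit-learn wrapper; the synthetic features and hyperparameters stand in for the Tianjin data and the paper's tuned settings:

        import numpy as np
        from xgboost import XGBRegressor

        rng = np.random.default_rng(7)
        X = rng.normal(size=(2000, 8))   # stand-in for hourly predictor features
        y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)

        model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
        model.fit(X[:1500], y[:1500])    # train on the first 1500 hours
        pred = model.predict(X[1500:])   # forecast the held-out hours
        print("RMSE:", np.sqrt(np.mean((pred - y[1500:]) ** 2)))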

  14. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  15. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    Science.gov (United States)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can to a certain extent alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which contains a driving pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the condition recognition results. Simulation experiments were carried out after the model's validity was verified using a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.

  16. Implementing the conjugate gradient algorithm on multi-core systems

    NARCIS (Netherlands)

    Wiggers, W.A.; Bakker, Vincent; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria; Nurmi, J.; Takala, J.; Vainio, O.

    2007-01-01

    In linear solvers, like the conjugate gradient algorithm, sparse matrix-vector multiplication is an important kernel. Due to the sparseness of the matrices, the solver runs relatively slowly. For digital optical tomography (DOT), a large set of linear equations has to be solved, which currently takes

  17. A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs

    Directory of Open Access Journals (Sweden)

    Guixia He

    2016-01-01

    Full Text Available Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR outperforms CSR-scalar, CSR-vector, and the CSRMV and HYBMV kernels in the vendor-tuned CUSPARSE library, and is comparable with a recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR on a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, whether or not inter-GPU communication is considered, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.
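
    For readers unfamiliar with the CSR layout discussed here, the following is a plain serial sketch of a CSR matrix-vector product (the CPU analogue of the CSR-scalar kernel, with none of the GPU coalescing machinery):

        import numpy as np

        def csr_spmv(row_ptr, col_idx, val, x):
            # y = A x with A stored in CSR: row i owns val[row_ptr[i]:row_ptr[i+1]]
            y = np.zeros(len(row_ptr) - 1)
            for i in range(len(y)):
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    y[i] += val[k] * x[col_idx[k]]
            return y

        # the 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form
        row_ptr = [0, 2, 3]
        col_idx = [0, 2, 1]
        val = [1.0, 2.0, 3.0]
        print(csr_spmv(row_ptr, col_idx, val, np.array([1.0, 1.0, 1.0])))  # [3. 3.]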

  18. Parallelization of a spherical Sn transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical S N transport theory algorithm and studies its performance by testing different sample problems. The S N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of the S N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins)

  19. Vector-Tensor and Vector-Vector Decay Amplitude Analysis of B0→φK*0

    International Nuclear Information System (INIS)

    Aubert, B.; Bona, M.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P.; Poireau, V.; Tisserand, V.; Zghiche, A.; Grauges, E.; Palano, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.

    2007-01-01

    We perform an amplitude analysis of the decays B⁰ → φK₂*(1430)⁰, φK*(892)⁰, and φ(Kπ)⁰ (S-wave) with a sample of about 384×10⁶ BB̄ pairs recorded with the BABAR detector. The fractions of longitudinal polarization f_L of the vector-tensor and vector-vector decay modes are measured to be 0.853 +0.061/−0.069 ± 0.036 and 0.506 ± 0.040 ± 0.015, respectively. Overall, twelve parameters are measured for the vector-vector decay and seven parameters for the vector-tensor decay, including the branching fractions and parameters sensitive to CP violation.

  20. CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

    Science.gov (United States)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-04-01

    CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs, and it uses a Support Vector Machine (SVM) to make its predictions, which can be obtained within minutes of providing the necessary input parameters of a CME.

  1. Vectorization of a Monte Carlo simulation scheme for nonequilibrium gas dynamics

    Science.gov (United States)

    Boyd, Iain D.

    1991-01-01

    Significant improvement has been obtained in the numerical performance of a Monte Carlo scheme for the analysis of nonequilibrium gas dynamics through an implementation of the algorithm which takes advantage of vector hardware, as presently demonstrated through application to three different problems. These are (1) a 1D standing-shock wave; (2) the flow of an expanding gas through an axisymmetric nozzle; and (3) the hypersonic flow of Ar gas over a 3D wedge. Problem (3) is illustrative of the greatly increased number of molecules which the simulation may involve, thanks to improved algorithm performance.

  2. Discrete Spin Vector Approach for Monte Carlo-based Magnetic Nanoparticle Simulations

    Science.gov (United States)

    Senkov, Alexander; Peralta, Juan; Sahay, Rahul

    The study of magnetic nanoparticles has gained significant popularity due to the potential uses in many fields such as modern medicine, electronics, and engineering. To study the magnetic behavior of these particles in depth, it is important to be able to model and simulate their magnetic properties efficiently. Here we utilize the Metropolis-Hastings algorithm with a discrete spin vector model (in contrast to the standard continuous model) to model the magnetic hysteresis of a set of protected pure iron nanoparticles. We compare our simulations with the experimental hysteresis curves and discuss the efficiency of our algorithm.
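
    A rough sketch of a Metropolis-Hastings sweep over such a discrete spin vector model is given below; the six axis-aligned spin directions, the field-only (Zeeman) energy, and all parameter values are assumptions for illustration, and exchange interactions are omitted.

        import numpy as np

        # six allowed spin directions (+/- each axis) -- an assumed discretization
        DIRECTIONS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                               [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

        def metropolis_sweep(spins, field, beta, rng):
            # one Metropolis-Hastings sweep; only the Zeeman term E = -field . s_i
            # is included, so exchange interactions are deliberately omitted
            for i in rng.permutation(len(spins)):
                trial = DIRECTIONS[rng.integers(len(DIRECTIONS))]
                dE = -np.dot(field, trial - spins[i])
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i] = trial
            return spins

        rng = np.random.default_rng(2)
        spins = DIRECTIONS[rng.integers(6, size=1000)].copy()
        for h in np.linspace(-1.0, 1.0, 5):        # crude applied-field sweep
            spins = metropolis_sweep(spins, np.array([0.0, 0.0, h]), beta=2.0, rng=rng)
            print(h, spins[:, 2].mean())           # magnetization along z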

  3. New accountant job market reform by computer algorithm: an experimental study

    Directory of Open Access Journals (Sweden)

    Hirose Yoshitaka

    2017-01-01

    Full Text Available The purpose of this study is to examine the matching of new accountants with accounting firms in Japan. A notable feature of the present study is that it brings a computer algorithm to the job-hiring task. Job recruitment activities for new accountants in Japan are one-time, short-term struggles. Accordingly, many have searched for new rules to replace the current ones of the process. Job recruitment activities for new accountants in Japan change every year. This study proposes modifying these job recruitment activities by combining computer and human efforts. Furthermore, the study formulates the job recruitment activities by using a model and conducting experiments. As a result, the Deferred Acceptance (DA) algorithm achieves a high truth-telling percentage, a high stable-matching percentage, and greater efficiency compared with the previous approach. This suggests the potential of the Deferred Acceptance algorithm as a replacement for current approaches. In terms of truth-telling percentage and stability, the DA algorithm is superior to the current methods and should be adopted.
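
    The sketch below shows a minimal applicant-proposing Deferred Acceptance routine; the one-seat-per-firm capacity, complete preference lists, and toy data are simplifying assumptions, not the study's experimental design.

        def deferred_acceptance(applicant_prefs, firm_prefs):
            # applicant-proposing DA; every list is complete and each firm has one seat
            rank = {f: {a: r for r, a in enumerate(p)} for f, p in firm_prefs.items()}
            next_choice = {a: 0 for a in applicant_prefs}
            free = list(applicant_prefs)
            match = {}                                   # firm -> applicant
            while free:
                a = free.pop()
                f = applicant_prefs[a][next_choice[a]]   # best firm not yet tried
                next_choice[a] += 1
                if f not in match:
                    match[f] = a                         # tentative acceptance
                elif rank[f][a] < rank[f][match[f]]:
                    free.append(match[f])                # firm upgrades; old holder freed
                    match[f] = a
                else:
                    free.append(a)                       # rejected; will propose again
            return match

        applicants = {"a1": ["f1", "f2"], "a2": ["f1", "f2"]}
        firms = {"f1": ["a2", "a1"], "f2": ["a1", "a2"]}
        print(deferred_acceptance(applicants, firms))    # {'f1': 'a2', 'f2': 'a1'}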

  4. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in "Defining a vector" to his final comments on "What then is a vector?" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p

  5. Algorithmic Approach to Abstracting Linear Systems by Timed Automata

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2011-01-01

    This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets, generated by Lyapunov functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining...

  6. Prediction of protein binding sites using physical and chemical descriptors and the support vector machine regression method

    International Nuclear Information System (INIS)

    Sun Zhong-Hua; Jiang Fan

    2010-01-01

    In this paper a new continuous variable called core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary description of the interface residue using 0 and 1. We can therefore use the support vector machine regression method to fit the core-ratio value and predict the protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites; combined with an averaging procedure, the new descriptors are more effective. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)
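
    A minimal sketch of the regression step follows; the descriptor matrix, core-ratio targets, cutoff, and SVR hyperparameters are placeholders for the paper's actual descriptors and settings.

        import numpy as np
        from sklearn.svm import SVR

        # placeholder descriptors standing in for the paper's physical/chemical ones
        rng = np.random.default_rng(6)
        X = rng.random((500, 10))               # 500 residues x 10 descriptors
        core_ratio = rng.random(500)            # continuous target in [0, 1]

        model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, core_ratio)
        pred = model.predict(X[:5])
        binding = pred > 0.5                    # assumed cutoff for calling a site
        print(pred, binding)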

  7. Chaos control of ferroresonance system based on RBF-maximum entropy clustering algorithm

    International Nuclear Information System (INIS)

    Liu Fan; Sun Caixin; Sima Wenxia; Liao Ruijin; Guo Fei

    2006-01-01

    With regard to the ferroresonance overvoltage of neutral grounded power systems, a maximum-entropy learning algorithm based on radial basis function neural networks is used to control the chaotic system. The algorithm optimizes the objective function to derive the learning rule for the central vectors, and uses the clustering function of the network hidden layers. It improves the regression and learning ability of the neural networks. A numerical experiment on the ferroresonance system testifies to the effectiveness and feasibility of using the algorithm to control chaos in neutral grounded systems.

  8. An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium

    Science.gov (United States)

    Palmer, Grant

    1987-01-01

    An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The algorithm has shown promising accuracy, stability, and versatility.

  9. Lyapunov, singular and bred vectors in a multi-scale system: an empirical exploration of vectors related to instabilities

    International Nuclear Information System (INIS)

    Norwood, Adrienne; Kalnay, Eugenia; Ide, Kayo; Yang, Shu-Chih; Wolfe, Christopher

    2013-01-01

    We compute and compare the three types of vectors frequently used to explore the instability properties of dynamical models, namely Lyapunov vectors (LVs), singular vectors (SVs) and bred vectors (BVs) in two systems, using the Wolfe–Samelson (2007 Tellus A 59 355–66) algorithm to compute all of the Lyapunov vectors. The first system is the Lorenz (1963 J. Atmos. Sci. 20 130–41) three-variable model. Although the leading Lyapunov vector, LV1, grows fastest globally, the second Lyapunov vector, LV2, which has zero growth globally, often grows faster than LV1 locally. Whenever this happens, BVs grow closer to LV2, suggesting that in larger atmospheric or oceanic models where several instabilities can grow in different areas of the world, BVs will grow toward the fastest growing local unstable mode. A comparison of their growth rates at different times shows that all three types of dynamical vectors have the ability to predict regime changes and the duration of the new regime based on their growth rates in the last orbit of the old regime, as shown for BVs by Evans et al (2004 Bull. Am. Meteorol. Soc. 520–4). LV1 and BVs have similar predictive skill, LV2 has a tendency to produce false alarms, and even LV3 shows that maximum decay is also associated with regime change. Initial and final SVs grow much faster and are the most accurate predictors of regime change, although the characteristics of the initial SVs are strongly dependent on the length of the optimization window. The second system is the toy ‘ocean-atmosphere’ model developed by Peña and Kalnay (2004 Nonlinear Process. Geophys. 11 319–27) coupling three Lorenz (1963 J. Atmos. Sci. 20 130–41) systems with different time scales, in order to test the effects of fast and slow modes of growth on the dynamical vectors. A fast ‘extratropical atmosphere’ is weakly coupled to a fast ‘tropical atmosphere’ which is, in turn, strongly coupled to a slow ‘ocean’ system, the latter coupling

  10. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  11. A vectorization of the Jameson-Caughey NYU transonic swept-wing computer program FLO-22-V1 for the STAR-100 computer

    Science.gov (United States)

    Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.

    1978-01-01

    The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.

  12. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and what compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package that would be applicable to other software packages with similar data compression needs.

  13. pEPito: a significantly improved non-viral episomal expression vector for mammalian cells

    Directory of Open Access Journals (Sweden)

    Ogris Manfred

    2010-03-01

    Full Text Available Abstract Background The episomal replication of the prototype vector pEPI-1 depends on a transcription unit starting from the constitutively expressed Cytomegalovirus immediate early promoter (CMV-IEP) and directed into a 2000 bp long matrix attachment region sequence (MARS) derived from the human β-interferon gene. The original pEPI-1 vector contains two mammalian transcription units and a total of 305 CpG islands, which are located predominantly within the vector elements necessary for bacterial propagation and known to be counterproductive for persistent long-term transgene expression. Results Here, we report the development of a novel vector pEPito, which is derived from the pEPI-1 plasmid replicon but has considerably improved efficacy both in vitro and in vivo. The pEPito vector is significantly reduced in size, contains only one transcription unit and 60% fewer CpG motifs in comparison to pEPI-1. It exhibits major advantages compared to the original pEPI-1 plasmid, including higher transgene expression levels and increased colony-forming efficiencies in vitro, as well as more persistent transgene expression profiles in vivo. The performance of pEPito-based vectors was further improved by replacing the CMV-IEP with the human CMV enhancer/human elongation factor 1 alpha promoter (hCMV/EF1P) element that is known to be less affected by epigenetic silencing events. Conclusions The novel vector pEPito can be considered suitable as an improved vector for biotechnological applications in vitro and for non-viral gene delivery in vivo.

  14. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen's Reading Images (2006), plays a crucial role in distinguishing between 'narrative', action-oriented processes and 'conceptual', state-oriented processes. The use of this concept in image analysis has usually focused... should be taken into account in discussing 'reactions', which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim's account of vectors, these issues are outlined...

  15. Classification of Laser Induced Fluorescence Spectra from Normal and Malignant bladder tissues using Learning Vector Quantization Neural Network in Bladder Cancer Diagnosis

    DEFF Research Database (Denmark)

    Karemore, Gopal Raghunath; Mascarenhas, Kim Komal; Patil, Choudhary

    2008-01-01

    In the present work we discuss the potential of recently developed classification algorithm, Learning Vector Quantization (LVQ), for the analysis of Laser Induced Fluorescence (LIF) Spectra, recorded from normal and malignant bladder tissue samples. The algorithm is prototype based and inherently...

  16. Fuzzy Sarsa with Focussed Replacing Eligibility Traces for Robust and Accurate Control

    Science.gov (United States)

    Kamdem, Sylvain; Ohki, Hidehiro; Sueda, Naomichi

    Several methods of reinforcement learning in continuous state and action spaces that utilize fuzzy logic have been proposed in recent years. This paper introduces Fuzzy Sarsa(λ), an on-policy algorithm for fuzzy learning that relies on a novel way of computing replacing eligibility traces to accelerate the policy evaluation. It is tested against several temporal difference learning algorithms: Sarsa(λ), Fuzzy Q(λ), an earlier fuzzy version of Sarsa, and an actor-critic algorithm. We perform detailed evaluations on two benchmark problems: a maze domain and the cart pole. Results of various tests highlight the strengths and weaknesses of these algorithms and show that Fuzzy Sarsa(λ) outperforms all other algorithms tested for a larger granularity of design and under noisy conditions. It is a highly competitive method of learning in realistic noisy domains where a denser fuzzy design over the state space is needed for more precise control.
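
    For reference, the following is the textbook tabular Sarsa(λ) update with replacing eligibility traces that the paper's fuzzy variant builds on; the step-size and trace parameters are illustrative, and the fuzzy formulation itself is not reproduced.

        import numpy as np

        def sarsa_lambda_step(Q, e, s, a, r, s2, a2, alpha=0.1, gamma=0.99, lam=0.9):
            # one tabular Sarsa(lambda) update with *replacing* eligibility traces
            delta = r + gamma * Q[s2, a2] - Q[s, a]   # TD error
            e *= gamma * lam                          # decay every trace
            e[s, a] = 1.0                             # replace (not accumulate) the trace
            Q += alpha * delta * e                    # credit recently visited pairs
            return Q, e

        Q = np.zeros((5, 2)); e = np.zeros((5, 2))
        Q, e = sarsa_lambda_step(Q, e, s=0, a=1, r=1.0, s2=1, a2=0)
        print(Q[0, 1])   # the visited pair receives the largest update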

  17. Consumer visual appraisal and shelf life of leg chops from suckling kids raised with natural milk or milk replacer.

    Science.gov (United States)

    Ripoll, Guillermo; Alcalde, María J; Argüello, Anastasio; Córdoba, María G; Panea, Begoña

    2018-05-01

    The use of milk replacers to feed suckling kids could affect the shelf life and appearance of the meat. Leg chops were evaluated by consumers and the instrumental color was measured. A machine learning algorithm was used to relate them. The aim of this experiment was to study the shelf life of the meat of kids reared with dam's milk or milk replacers and to ascertain which illuminant and instrumental color variables are used by consumers as criteria to evaluate that visual appraisal. Meat from kids reared with milk replacers was more valuable and had a longer shelf life than meat from kids reared with natural milk. Consumers used the color of the whole surface of the leg chop to assess the appearance of meat. Lightness and hue angle were the prime cues used to evaluate the appearance of meat. Illuminant D65 was more useful for relating the visual appraisal with the instrumental color using a machine learning algorithm. The machine learning algorithms showed that the underlying rules used by consumers to evaluate the appearance of suckling kid meat are not at all linear and can be computationally schematized into a simple algorithm. © 2017 Society of Chemical Industry.

  18. Support vector machine for the diagnosis of malignant mesothelioma

    Science.gov (United States)

    Ushasukhanya, S.; Nithyakalyani, A.; Sivakumar, V.

    2018-04-01

    Malignant mesothelioma is a disease in which malignant (cancer) cells form in the lining of the chest or abdomen. Exposure to asbestos can affect the risk of malignant mesothelioma. Signs and symptoms of malignant mesothelioma include shortness of breath and pain under the rib cage. Tests that examine the inside of the chest and abdomen are used to detect (find) and diagnose malignant mesothelioma. Certain factors affect prognosis (chance of recovery) and treatment options. In this review, Support Vector Machine (SVM) classifiers were used for mesothelioma disease diagnosis. SVM outputs are compared on the same mesothelioma data set. The support vector machine algorithm gives 92.5% accuracy, obtained by means of 3-fold cross-validation. The mesothelioma disease dataset was taken from institutional reports from Turkey.

  19. Vector entropy imaging theory with application to computerized tomography

    International Nuclear Information System (INIS)

    Wang Yuanmei; Cheng Jianping; Heng, Pheng Ann

    2002-01-01

    Medical imaging theory for x-ray CT and PET is based on image reconstruction from projections. In this paper a novel vector entropy imaging theory under the framework of multiple criteria decision making is presented. We also study the most frequently used image reconstruction methods, namely, least square, maximum entropy, and filtered back-projection methods under the framework of the single performance criterion optimization. Finally, we introduce some of the results obtained by various reconstruction algorithms using computer-generated noisy projection data from the Hoffman phantom and real CT scanner data. Comparison of the reconstructed images indicates that the vector entropy method gives the best in error (difference between the original phantom data and reconstruction), smoothness (suppression of noise), grey value resolution and is free of ghost images. (author)

  20. Sustainability Evaluation of Power Grid Construction Projects Using Improved TOPSIS and Least Square Support Vector Machine with Modified Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2018-01-01

    Full Text Available The electric power industry is of great significance in promoting social and economic development and improving people's living standards. Power grid construction is a necessary part of infrastructure construction, whose sustainability plays an important role in economic development, environmental protection and social progress. In order to effectively evaluate the sustainability of power grid construction projects, in this paper, we first identified 17 criteria from four dimensions including economy, technology, society and environment to establish the evaluation criteria system. After that, grey incidence analysis was used to modify the traditional Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), which made it possible to evaluate the sustainability of electric power construction projects from the visual angle of similarity and nearness. Then, in order to simplify the procedure of expert scoring and computation, on the basis of the evaluation results of the improved TOPSIS, a model using the Modified Fly Optimization Algorithm (MFOA) to optimize the Least Square Support Vector Machine (LSSVM) was established. Finally, a numerical example was given to demonstrate the effectiveness of the proposed model.

  1. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-01-01

    Full Text Available For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity vector and position vector of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process.
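
    The sketch below illustrates a PSO-adjusted velocity update in the spirit of hybrid PSO-GSA schemes: the GSA gravitational acceleration is blended with a PSO-style pull toward the best solution found so far. The coefficients are assumptions, and the full GSA mass/force computation is omitted.

        import numpy as np

        def hybrid_velocity(v, x, acc_gsa, gbest, w=0.6, c1=1.5, c2=1.5, rng=None):
            # GSA gravitational acceleration blended with a PSO pull toward the
            # best-so-far solution; w, c1, c2 are assumed coefficients
            rng = rng or np.random.default_rng()
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            return w * v + c1 * r1 * acc_gsa + c2 * r2 * (gbest - x)

        rng = np.random.default_rng(3)
        x = rng.random(4); v = np.zeros(4)
        acc_gsa = rng.normal(size=4)        # stands in for the full GSA force/mass step
        gbest = np.ones(4)
        v = hybrid_velocity(v, x, acc_gsa, gbest, rng=rng)
        x = x + v                           # position update
        print(x)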

  2. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents present an extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial machines), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machines and architecture types are being used for these applications. many refs.

  3. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    Full Text Available We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
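
    A minimal sketch of this pipeline is shown below; the spectrogram shapes, template, and labels are random placeholders, and plain (unnormalized) cross-correlation stands in for whatever similarity measure the system actually uses.

        import numpy as np
        from scipy.signal import fftconvolve
        from sklearn.ensemble import RandomForestClassifier

        def similarity_features(spectrogram, template):
            # slide the template across the spectrogram (cross-correlation via FFT)
            sim = fftconvolve(spectrogram, template[::-1, ::-1], mode="valid").ravel()
            # summarize the similarity vector with simple statistics
            return [sim.max(), sim.mean(), sim.std(), float(np.argmax(sim))]

        # placeholder data: 40 random "recordings" and a random call template
        rng = np.random.default_rng(0)
        specs = [rng.random((64, 400)) for _ in range(40)]
        template = rng.random((64, 30))
        labels = rng.integers(0, 2, size=40)     # presence/absence (placeholder)

        X = [similarity_features(s, template) for s in specs]
        clf = RandomForestClassifier(n_estimators=200).fit(X, labels)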

  4. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
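
    The core idea can be sketched in a few lines: encode 2-D vectors as complex numbers and solve for complex coefficients by least squares, so each coefficient rotates and scales its explanatory vector variable. The data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)
        n, p = 100, 2
        # independent vector variables encoded as complex numbers
        V = rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))
        true_beta = np.array([0.5 + 0.2j, -0.3 + 0.7j])   # complex coefficients
        y = V @ true_beta + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

        beta, *_ = np.linalg.lstsq(V, y, rcond=None)      # complex least squares
        print(beta)                                        # close to true_beta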

  5. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrate the advantages of the proposed modal parameter estimator in suppressing overestimation and handling short data records. A laboratory experiment further validates the proposed estimator.

  6. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmissions over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with a split vector quantizer. After that, we applied the LSF-SSCOVQ-RC encoder (with weighted distance) for the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields significant improvements in LSF encoding performance by ensuring reliable transmission over noisy channels.

  7. Basic Algorithms for the Asynchronous Reconfigurable Mesh

    Directory of Open Access Journals (Sweden)

    Yosi Ben-Asher

    2002-01-01

    Full Text Available Many constant time algorithms for various problems have been developed for the reconfigurable mesh (RM) in the past decade. All these algorithms are designed to work with synchronous execution, with no regard for the fact that large size RMs will probably be asynchronous. A similar observation about the PRAM model motivated many researchers to develop algorithms and complexity measures for the asynchronous PRAM (APRAM). In this work, we show how to define the asynchronous reconfigurable mesh (ARM) and how to measure the complexity of asynchronous algorithms executed on it. We show that connecting all processors in a row of an n×n ARM (the analog of barrier synchronization in the APRAM model) can be solved with complexity Θ(n log n). Intuitively, this is the average work time for solving such a problem. Next, we describe a general technique for simulating T-step synchronous RM algorithms on the ARM with complexity Θ(T·n² log n). Finally, we consider the simulation of the classical synchronous algorithm for counting the number of non-zero bits in an n-bit vector; by exploiting the properties of the algorithm being simulated, one can (at least in the case of counting) improve upon the general simulation.

  8. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Full Text Available Object tracking is one of the main fields within computer vision. Amongst the various methods/approaches for object detection and tracking, the background subtraction approach makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate the motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search pattern (MDS) algorithm is proposed, using a small diamond shape search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape pattern and gradually grows into a large diamond shape pattern, based on the point with minimum cost function; the algorithm ends with the small diamond pattern. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out by using the background subtraction approach, and finally the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
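
    A simplified diamond-search skeleton is sketched below to make the pattern idea concrete; it uses a five-point large diamond rather than the classic nine-point pattern and is not the proposed MDS itself, and the test image is synthetic.

        import numpy as np

        def sad(cur, ref, x, y, dx, dy, B=16):
            # sum of absolute differences between a block and a shifted candidate
            blk = cur[y:y+B, x:x+B]
            cand = ref[y+dy:y+dy+B, x+dx:x+dx+B]
            return np.inf if cand.shape != blk.shape else np.abs(blk - cand).sum()

        def diamond_search(cur, ref, x, y, B=16):
            # refine with the large diamond until the center wins,
            # then finish with one small-diamond step
            large = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2)]
            small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
            mv = (0, 0)
            while True:
                best = min(large, key=lambda d: sad(cur, ref, x, y, mv[0]+d[0], mv[1]+d[1], B))
                if best == (0, 0):
                    break
                mv = (mv[0] + best[0], mv[1] + best[1])
            best = min(small, key=lambda d: sad(cur, ref, x, y, mv[0]+d[0], mv[1]+d[1], B))
            return (mv[0] + best[0], mv[1] + best[1])

        yy, xx = np.mgrid[0:128, 0:128]
        ref = (np.sin(yy / 7.0) + np.cos(xx / 5.0)) * 100   # smooth synthetic frame
        cur = np.roll(ref, shift=(2, 3), axis=(0, 1))       # known motion
        print(diamond_search(cur, ref, x=48, y=48))         # expected near (-3, -2)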

  9. The Efficient Use of Vector Computers with Emphasis on Computational Fluid Dynamics : a GAMM-Workshop

    CERN Document Server

    Gentzsch, Wolfgang

    1986-01-01

    The GAMM Committee for Numerical Methods in Fluid Mechanics organizes workshops which should bring together experts of a narrow field of computational fluid dynamics (CFD) to exchange ideas and experiences in order to speed up the development in this field. In this sense it was suggested that a workshop should treat the solution of CFD problems on vector computers. Thus we organized a workshop with the title "The efficient use of vector computers with emphasis on computational fluid dynamics". The workshop took place at the Computing Centre of the University of Karlsruhe, March 13-15, 1985. The participation had been restricted to 22 people of 7 countries. 18 papers have been presented. In the announcement of the workshop we wrote: "Fluid mechanics has actively stimulated the development of superfast vector computers like the CRAY's or CYBER 205. Now these computers on their turn stimulate the development of new algorithms which result in a high degree of vectorization (scalar/vectorized execution-time). But w...

  10. 3G vector-primer plasmid for constructing full-length-enriched cDNA libraries.

    Science.gov (United States)

    Zheng, Dong; Zhou, Yanna; Zhang, Zidong; Li, Zaiyu; Liu, Xuedong

    2008-09-01

    We designed a 3G vector-primer plasmid for the generation of full-length-enriched complementary DNA (cDNA) libraries. By employing the terminal transferase activity of reverse transcriptase and the modified strand replacement method, this plasmid (assembled with a polydT end and a deoxyguanosine [dG] end) combines priming full-length cDNA strand synthesis and directional cDNA cloning. As a result, the number of steps involved in cDNA library preparation is decreased while simplifying downstream gene manipulation, sequencing, and subcloning. The 3G vector-primer plasmid method yields fully represented plasmid primed libraries that are equivalent to those made by the SMART (switching mechanism at 5' end of RNA transcript) approach.

  11. Real-time perspective correction in video stream

    Directory of Open Access Journals (Sweden)

    Glagolev Vladislav

    2018-01-01

    Full Text Available The paper describes an algorithm used for software perspective correction. The algorithm uses the camera's orientation angles and transforms the coordinates of pixels on a source image to coordinates on a virtual image from a camera whose focal plane is perpendicular to the gravity vector. This algorithm can be used as a low-cost replacement for a gyrostabilizer in applications that rule out movable parts or heavy and pricey equipment.
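
    The mapping can be sketched as a homography H = K·Rᵀ·K⁻¹ built from the orientation angles; the roll/pitch composition order and the camera intrinsics below are assumptions for illustration.

        import numpy as np

        def level_homography(roll, pitch, f, cx, cy):
            # homography mapping pixels of the tilted camera onto a virtual camera
            # whose focal plane is perpendicular to gravity; angle conventions and
            # intrinsics (f, cx, cy in pixels) are assumptions
            K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])  # roll
            Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])  # pitch
            return K @ (Rx @ Rz).T @ np.linalg.inv(K)

        H = level_homography(roll=0.05, pitch=0.1, f=800.0, cx=640.0, cy=360.0)
        # e.g. warped = cv2.warpPerspective(frame, H, (1280, 720))
        print(H)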

  12. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length), and it usually converges to the optimal solution within 20 iterations, far fewer than the state-of-the-art algorithms.
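
    The gradient stage can be caricatured as below for real-valued signals: an amplitude-based least-squares gradient step with truncation of unreliable measurements. The step size, threshold, and the plain random initialization are assumptions (the paper's method uses an enhanced initialization and incremental updates), so this toy loop is only indicative.

        import numpy as np

        def taf_step(z, A, psi, mu=0.6, gamma=0.7):
            # one truncated-amplitude-flow-style gradient step for real signals;
            # mu and the truncation threshold gamma are assumed values
            Az = A @ z
            keep = np.abs(Az) > gamma * psi          # drop unreliable measurements
            grad = A[keep].T @ (Az[keep] - psi[keep] * np.sign(Az[keep])) / len(psi)
            return z - mu * grad

        rng = np.random.default_rng(5)
        n, m = 64, 160                               # ~2.5x oversampling
        A = rng.normal(size=(m, n))
        x = rng.normal(size=n)
        psi = np.abs(A @ x)                          # magnitude-only measurements
        z = rng.normal(size=n)                       # crude init (not the spectral one)
        for _ in range(500):
            z = taf_step(z, A, psi)
        err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
        print(f"relative error (up to sign): {err:.3f}")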

  13. Application of ANN and fuzzy logic algorithms for streamflow ...

    Indian Academy of Sciences (India)

    The present study focusses on the development of models using ANN and fuzzy logic (FL) algorithms for predicting the streamflow for the catchment of the Savitri River Basin. The input vector to these models comprised daily rainfall, mean daily evaporation, mean daily temperature and lagged streamflow. In the present study, 20 years ...

  14. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    Science.gov (United States)

    Wang, Juan

    2018-03-01

    The iris image is easily polluted by noise and uneven light. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with hybrid features. 2D-Gabor filters and the GLCM are employed to generate a multi-granularity hybrid feature vector: the 2D-Gabor filters capture low-to-intermediate frequency texture information, and the GLCM features capture high frequency texture information. Finally, we utilize an extreme learning machine for iris recognition. Experimental results reveal that our proposed ELM based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance. The proposed ELM-MGIR algorithm outperforms other mainstream iris recognition algorithms.

  15. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    Science.gov (United States)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  16. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  17. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    Science.gov (United States)

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
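
    A minimal sketch of such an ensemble, combining bagging without replacement with per-estimator feature selection in scikit-learn, is given below; the sampling fraction, the number of selected features, and the kernel settings are assumptions, not the challenge configuration.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # each bagged estimator fits its own scaler, feature selector and SVM
        base = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=20),
                             SVC(kernel="rbf", C=1.0))
        ensemble = BaggingClassifier(base, n_estimators=50,
                                     max_samples=0.8, bootstrap=False)  # no replacement

        # synthetic 4-class stand-in for the NC/MCI/cMCI/AD problem
        X, y = make_classification(n_samples=240, n_features=300, n_informative=25,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)
        ensemble.fit(X, y)
        print(ensemble.score(X, y))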

  18. Determination of foodborne pathogenic bacteria by multiplex PCR-microchip capillary electrophoresis with genetic algorithm-support vector regression optimization.

    Science.gov (United States)

    Li, Yongxin; Li, Yuanqian; Zheng, Bo; Qu, Lingli; Li, Can

    2009-06-08

    A rapid and sensitive method based on microchip capillary electrophoresis with condition optimization of genetic algorithm-support vector regression (GA-SVR) was developed and applied to simultaneous analysis of multiplex PCR products of four foodborne pathogenic bacteria. Four pairs of oligonucleotide primers were designed to exclusively amplify the targeted gene of Vibrio parahemolyticus, Salmonella, Escherichia coli (E. coli) O157:H7, Shigella and the quadruplex PCR parameters were optimized. At the same time, GA-SVR was employed to optimize the separation conditions of DNA fragments in microchip capillary electrophoresis. The proposed method was applied to simultaneously detect the multiplex PCR products of four foodborne pathogenic bacteria under the optimal conditions within 8 min. The levels of detection were as low as 1.2 x 10(2) CFU mL(-1) of Vibrio parahemolyticus, 2.9 x 10(2) CFU mL(-1) of Salmonella, 8.7 x 10(1) CFU mL(-1) of E. coli O157:H7 and 5.2 x 10(1) CFU mL(-1) of Shigella, respectively. The relative standard deviation of migration time was in the range of 0.74-2.09%. The results demonstrated that the good resolution and less analytical time were achieved due to the application of the multivariate strategy. This study offers an efficient alternative to routine foodborne pathogenic bacteria detection in a fast, reliable, and sensitive way.

  19. Inverse Modeling of Soil Hydraulic Parameters Based on a Hybrid of Vector-Evaluated Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yi-Bo Li

    2018-01-01

    Full Text Available The accurate estimation of the soil hydraulic parameters (θs, α, n, and Ks) of the van Genuchten–Mualem model has attracted considerable attention. In this study, we proposed a new two-step inversion method, which first estimates the hydraulic parameter θs using an objective function based on the final water content, and subsequently estimates the soil hydraulic parameters α, n, and Ks using a vector-evaluated genetic algorithm and particle swarm optimization (VEGA-PSO) method based on objective functions over cumulative infiltration and infiltration rate. The parameters were inversely estimated for four types of soils (sand, loam, silt, and clay) under an in silico experiment simulating tension disc infiltration at three initial water content levels. The results indicate that the method is excellent and robust. Because the objective function has multiple local minima in a tiny range near the true values, inverse estimation of the hydraulic parameters is difficult; however, the estimated soil water retention curves and hydraulic conductivity curves are nearly identical to the true curves. In addition, the proposed method is able to estimate the hydraulic parameters accurately despite substantial measurement errors in initial water content, final water content, and cumulative infiltration, proving that the method is feasible and practical for field application.

  20. Design optimization of tailor-rolled blank thin-walled structures based on ɛ-support vector regression technique and genetic algorithm

    Science.gov (United States)

    Duan, Libin; Xiao, Ning-cong; Li, Guangyao; Cheng, Aiguo; Chen, Tao

    2017-07-01

    Tailor-rolled blank thin-walled (TRB-TH) structures have become important vehicle components owing to their advantages of light weight and crashworthiness. The purpose of this article is to provide an efficient lightweight design for improving the energy-absorbing capability of TRB-TH structures under dynamic loading. A finite element (FE) model for TRB-TH structures is established and validated by performing a dynamic axial crash test. Different material properties for individual parts with different thicknesses are considered in the FE model. Then, a multi-objective crashworthiness design of the TRB-TH structure is constructed based on the ɛ-support vector regression (ɛ-SVR) technique and non-dominated sorting genetic algorithm-II. The key parameters (C, ɛ and σ) are optimized to further improve the predictive accuracy of ɛ-SVR under limited sample points. Finally, the technique for order preference by similarity to the ideal solution method is used to rank the solutions in Pareto-optimal frontiers and find the best compromise optima. The results demonstrate that the light weight and crashworthiness performance of the optimized TRB-TH structures are superior to their uniform thickness counterparts. The proposed approach provides useful guidance for designing TRB-TH energy absorbers for vehicle bodies.