#### Sample records for vector replacement algorithm

1. Vector Network Coding Algorithms

OpenAIRE

2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

2. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

Science.gov (United States)

Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

2015-12-01

Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These are difficult to program, especially hard to realize in hardware, and their computational cost increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector for each endmember spectrum via the Gram-Schmidt process. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is shown through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is the lowest of the three. Finally, experimental results on synthetic and real images provide further evidence of the method's effectiveness.
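The projection-and-ratio idea described in this abstract can be sketched in a few lines; this is an illustrative pure-Python reading of the description, not the paper's code, and the function names and Gram-Schmidt bookkeeping are ours:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonal_vector(d, others):
    """Gram-Schmidt: remove from endmember spectrum d every component lying
    in the span of the other endmember spectra, leaving its orthogonal residual."""
    basis = []
    for s in others:
        v = list(s)
        for b in basis:
            c = dot(v, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        n = dot(v, v) ** 0.5
        if n > 1e-12:
            basis.append([vi / n for vi in v])
    q = list(d)
    for b in basis:
        c = dot(q, b)
        q = [qi - c * bi for qi, bi in zip(q, b)]
    return q

def abundance(pixel, d, others):
    """Unconstrained abundance of endmember d in the pixel spectrum:
    ratio of the pixel's and the endmember's projections onto q."""
    q = orthogonal_vector(d, others)
    return dot(pixel, q) / dot(d, q)
```

For an exact linear mixture the ratio recovers the abundances exactly, e.g. a pixel mixing 40% of one endmember with 60% of another yields 0.4 and 0.6.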

3. Automated Vectorization of Decision-Based Algorithms

Science.gov (United States)

James, Mark

2006-01-01

Virtually all existing vectorization algorithms are designed only to analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This work advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it naturally decomposes across parallel architectures.

4. Flash-Aware Page Replacement Algorithm

Directory of Open Access Journals (Sweden)

Guangxia Xu

2014-01-01

Due to the limited main-memory resources of consumer electronics equipped with NAND flash memory as the storage device, an efficient page replacement algorithm called FAPRA is proposed for NAND flash memory in light of its inherent characteristics. FAPRA introduces an efficient victim page selection scheme that takes into account the benefit-to-cost ratio of evicting each victim page candidate, the combined recency and frequency value, and the erase count of the block to which each page belongs. Since a dirty victim page often contains clean data that exist in both main memory and the NAND flash memory based storage device, FAPRA writes only the dirty data within the victim page back to the storage device in order to reduce redundant write operations. We conduct a series of trace-driven simulations, and the experimental results show that the proposed FAPRA algorithm outperforms state-of-the-art algorithms in terms of page hit ratio, number of write operations, runtime, and degree of wear leveling.
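A toy version of benefit-to-cost victim selection in the spirit of this abstract might look like the following; the scoring weights and the exact combination rule are hypothetical, since the abstract does not give FAPRA's formulas:

```python
def victim_score(page, now, alpha=0.5, beta=0.1):
    """Combined recency-frequency value, penalized by write-back cost and by
    block erase count. alpha and beta are illustrative weights, not FAPRA's."""
    crf = alpha * page["freq"] + (1 - alpha) / (1 + now - page["last_access"])
    cost = 2.0 if page["dirty"] else 1.0   # dirty pages must be written back
    wear = 1.0 + beta * page["erase_count"]  # avoid evicting from worn blocks
    return crf * cost * wear                 # lower score => better victim

def select_victim(pages, now):
    """Pick the page whose eviction is cheapest under the combined score."""
    return min(pages, key=lambda p: victim_score(p, now))
```

Under this scoring, a cold, rarely used, clean page on a lightly worn block is preferred over both hot pages and dirty pages on heavily erased blocks.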

5. Gradient Evolution-based Support Vector Machine Algorithm for Classification

Science.gov (United States)

Zulvia, Ferani E.; Kuo, R. J.

2018-03-01

This paper proposes a classification algorithm based on the support vector machine (SVM) and the gradient evolution (GE) algorithm. The SVM algorithm has been widely used in classification, but its results are significantly influenced by its parameters. This paper therefore proposes an improvement of the SVM algorithm that can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm as a global optimizer to determine the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified on several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.

6. Support vector machines optimization based theory, algorithms, and extensions

CERN Document Server

Deng, Naiyang; Zhang, Chunhua

2013-01-01

Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi...

7. Researches on Key Algorithms in Analogue Seismogram Records Vectorization

Directory of Open Access Journals (Sweden)

Maofa WANG

2014-09-01

Historical paper seismograms are very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is an important problem to be resolved. In our study, a new tracing algorithm for simulated seismogram curves, based on a visual-field feature, is presented. We also describe the workflow for vectorizing simulated seismograms, and an analog seismic record vectorization system has been implemented independently. Using it, we can precisely and quickly vectorize analog seismic records (professionals need to participate interactively).

8. Vectorization of linear discrete filtering algorithms

Science.gov (United States)

Schiess, J. R.

1977-01-01

Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.

9. An efficient parallel algorithm for matrix-vector multiplication

Energy Technology Data Exchange (ETDEWEB)

Hendrickson, B.; Leland, R.; Plimpton, S.

1993-03-01

The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
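The O(n/√p + log(p)) communication bound comes from giving each processor a 2-D block of the matrix and reducing partial sums along processor rows. A serial simulation of that decomposition (illustrative only, not the authors' hypercube code; pr and pc must divide n in this sketch) is:

```python
def block_matvec(A, x, pr, pc):
    """Simulate a pr x pc processor grid for y = A @ x: each 'processor'
    (i, j) owns one block of A, computes a partial product against its slice
    of x, and partial results are summed along processor rows (the reduction
    that costs O(n/sqrt(p) + log p) communication on a real machine)."""
    n = len(A)
    rb, cb = n // pr, n // pc          # block sizes (assumed to divide n)
    y = [0.0] * n
    for i in range(pr):
        for j in range(pc):            # the (i, j) loop body is one processor's work
            for r in range(i * rb, (i + 1) * rb):
                s = 0.0
                for c in range(j * cb, (j + 1) * cb):
                    s += A[r][c] * x[c]
                y[r] += s              # row-wise reduction of partial sums
    return y
```

The result is identical to an ordinary matrix-vector product; only the order of the work is arranged to mirror the parallel data layout.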

10. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

Directory of Open Access Journals (Sweden)

Zhongyi Hu

2013-01-01

Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among existing forecasting models, support vector regression (SVR) has gained much attention. Since the performance of SVR depends heavily on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model not only yields more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperforms the hybrid algorithms in the related literature.

11. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

International Nuclear Information System (INIS)

Heifetz, D.B.

1987-04-01

Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task.

12. DC Algorithm for Extended Robust Support Vector Machine.

Science.gov (United States)

Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

2017-05-01

Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while the extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider the extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines the two types of nonconvexity from robust SVMs and Eν-SVM. Because of these two nonconvexities, the algorithm we proposed in earlier work needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not, and it solves the nonconvex problem in the extended range only heuristically. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with both types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

13. Face recognition algorithm using extended vector quantization histogram features.

Science.gov (United States)

Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

2018-01-01

In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

14. Fast vector quantization using a Bat algorithm for image compression

Directory of Open Access Journals (Sweden)

Chiranjeevi Karri

2016-06-01

Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ), generates a locally optimal codebook, which results in a lower PSNR value. The performance of VQ depends on an appropriate codebook, so researchers have proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and the Firefly algorithm (FA) generate efficient codebooks but suffer, respectively, from unstable convergence when particle velocity is high and from the non-availability of brighter fireflies in the search space. In this paper, we propose a new algorithm called BA-LBG, which applies the Bat Algorithm to the initial solution of LBG. It produces an efficient codebook in less computational time and yields very good PSNR thanks to its automatic zooming feature, which uses the adjustable pulse emission rate and loudness of bats. From the results, we observed that BA-LBG has a higher PSNR than LBG, PSO-LBG, Quantum PSO-LBG, HBMO-LBG, and FA-LBG, and its average convergence speed is 1.841 times faster than that of HBMO-LBG and FA-LBG, with no significant difference from PSO.
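For reference, the plain LBG iteration that BA-LBG starts from can be sketched as follows; the deterministic initialization from the first k training vectors is our simplification, not the paper's procedure:

```python
def lbg(vectors, k, iters=10):
    """Plain LBG: alternate nearest-codeword assignment and centroid update.
    This local search is what the paper refines with a Bat-algorithm search
    to escape poor local optima."""
    codebook = [list(v) for v in vectors[:k]]   # simplistic deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[i])))
            clusters[j].append(v)               # nearest-codeword assignment
        for j, cl in enumerate(clusters):
            if cl:                              # empty cells keep their codeword
                codebook[j] = [sum(c) / len(cl) for c in zip(*cl)]
    return codebook
```

On two well-separated clusters of training vectors, the two codewords converge to the cluster means, which is exactly the locally optimal codebook the abstract refers to.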

15. A Semisupervised Support Vector Machines Algorithm for BCI Systems

Science.gov (United States)

Qin, Jianzhao; Li, Yuanqing; Sun, Wei

2007-01-01

As an emerging technology, brain-computer interfaces (BCIs) provide new communication channels that translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of the brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training the semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to online BCI systems. Additionally, many studies suggest that the common spatial pattern (CSP) is very effective in discriminating two different brain states; however, CSP needs a sufficiently large labeled data set. In order to overcome this drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141

16. Parallel-Vector Algorithm For Rapid Structural Analysis

Science.gov (United States)

Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

1993-01-01

New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

17. Single Directional SMO Algorithm for Least Squares Support Vector Machines

Directory of Open Access Journals (Sweden)

Xigao Shao

2013-01-01

Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO) type decomposition methods is proposed. With the new method, we can select a single direction to achieve convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster.
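For context, the LS-SVM training problem that such decomposition methods solve iteratively reduces to a single linear KKT system. A direct dense solve for a tiny regression problem with a linear kernel, illustrative only and impractical at scale (which is why SMO-type methods exist), looks like this; the helper names are ours:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(xs, ys, gamma=1e6):
    """LS-SVM regression dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
    here with a linear kernel K_ij = x_i * x_j."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [xs[i] * xs[j] for j in range(n)]
        row[i + 1] += 1.0 / gamma        # ridge term on the diagonal
        A.append(row)
    sol = solve(A, [0.0] + list(ys))
    return sol[0], sol[1:]               # bias b, multipliers alpha

def lssvm_predict(x, xs, b, alpha):
    return b + sum(a * xi * x for a, xi in zip(alpha, xs))
```

With a linear kernel and points lying on a line, the fitted model reproduces that line up to the regularization term.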

18. Vectorization of a penalty function algorithm for well scheduling

Science.gov (United States)

Absar, I.

1984-01-01

In petroleum engineering, the oil production profile of a reservoir can be simulated using a finite gridded model. This profile is affected by the number and choice of wells, which in turn result from various production limits and constraints including, for example, the economic minimum well spacing, the number of drilling rigs available, and the time required to drill and complete a well. After a well is available it may be shut in because of excessive water or gas production. In order to optimize field performance, a penalty function algorithm was developed for scheduling wells. For an example with some 343 wells and 15 different constraints, the scheduling routine vectorized for the CYBER 205 averaged 560 times faster performance than the scalar version.

19. Effective data compaction algorithm for vector scan EB writing system

Science.gov (United States)

Ueki, Shinichi; Ashida, Isao; Kawahira, Hiroichi

2001-01-01

We have developed a new mask data compaction algorithm dedicated to vector scan electron beam (EB) writing systems for the 0.13 μm device generation. Large mask data size has become a significant problem in mask data processing, for which data compaction is an important technique. Our new mask data compaction uses 'array' and 'cell' representations, both supported by the mask data format of the vector scan EB writing system. The array representation has a pitch and a number of repetitions in both the X and Y directions. The cell representation has a definition of a figure group and its references. The new data compaction method has the following three steps. (1) Search for arrays of figures, selecting array pitches so that a large number of figures are included. (2) Find identical arrays that have the same repetition pitch and number of figures. (3) Search for cells of figures, where the figures in each cell have an identical positional relationship. With this new method, the mask data of a 4M-DRAM block gate layer with peripheral circuits, 202 Mbytes without compaction, was compacted to 6.7 Mbytes in 20 minutes on a 500 MHz PC.

20. Efficient four fragment cloning for the construction of vectors for targeted gene replacement in filamentous fungi

DEFF Research Database (Denmark)

Frandsen, Rasmus John Normand; Andersson, Jens A.; Kristensen, Matilde Bylov

2008-01-01

Background: The rapid increase in whole genome fungal sequence information allows large scale functional analyses of target genes. Efficient transformation methods to obtain site-directed gene replacement, targeted over-expression by promoter replacement, in-frame epitope tagging or fusion...... of coding sequences with fluorescent markers such as GFP are essential for this process. Construction of vectors for these experiments depends on the directional cloning of two homologous recombination sequences on each side of a selection marker gene. Results: Here, we present a USER Friendly cloning based...

1. Global and Local Page Replacement Algorithms on Virtual Memory Systems for Image Processing

OpenAIRE

1985-01-01

Three virtual memory systems for image processing, differing from one another in their frame allocation and page replacement algorithms, were examined experimentally with respect to their page-fault characteristics. The hypothesis that global page replacement algorithms are susceptible to thrashing held in the raster-scan experiment, while it did not in another, non-raster-scan experiment. The results of the experiments may also be useful in making parallel image processors more efficient, while they a...

2. Support vector machines and evolutionary algorithms for classification single or together?

CERN Document Server

Stoean, Catalin

2014-01-01

When discussing classification, support vector machines are known to be a capable and efficient technique for learning and predicting with high accuracy within a quick time frame. Yet their black-box way of doing so makes practical users quite circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this ‘masked hero’ can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a friendlier approach to tag along and offer the much-desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities for achieving high accuracy, comprehensibility, reasonable runtime, and unconstrained performance.

3. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

Science.gov (United States)

Luo, Liyan; Xu, Luping; Zhang, Hua

2015-07-07

In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of each observed star. This pattern remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly, and the characteristics of the feature vector, together with the proposed search strategy for the matching pattern, allow the recognition result to be reached quickly. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification, and its recognition accuracy and robustness are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms these three star identification algorithms.

4. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

Science.gov (United States)

Jiexian, Zeng; Xiupeng, Liu

2014-01-01

A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can yield the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved by a Gaussian function, and the distance coherence vector is then extracted from the contours of the original image and of the evolved images. The multiscale distance coherence vector is obtained by a suitable weighting of the distance coherence vectors of the evolved image contours. This algorithm is not only invariant to translation, rotation, and scaling transformations but also has good noise robustness. The experimental results show that the algorithm achieves a higher recall rate and precision for the retrieval of images polluted by noise. PMID:24883416

5. Support Vector Regression and Genetic Algorithm for HVAC Optimal Operation

Directory of Open Access Journals (Sweden)

Ching-Wei Chen

2016-01-01

This study covers records of various parameters affecting the power consumption of air-conditioning systems. Using the Support Vector Machine (SVM), the chiller power consumption model, secondary chilled-water pump power consumption model, air handling unit fan power consumption model, and air handling unit load model were established. The R2 of the models all reached 0.998, and the training time was far shorter than that of a neural network. Through a genetic algorithm, the combination of operating parameters with the least power consumption of air-conditioning operation was searched, and the air handling unit load in line with the air-conditioning cooling load was predicted. The experimental results show that, for the combination of operating parameters with the least power consumption in line with the cooling load obtained through the genetic algorithm search, the power consumption of the air-conditioning systems was reduced by 22% compared to fixed operating parameters, indicating significant energy efficiency.

6. ALGORITHM OF SAR SATELLITE ATTITUDE MEASUREMENT USING GPS AIDED BY KINEMATIC VECTOR

Institute of Scientific and Technical Information of China (English)

2007-01-01

In this paper, in order to improve the accuracy of Synthetic Aperture Radar (SAR) satellite attitude determination using the Global Positioning System (GPS) wide-band carrier phase, the SAR satellite attitude kinematic vector and a Kalman filter are introduced. Introducing the state variable function of the GPS attitude determination algorithm for the SAR satellite by means of the kinematic vector, and describing the observation function by the GPS wide-band carrier phase, the paper uses the Kalman filter algorithm to obtain the attitude variables of the SAR satellite. Comparing the simulation results of the Kalman filter algorithm with those of the least squares algorithm and the explicit solution, it is shown that the Kalman filter algorithm performs best.
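The core predict/update cycle of a Kalman filter, reduced here to a scalar state for illustration, is shown below; the constants F, Q, H, and R are made-up model parameters, not the paper's SAR/GPS formulation:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.04):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance; z: new measurement.
    F: state transition, Q: process noise, H: observation, R: measurement noise
    (all illustrative constants)."""
    # predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Fed repeated measurements of a constant angle, the estimate converges to that angle and the posterior variance settles well below its initial value, which is the behavior the abstract's comparison against least squares exploits.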

7. Parallel/vector algorithms for the spherical SN transport theory method

International Nuclear Information System (INIS)

Haghighat, A.; Mattis, R.E.

1990-01-01

This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) SN transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)

8. Analysis of human protein replacement stable cell lines established using snoMEN-PR vector.

Directory of Open Access Journals (Sweden)

Motoharu Ono

The study of the function of many human proteins is often hampered by technical limitations, such as cytotoxicity and phenotypes that result from overexpression of the protein of interest together with the endogenous version. Here we present the snoMEN (snoRNA Modulator of gene ExpressioN) vector technology for generating stable cell lines in which expression of the endogenous protein can be reduced and replaced by an exogenous protein, such as a fluorescent protein (FP)-tagged version. SnoMEN are snoRNAs engineered to contain complementary sequences that can promote knock-down of targeted RNAs. We have established and characterised two such partial protein replacement human cell lines (snoMEN-PR). Quantitative mass spectrometry was used to analyse the specificity of knock-down and replacement at the protein level, and also showed an increased pull-down efficiency of protein complexes containing exogenous, tagged proteins in the protein replacement cell lines, as compared with conventional co-expression strategies. The snoMEN approach facilitates the study of mammalian proteins, particularly those that have so far been difficult to investigate by exogenous expression, and has wide applications in basic and applied gene-expression research.

9. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

Directory of Open Access Journals (Sweden)

Xuyun FU

2018-01-01

The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective, and the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the result of the traditional method, and it can provide support for determining which LLPs to replace when defining the maintenance workscope of an aircraft engine. The algorithm is therefore applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

10. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

KAUST Repository

Buse, Gerrit; Pfluger, Dirk; Murarasu, Alin; Jacob, Riko

2012-01-01

...performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations...

11. Some Algorithms for the Conditional Mean Vector and Covariance Matrix

Directory of Open Access Journals (Sweden)

John F. Monahan

2006-08-01

We consider here the problem of computing the mean vector and covariance matrix for a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
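The conditional mean in question is mu1 + S12 * inv(S22) * (x2 - mu2), and it can be computed without forming the inverse explicitly by solving one linear system. A small sketch (our own helper names, not the article's code, and unrelated to its sweep-operator updating) is:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def conditional_mean(mu1, mu2, S12, S22, x2):
    """mu_{1|2} = mu1 + S12 @ S22^{-1} @ (x2 - mu2), via one solve
    instead of an explicit inverse."""
    d = [a - b for a, b in zip(x2, mu2)]
    w = solve(S22, d)                    # S22^{-1} (x2 - mu2)
    return [m + sum(r * wi for r, wi in zip(row, w))
            for m, row in zip(mu1, S12)]
```

The conditional covariance S11 - S12 * inv(S22) * S21 can be obtained the same way, one solve per column of S21.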

12. A New Waveform Mosaic Algorithm in the Vectorization of Paper Seismograms

Directory of Open Access Journals (Sweden)

Maofa Wang

2014-11-01

Full Text Available Historical paper seismograms are a very important source of information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. In this paper, a new waveform mosaic algorithm for the vectorization of paper seismograms is presented. We also describe the technological process of waveform mosaicking, and a waveform mosaic system used to vectorize analog seismic records has been developed independently. Using it, we can accomplish waveform mosaicking precisely and rapidly when vectorizing analog seismic records.

13. A Modified Method Combined with a Support Vector Machine and Bayesian Algorithms in Biological Information

Directory of Open Access Journals (Sweden)

Wen-Gang Zhou

2015-06-01

Full Text Available With deep research into genomics and proteomics, the number of new protein sequences has expanded rapidly. Given the obvious shortcomings of the traditional experimental methods, namely high cost and low efficiency, computational methods for protein localization prediction have attracted much attention due to their convenience and low cost. Among machine learning techniques, neural networks and support vector machines (SVM) are often used as learning tools; due to its complete theoretical framework, SVM has been widely applied. In this paper, we improve the existing support vector machine algorithm by combining it with Bayesian algorithms, yielding a new improved algorithm. The proposed algorithm improves calculation efficiency and eliminates defects of the original algorithm. According to our verification, the method has proved valid; at the same time, it reduces calculation time and improves prediction efficiency.

14. Solution of single linear tridiagonal systems and vectorization of the ICCG algorithm on the Cray 1

International Nuclear Information System (INIS)

Kershaw, D.S.

1981-01-01

The numerical algorithms used to solve the physics equations in codes which model laser fusion are examined; it is found that a large number of subroutines require the solution of tridiagonal linear systems of equations. One-dimensional radiation transport, thermal and suprathermal electron transport, ion thermal conduction, and charged particle and neutron transport all require the solution of tridiagonal systems of equations. The standard algorithm that has been used in the past on CDC 7600s will not vectorize and so cannot take advantage of the large speed increases possible on the Cray-1 through vectorization. There is, however, an alternate algorithm for solving tridiagonal systems, called cyclic reduction, which allows for vectorization and which is optimal for the Cray-1. Software based on this algorithm is now being used in LASNEX to solve tridiagonal linear systems in the subroutines mentioned above. The new algorithm runs as much as five times faster than the standard algorithm on the Cray-1. The ICCG method is being used to solve the diffusion equation with a nine-point coupling scheme on the CDC 7600. In going from the CDC 7600 to the Cray-1, a large part of the algorithm consists of solving tridiagonal linear systems on each L line of the Lagrangian mesh in a manner which is not vectorizable. An alternate ICCG algorithm for the Cray-1 was developed which utilizes a block form of the cyclic reduction algorithm. This new algorithm allows full vectorization and runs as much as five times faster than the old algorithm on the Cray-1. It is now being used in Cray LASNEX to solve the two-dimensional diffusion equation in all the physics subroutines mentioned above
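
The cyclic reduction scheme mentioned above can be sketched compactly. This is a serial NumPy illustration for systems of size n = 2^k - 1, not the LASNEX Cray code; the point is that every row update within one reduction pass is independent of the others, which is what makes the method vectorizable where the standard forward-elimination recurrence is not:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    a: sub-diagonal (a[0] == 0), b: diagonal, c: super-diagonal
    (c[-1] == 0), d: right-hand side.  Requires n == 2**k - 1."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    # Forward reduction: each pass eliminates every other active unknown.
    # All updates within one pass are mutually independent (vectorizable).
    s = 1
    while 2 * s <= n:
        for i in range(2 * s - 1, n, 2 * s):
            al = a[i] / b[i - s]
            ga = c[i] / b[i + s]
            b[i] -= al * c[i - s] + ga * a[i + s]
            d[i] -= al * d[i - s] + ga * d[i + s]
            a[i] = -al * a[i - s]
            c[i] = -ga * c[i + s]
        s *= 2
    # Back substitution, from the single remaining center equation outward.
    x = np.zeros(n)
    while s >= 1:
        for i in range(s - 1, n, 2 * s):
            xm = x[i - s] if i - s >= 0 else 0.0
            xp = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * xm - c[i] * xp) / b[i]
        s //= 2
    return x
```

On vector hardware each inner `for` loop becomes a single gather/arithmetic/scatter sequence over the active indices, which is the source of the speedup the abstract reports.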

15. Genetic stability of gene targeted immunoglobulin loci. I. Heavy chain isotype exchange induced by a universal gene replacement vector.

Science.gov (United States)

Kardinal, C; Selmayr, M; Mocikat, R

1996-11-01

Gene targeting at the immunoglobulin loci of B cells is an efficient tool for studying immunoglobulin expression or generating chimeric antibodies. We have shown that vector integration induced by human immunoglobulin G1 (IgG1) insertion vectors results in subsequent vector excision mediated by the duplicated target sequence, whereas replacement events which could be induced by the same constructs remain stable. We could demonstrate that the distribution of the vector homology strongly influences the genetic stability obtained. To this end we developed a novel type of a heavy chain replacement vector making use of the heavy chain class switch recombination sequence. Despite the presence of a two-sided homology this construct is universally applicable irrespective of the constant gene region utilized by the B cell. In comparison to an integration vector the frequency of stable incorporation was strongly increased, but we still observed vector excision, although at a markedly reduced rate. The latter events even occurred with circular constructs. Linearization of the construct at various sites and the comparison with an integration vector that carries the identical homology sequence, but differs in the distribution of homology, revealed the following features of homologous recombination of immunoglobulin genes: (i) the integration frequency is only determined by the length of the homology flank where the cross-over takes place; (ii) a 5' flank that does not meet the minimum requirement of homology length cannot be complemented by a sufficient 3' flank; (iii) free vector ends play a role for integration as well as for replacement targeting; (iv) truncating recombination events are suppressed in the presence of two flanks. Furthermore, we show that the switch region that was used as 3' flank is non-functional in an inverted orientation.

16. A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms

Directory of Open Access Journals (Sweden)

Maofa Wang

2014-02-01

Full Text Available Historical paper seismograms are a very important source of information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms: it transforms an original scanned image into digital waveform data, and accurately tracing out all the key points of each curve in a seismogram is the foundation of the vectorization. In this paper, we present a new curve tracing algorithm based on local features, applicable to the automatic extraction of earthquake waveforms from paper seismograms.

17. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

Directory of Open Access Journals (Sweden)

Shengliang Zong

2017-01-01

Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions, and that the availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
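
A minimal sketch of how such a genetic search over the replacement threshold N might look. The cost function below is an illustrative amortization trade-off, not the paper's shock/repair model; it exists only to exercise the selection/crossover/mutation loop:

```python
import random

def toy_cost(N, cf=100.0, cr=2.0):
    """Illustrative average cost rate: a fixed replacement cost amortized
    over N repair cycles, plus repair cost growing with N.
    (Hypothetical stand-in for the paper's availability-constrained model.)"""
    return cf / N + cr * N

def ga_min(cost, lo=1, hi=50, pop_size=30, gens=60, pm=0.2, seed=1):
    """Tiny GA over the integer policy N in [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)              # tournament selection
            p1 = a if cost(a) < cost(b) else b
            a, b = rng.sample(pop, 2)
            p2 = a if cost(a) < cost(b) else b
            child = (p1 + p2) // 2                 # arithmetic crossover
            if rng.random() < pm:                  # bounded mutation
                child = min(hi, max(lo, child + rng.choice([-2, -1, 1, 2])))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

For a one-dimensional integer policy a brute-force scan would of course suffice; the GA framing pays off when the policy space grows (e.g., joint thresholds under an availability constraint, as in the paper).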

18. Exploiting Support Vector Machine Algorithm to Break the Secret Key

Directory of Open Access Journals (Sweden)

S. Hou

2018-04-01

Full Text Available Template attacks (TA) and support vector machines (SVM) are two effective methods in side channel attacks (SCAs). Almost all studies on SVM in SCAs assume that the required power traces are sufficient, which also implies that the number of profiling traces belonging to each class is equivalent. In a real attack scenario, however, there may not be enough power traces due to various restrictions. More specifically, the Hamming weight of the S-Box output yields 9 binomially distributed classes, which significantly reduces the performance of SVM compared with uniformly distributed classes. In this paper, the impact of the distribution of profiling traces on the performance of SVM is first explored in detail. We then apply the Synthetic Minority Oversampling TEchnique (SMOTE) to solve the problem caused by the binomially distributed classes. By using SMOTE, the success rate of SVM is improved in the testing phase, and SVM requires fewer power traces to recover the key. TA is selected as a comparison. In contrast to what is perceived as common knowledge in unrestricted scenarios, our results indicate that SVM with proper parameters can significantly outperform TA.
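
SMOTE itself is simple to state: each synthetic sample is a random convex combination of a minority-class point and one of its k nearest minority-class neighbors. A minimal NumPy sketch over generic feature vectors (not actual power traces):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Synthesize n_new minority-class samples by interpolating between
    a minority point and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]        # drop self (column 0)
    out = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(n)                        # random minority point
        j = nn[i, rng.integers(nn.shape[1])]       # one of its neighbors
        lam = rng.random()                         # interpolation weight
        out[t] = X_min[i] + lam * (X_min[j] - X_min[i])
    return out
```

In the attack setting this would be applied per Hamming-weight class to the rare classes (weights 0, 1, 7, 8) until the profiling set is balanced.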

19. A Semi-Vectorization Algorithm to Synthesis of Gravitational Anomaly Quantities on the Earth

Science.gov (United States)

Abdollahzadeh, M.; Eshagh, M.; Najafi Alamdari, M.

2009-04-01

The Earth's gravitational potential can be expressed by the well-known spherical harmonic expansion. The computational time of summing up this expansion is an important practical issue, which can be reduced by an efficient numerical algorithm. This paper proposes such a method for block-wise synthesis of the anomaly quantities on the Earth's surface using vectorization. Full vectorization means transforming the summations into simple matrix and vector products, which is not practical for matrices with large dimensions. Here a semi-vectorization algorithm is proposed to avoid working with large vectors and matrices: it speeds up the computations by using one loop for the summation either on degrees or on orders. The former is a good option for synthesizing the anomaly quantities on the Earth's surface considering a digital elevation model (DEM). This approach is more efficient than the two-step method, which computes the quantities on the reference ellipsoid and continues them upward to the Earth's surface. The algorithm has been coded in MATLAB; it synthesizes a global 5′ × 5′ grid (corresponding to about 9 million points) of gravity anomaly or geoid height using a geopotential model to degree 360 in 10000 seconds on an ordinary computer with 2 GB of RAM.
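
The semi-vectorization idea (keep one explicit loop, turn the other summation into a matrix-vector product) can be illustrated generically. In the sketch below a cosine basis stands in for the associated Legendre functions, so this is a toy of the loop structure only, not of geopotential synthesis:

```python
import numpy as np

def naive_synthesis(C, theta, lam):
    """Double loop over degree n and order m (the slow reference)."""
    f = np.zeros((len(theta), len(lam)))
    N = C.shape[0]
    for n in range(N):
        for m in range(N):
            # cos(n*theta) is a toy stand-in for the Legendre functions
            f += C[n, m] * np.outer(np.cos(n * theta), np.cos(m * lam))
    return f

def semi_vectorized(C, theta, lam):
    """One loop over order m; the degree sum becomes a matrix product."""
    P = np.cos(np.outer(theta, np.arange(C.shape[0])))   # (n_theta, N)
    f = np.zeros((len(theta), len(lam)))
    for m in range(C.shape[1]):
        f += np.outer(P @ C[:, m], np.cos(m * lam))      # sum over n at once
    return f
```

Both routines compute the same double sum; the second replaces the inner degree loop with `P @ C[:, m]`, which is the step that an interpreted environment like MATLAB executes as one fast vector operation.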

20. A reduce and replace strategy for suppressing vector-borne diseases: insights from a stochastic, spatial model.

Directory of Open Access Journals (Sweden)

Kenichi W Okamoto

Full Text Available Two basic strategies have been proposed for using transgenic Aedes aegypti mosquitoes to decrease dengue virus transmission: population reduction and population replacement. Here we model releases of a strain of Ae. aegypti carrying both a gene causing conditional adult female mortality and a gene blocking virus transmission into a wild population to assess whether such releases could reduce the number of competent vectors. We find this "reduce and replace" strategy can decrease the frequency of competent vectors below 50% two years after releases end. Therefore, this combined approach appears preferable to releasing a strain carrying only a female-killing gene, which is likely to merely result in temporary population suppression. However, the fixation of anti-pathogen genes in the population is unlikely. Genetic drift at small population sizes and the spatially heterogeneous nature of the population recovery after releases end prevent complete replacement of the competent vector population. Furthermore, releasing more individuals can be counter-productive in the face of immigration by wild-type mosquitoes, as greater population reduction amplifies the impact wild-type migrants have on the long-term frequency of the anti-pathogen gene. We expect the results presented here to give pause to expectations for driving an anti-pathogen construct to fixation by relying on releasing individuals carrying this two-gene construct. Nevertheless, in some dengue-endemic environments, a spatially heterogeneous decrease in competent vectors may still facilitate decreasing disease incidence.

1. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

Science.gov (United States)

Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

2016-11-23

The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
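
One widely used efficient nearest-neighbor technique in codebook design is the partial distance search, which abandons a candidate codevector as soon as its accumulated squared distance exceeds the best found so far. The abstract does not name the exact technique used, so this is an illustrative example of the class of methods it refers to:

```python
import numpy as np

def pds_nearest(codebook, x):
    """Partial distance search: exact nearest codevector to x, but each
    candidate is abandoned early once its running squared distance
    already exceeds the current best."""
    best_i, best_d = 0, float("inf")
    for i, cv in enumerate(codebook):
        d = 0.0
        for a, b in zip(cv, x):
            d += (a - b) ** 2
            if d >= best_d:          # cannot beat the current best: skip
                break
        else:                        # full distance computed and smaller
            best_i, best_d = i, d
    return best_i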

2. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

Energy Technology Data Exchange (ETDEWEB)

He, Hongxing; Fang, Hengrui [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States); Miller, Mitchell D. [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Phillips, George N. Jr [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Department of Biochemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Su, Wu-Pei, E-mail: wpsu@uh.edu [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States)

2016-07-15

An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
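
The flavor of an iterative transform method can be conveyed by the classic error-reduction loop, which alternates between imposing the measured Fourier magnitudes and a real-space support/positivity constraint. This is a generic phase-retrieval sketch, not the authors' MR-seeded algorithm (which starts from molecular-replacement phases rather than random ones):

```python
import numpy as np

def error_reduction(mags, support, n_iter=200, seed=0):
    """Alternate projections: impose measured Fourier magnitudes, then
    zero the density outside the support and clip negative values."""
    rng = np.random.default_rng(seed)
    rho = rng.random(mags.shape) * support          # random start in support
    for _ in range(n_iter):
        F = np.fft.fft2(rho)
        F = mags * np.exp(1j * np.angle(F))         # keep phases, fix |F|
        rho = np.fft.ifft2(F).real
        rho = np.where(support & (rho > 0), rho, 0.0)   # real-space constraint
    return rho

def mag_error(rho, mags):
    """Residual between the iterate's Fourier magnitudes and the data."""
    return np.linalg.norm(np.abs(np.fft.fft2(rho)) - mags)
```

Error reduction is known to be non-increasing in this residual; seeding the loop with approximate MR phases instead of noise is, in spirit, what lets the method rescue marginal molecular-replacement solutions.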

3. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

Directory of Open Access Journals (Sweden)

Jiří Fejfar

2012-01-01

Full Text Available In this paper we present a comparison of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks and competitive learning. The first algorithm is the classical k-Nearest neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, the supervised counterpart to the unsupervised Self Organizing Map (SOM). After our own former experiments with unlabelled data we moved on to utilizing data labels, which generally led to better classification accuracy. As we needed a huge data set of labelled time series (with a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts as a source of real-world time series, with which we had good experience in former studies. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. The results are compared and particular misclassifications are discussed for each algorithm. Finally the best solution is chosen and further research goals are given.
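
Of the three classifiers, LVQ is the least standard; its LVQ1 update rule (attract the winning prototype when its label matches the sample, repel it otherwise) can be sketched on generic feature vectors. The paper's actual features are volume-level time series, so this is a structural illustration only:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30, seed=0):
    """LVQ1: for each sample, move the nearest prototype toward it if
    the labels agree, away from it if they disagree."""
    rng = np.random.default_rng(seed)
    P = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winner
            step = lr if proto_labels[j] == y[i] else -lr
            P[j] += step * (X[i] - P[j])
    return P
```

Classification afterwards is simply nearest-prototype lookup, which is what makes LVQ a compact supervised sibling of SOM.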

4. Sub-Circuit Selection and Replacement Algorithms Modeled as Term Rewriting Systems

Science.gov (United States)

2008-12-16

Air Force Institute of Technology thesis AFIT/GCO/ENG/09-02, by Eric D. Simonaire.

5. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

OpenAIRE

Zong, Shengliang; Chai, Guorong; Su, Yana

2017-01-01

We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

6. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

Energy Technology Data Exchange (ETDEWEB)

1992-04-11

This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

7. Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model

International Nuclear Information System (INIS)

Hong, W.-C.

2009-01-01

Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Due to various influences, electric load forecasting reveals highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capabilities, has been successfully employed to solve nonlinear regression and time series problems. However, systematic approaches for determining an appropriate parameter combination for an SVR model are still lacking. This investigation elucidates the feasibility of applying a chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms two models applying other algorithms, the genetic algorithm (GA) and the simulated annealing algorithm (SA). Finally, it also provides a theoretical exploration of the electric load forecasting support system (ELFSS)

8. A Novel Integrated Algorithm for Wind Vector Retrieval from Conically Scanning Scatterometers

Directory of Open Access Journals (Sweden)

Xuetong Xie

2013-11-01

Full Text Available Due to the lower efficiency and the larger wind direction error of traditional algorithms, a novel integrated wind retrieval algorithm is proposed for conically scanning scatterometers. The proposed algorithm has the dual advantages of less computational cost and higher wind direction retrieval accuracy by integrating the wind speed standard deviation (WSSD algorithm and the wind direction interval retrieval (DIR algorithm. It adopts wind speed standard deviation as a criterion for searching possible wind vector solutions and retrieving a potential wind direction interval based on the change rate of the wind speed standard deviation. Moreover, a modified three-step ambiguity removal method is designed to let more wind directions be selected in the process of nudging and filtering. The performance of the new algorithm is illustrated by retrieval experiments using 300 orbits of SeaWinds/QuikSCAT L2A data (backscatter coefficients at 25 km resolution and co-located buoy data. Experimental results indicate that the new algorithm can evidently enhance the wind direction retrieval accuracy, especially in the nadir region. In comparison with the SeaWinds L2B Version 2 25 km selected wind product (retrieved wind fields, an improvement of 5.1° in wind direction retrieval can be made by the new algorithm for that region.

9. Screw Remaining Life Prediction Based on Quantum Genetic Algorithm and Support Vector Machine

Directory of Open Access Journals (Sweden)

Xiaochen Zhang

2017-01-01

Full Text Available To predict the remaining life of a ball screw, a screw remaining-life prediction method based on a quantum genetic algorithm (QGA) and support vector machine (SVM) is proposed. A screw accelerated test bench is introduced, and accelerometers are installed to monitor the performance degradation of the ball screw. Combined with wavelet packet decomposition and isometric mapping (Isomap), sensitive feature vectors are obtained and stored in a database. The sensitive feature vectors are randomly chosen from the database to constitute training samples and testing samples. Then the optimal kernel function parameter and penalty factor of the SVM are searched with the QGA. Finally, the training samples are used to train the optimized SVM, while the testing samples are adopted to test the prediction accuracy of the trained SVM, so that the screw remaining-life prediction model can be obtained. The experimental results show that the model can effectively predict screw remaining life.

10. Cost Forecasting of Substation Projects Based on Cuckoo Search Algorithm and Support Vector Machines

Directory of Open Access Journals (Sweden)

Dongxiao Niu

2018-01-01

Full Text Available Accurate prediction of substation project cost is helpful for improving investment management and sustainability, and is directly related to the economy of a substation project. Ensemble Empirical Mode Decomposition (EEMD) can decompose variables with non-stationary sequence signals into components with significant regularity and periodicity, which helps improve the accuracy of a prediction model. Adding a Gauss perturbation to the traditional Cuckoo Search (CS) algorithm can improve the searching vigor and precision of the CS algorithm; thus, the parameters and kernel functions of the Support Vector Machines (SVM) model are optimized. Comparison of the prediction results with those of other models shows that this model has higher prediction accuracy.

11. Applications of the Chaotic Quantum Genetic Algorithm with Support Vector Regression in Load Forecasting

Directory of Open Access Journals (Sweden)

Cheng-Wen Lee

2017-11-01

Full Text Available Accurate electricity forecasting is still the critical issue in many energy management fields. The applications of hybrid novel algorithms with support vector regression (SVR models to overcome the premature convergence problem and improve forecasting accuracy levels also deserve to be widely explored. This paper applies chaotic function and quantum computing concepts to address the embedded drawbacks including crossover and mutation operations of genetic algorithms. Then, this paper proposes a novel electricity load forecasting model by hybridizing chaotic function and quantum computing with GA in an SVR model (named SVRCQGA to achieve more satisfactory forecasting accuracy levels. Experimental examples demonstrate that the proposed SVRCQGA model is superior to other competitive models.

12. Thermodynamic analysis of refrigerant mixtures for possible replacements for CFCs by an algorithm compiling property data

International Nuclear Information System (INIS)

Arcaklioglu, Erol; Cavusoglu, Abdullah; Erisen, Ali

2006-01-01

In this study, we formed an algorithm to find refrigerant mixtures of equal volumetric cooling capacity (VCC) compared to CFC based refrigerants in vapor compression refrigeration systems. To achieve this aim, the point properties of the refrigerants are obtained from REFPROP where appropriate. We used replacement mixture ratios (of varying mass percentages) suggested by various authors along with our newly formed mixture ratios; in other words, we examined the effect of changing the mass percentages of the replacement refrigerants suggested in the literature on the VCC of the cooling system. Secondly, we used this algorithm to calculate the coefficient of performance (COP) of the same refrigeration system. This has enabled us to compare the COP of the suggested refrigerant mixtures and our newly formed mixture ratios with that of the conventional CFC based ones. According to our results, the R290/R600a (56/44) mixture for R12, the R32/R125/R134a (32.5/5/62.5) mixture for R22, and the R32/R125/R134a (43/5/52) mixture for R502 are appropriate and can be used as replacements

13. Replacement

Directory of Open Access Journals (Sweden)

2014-03-01

Full Text Available Fishmeal was replaced with Spirulina platensis, Chlorella vulgaris and Azolla pinnata, and the formulated diets were fed to Macrobrachium rosenbergii postlarvae to assess the enhancement of non-enzymatic antioxidants (vitamins C and E), enzymatic antioxidants (superoxide dismutase (SOD) and catalase (CAT)) and lipid peroxidation (LPx). In the present study, the groups fed the S. platensis, C. vulgaris and A. pinnata inclusion diets showed significant (P < 0.05) improvement in the levels of vitamins C and E in the hepatopancreas and muscle tissue. Among all the diets, the groups fed the feeds with 50% replacement showed the best non-enzymatic antioxidant performance compared with the control group. The groups fed the 50% fishmeal replacement (best performance) diets were taken for the enzymatic antioxidant study; SOD, CAT and LPx showed no significant increases compared with the control group. Hence, the present results revealed that the formulated feeds enhanced vitamins C and E, while the unchanged levels of the enzymatic antioxidants (SOD, CAT) and LPx revealed that these feeds are non-toxic and do not produce any stress in postlarvae. These ingredients can be used as an alternative protein source for sustainable Macrobrachium culture.

14. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

Directory of Open Access Journals (Sweden)

Guo Bao-long

2004-09-01

Full Text Available Motion estimation and compensation techniques are widely used in video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational load. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using a line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. The algorithm has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, while producing close performance in terms of motion compensation error.
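
For reference, the exhaustive baseline that fast patterns such as line and square searches prune looks like this: a full search over a ±r window minimizing the sum of absolute differences (SAD). This is the generic textbook formulation, not the paper's LSPS pattern:

```python
import numpy as np

def full_search_mv(ref, cur, top, left, block=8, radius=4):
    """Brute-force block matching: the motion vector (dy, dx) whose
    ref-frame block best matches the current-frame block at (top, left),
    by sum of absolute differences."""
    b = cur[top:top + block, left:left + block].astype(np.int64)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                      # candidate falls off the frame
            cand = ref[y:y + block, x:x + block].astype(np.int64)
            sad = np.abs(cand - b).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

The full search evaluates (2r+1)^2 = 81 positions here; a pattern search such as LSPS reaches a comparable minimum while visiting as few as 9 of them, which is where the speedup comes from.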

15. Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem

International Nuclear Information System (INIS)

Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.

2013-01-01

Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER GPU code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)

16. Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem

Energy Technology Data Exchange (ETDEWEB)

Du, X.; Liu, T.; Ji, W.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Brown, F. B. [Monte Carlo Codes Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

2013-07-01

Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER-GPU code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)

17. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

Science.gov (United States)

Bai, Chen; Han, Dongjuan

2018-04-01

MUSIC is widely used for DOA (direction-of-arrival) estimation. The triangular grid is a common array arrangement, but computing its steering vector is more complicated than for a rectangular array. In this paper, a quaternion-based algorithm is used to reduce the dimension of the steering vector and simplify the calculation.

18. Vectors

DEFF Research Database (Denmark)

Boeriis, Morten; van Leeuwen, Theo

2017-01-01

This article revisits the concept of vectors, which, in Kress and van Leeuwen’s Reading Images (2006), plays a crucial role in distinguishing between ‘narrative’, action-oriented processes and ‘conceptual’, state-oriented processes. The use of this concept in image analysis has usually focused… should be taken into account in discussing ‘reactions’, which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim’s account of vectors, these issues are outlined…

19. Electric load forecasting by seasonal recurrent SVR (support vector regression) with chaotic artificial bee colony algorithm

International Nuclear Information System (INIS)

Hong, Wei-Chiang

2011-01-01

Support vector regression (SVR), with hybrid chaotic sequences and evolutionary algorithms to determine suitable values of its three parameters, not only effectively avoids premature convergence (i.e., trapping in a local optimum), but also shows superior forecasting performance. Electric load sometimes demonstrates a seasonal (cyclic) tendency due to economic activities or the cyclic nature of climate. The application of SVR models to seasonal (cyclic) electric load forecasting has not been widely explored. In addition, the concept of recurrent neural networks (RNNs), which use past information to capture detailed patterns, can usefully be combined with an SVR model. This investigation presents an electric load forecasting model which combines the seasonal recurrent support vector regression model with the chaotic artificial bee colony algorithm (namely SRSVRCABC) to improve the forecasting performance. The proposed SRSVRCABC employs the chaotic behavior of honey bees, which performs better in function optimization, to overcome premature convergence to a local optimum. A numerical example from an existing reference is used to elucidate the forecasting performance of the proposed SRSVRCABC model. The forecasting results indicate that the proposed model yields more accurate results than the ARIMA and TF-ε-SVR-SA models. Therefore, the SRSVRCABC model is a promising alternative for electric load forecasting. -- Highlights: → Hybridizing the seasonal adjustment and the recurrent mechanism into an SVR model. → Employing a chaotic sequence to improve the premature convergence of the artificial bee colony algorithm. → Successfully providing significantly accurate monthly load demand forecasting.
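The "chaotic sequence" ingredient of such hybrids is commonly built from the logistic map; a minimal sketch follows (the choice of map, the initial value, and the scaling to a parameter range are illustrative assumptions, not details taken from the paper):

```python
def logistic_sequence(n, x0=0.37, r=4.0):
    """Generate a chaotic sequence in (0, 1) with the logistic map x <- r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def scale(xs, lo, hi):
    """Map chaotic values onto a parameter search range, e.g. SVR's C or sigma."""
    return [lo + (hi - lo) * x for x in xs]
```

The appeal over a uniform RNG is the ergodic, non-repeating coverage of the interval, which is what the abstract credits with reducing premature convergence.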

20. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

Energy Technology Data Exchange (ETDEWEB)

Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

1997-12-01

In this paper, we propose an efficient motion-adaptive three-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the seven spatio-temporally split subbands is partitioned by a quad-tree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1-4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 Kbps). (author). 13 refs., 13 figs., 4 tabs.

1. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

Science.gov (United States)

Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

2017-12-01

Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. Various kernel functions can be used, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting them. The best accuracy has been improved over the baselines of linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for larger data sets this method is not practical because it takes a lot of time.
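A minimal sketch of the idea, with a toy fitness surface standing in for cross-validated SVM accuracy (the real method would train an SVM per candidate; the population size, mutation scale, and the fitness function peaking at C = 10, gamma = 0.01 are illustrative assumptions):

```python
import random

def fitness(log_c, log_g):
    # Stand-in for cross-validated SVM accuracy; peaks at C = 10, gamma = 0.01.
    return -((log_c - 1.0) ** 2 + (log_g + 2.0) ** 2)

def genetic_search(pop_size=20, generations=40, seed=3):
    """Elitist GA over (log10 C, log10 gamma)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-3, 3), rng.uniform(-5, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            child = [v + rng.gauss(0, 0.2) for v in child]   # mutation
            children.append(tuple(child))
        pop = parents + children
    best = max(pop, key=lambda p: fitness(*p))
    return best, fitness(*best)
```

Because the top half is carried over unchanged, the best fitness never degrades between generations, mirroring the "systematic instead of random" search the abstract describes.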

2. Short-Term Wind Speed Forecasting Using Support Vector Regression Optimized by Cuckoo Optimization Algorithm

Directory of Open Access Journals (Sweden)

Jianzhou Wang

2015-01-01

Full Text Available This paper develops an effective intelligent model to forecast short-term wind speed series. A hybrid forecasting technique is proposed based on recurrence plots (RP) and optimized support vector regression (SVR). Wind, caused by the interaction of meteorological systems, is extremely unsteady and difficult to forecast. To understand the wind system, the wind speed series is analyzed using RP. Then, the SVR model is employed to forecast wind speed; the input variables are selected by RP, and two crucial parameters, the penalty factor and the gamma of the RBF kernel function, are optimized by various optimization algorithms: the genetic algorithm (GA), the particle swarm optimization algorithm (PSO), and the cuckoo optimization algorithm (COA). Finally, the optimized SVR models, COA-SVR, PSO-SVR, and GA-SVR, are evaluated based on several criteria and a hypothesis test. The experimental results show that (1) the analysis of RP reveals that wind speed has predictability on a short-term time scale, (2) the performance of the COA-SVR model is superior to that of the PSO-SVR and GA-SVR methods, especially for jumping samplings, and (3) the COA-SVR method is statistically robust in multi-step-ahead prediction and can be applied to practical wind farm applications.

3. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

Science.gov (United States)

Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

1988-01-01

A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

4. A fingerprint key binding algorithm based on vector quantization and error correction

Science.gov (United States)

Li, Liang; Wang, Qian; Lv, Ke; He, Ning

2012-04-01

In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template with a cryptographic key, so that the key is protected and accessed through fingerprint verification. In order to cope with the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only a hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
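A toy sketch of key binding via vector quantization (the two-codeword codebook, the XOR-based locking, and the stored hash check are illustrative assumptions, not the paper's exact construction; VQ plays the error-correction role by mapping noisy queries back to the enrolled codeword):

```python
import hashlib

def nearest(codebook, vec):
    """Vector quantization: index of the closest codeword."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

def enroll(codebook, template, key: bytes):
    """Bind the key to the quantized template; only a hash of the key is stored."""
    idx = nearest(codebook, template)
    pad = hashlib.sha256(str(idx).encode()).digest()[: len(key)]
    locked = bytes(k ^ p for k, p in zip(key, pad))   # key XOR codeword-derived pad
    check = hashlib.sha256(key).hexdigest()           # verification hash, not the key
    return locked, check

def release(codebook, query, locked, check):
    """Release the key only if the query quantizes to the enrolled codeword."""
    idx = nearest(codebook, query)
    pad = hashlib.sha256(str(idx).encode()).digest()[: len(locked)]
    key = bytes(c ^ p for c, p in zip(locked, pad))
    return key if hashlib.sha256(key).hexdigest() == check else None
```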

5. Pair-ν-SVR: A Novel and Efficient Pairing ν-Support Vector Regression Algorithm.

Science.gov (United States)

Hao, Pei-Yi

This paper proposes a novel and efficient pairing ν-support vector regression (pair-ν-SVR) algorithm that successfully combines the advantages of twin support vector regression (TSVR) and the classical ν-SVR algorithm. In the spirit of TSVR, the proposed pair-ν-SVR solves two quadratic programming problems (QPPs) of smaller size rather than a single larger QPP, and thus has faster learning speed than classical ν-SVR. The significant advantage of pair-ν-SVR over TSVR is the improvement in prediction speed and generalization ability achieved by introducing the concepts of the insensitive zone and the regularization term that embodies the essence of statistical learning theory. Moreover, pair-ν-SVR has the additional advantage of using the parameter ν to control the bounds on the fractions of support vectors and errors. Furthermore, the upper bound and lower bound functions of the regression model estimated by pair-ν-SVR capture well the characteristics of data distributions, thus facilitating automatic estimation of the conditional mean and predictive variance simultaneously. This may be useful in many cases, especially when the noise is heteroscedastic and depends strongly on the input values. The experimental results validate the superiority of pair-ν-SVR in both training/prediction speed and generalization ability.

6. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

Directory of Open Access Journals (Sweden)

Yukai Yao

2015-01-01

Full Text Available We propose an optimized support vector machine classifier, named PMSVM, in which system normalization and PCA are used for data preprocessing and a multilevel grid search is used for parameter optimization. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, the ROC curve, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM achieves better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
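The multilevel grid search component can be sketched as a coarse-to-fine refinement; here a one-dimensional toy objective stands in for SVM cross-validation accuracy (the objective, range, and level counts are illustrative assumptions):

```python
def multilevel_grid_search(objective, lo, hi, levels=3, points=5):
    """Evaluate a coarse grid, then successively finer grids around the best point."""
    best_x, best_f = None, float("-inf")
    for _ in range(levels):
        step = (hi - lo) / (points - 1)
        for i in range(points):
            x = lo + i * step
            f = objective(x)
            if f > best_f:
                best_x, best_f = x, f
        lo, hi = best_x - step, best_x + step   # zoom in around the winner
    return best_x, best_f
```

Compared with one dense grid, this reaches the same resolution with far fewer objective evaluations, which is where the claimed efficiency gain over plain grid search comes from.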

7. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

Science.gov (United States)

Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

2017-02-01

We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ˜60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

8. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

Energy Technology Data Exchange (ETDEWEB)

Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

2017-02-01

We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

9. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

International Nuclear Information System (INIS)

Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

2017-01-01

We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

10. Replacement method and enhanced replacement method versus the genetic algorithm approach for the selection of molecular descriptors in QSPR/QSAR theories.

Science.gov (United States)

Mercader, Andrew G; Duchowicz, Pablo R; Fernández, Francisco M; Castro, Eduardo A

2010-09-27

We compare three methods for the selection of optimal subsets of molecular descriptors from a much greater pool of such regression variables: our enhanced replacement method (ERM), the simpler replacement method (RM), and the genetic algorithm (GA). These methods avoid the impracticable full search for optimal variables in large sets of molecular descriptors. Present results for 10 different experimental databases suggest that the ERM is clearly preferable to the GA, which in turn is slightly better than the RM. However, the latter approach requires the smallest number of linear regressions and, consequently, the lowest computation time.
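The replacement method's core loop, greedily swapping one descriptor slot at a time against the pool, can be sketched generically (the score function below is a toy stand-in for the regression-quality criterion the paper would use):

```python
def replacement_method(score, pool, d, start, max_rounds=10):
    """RM sketch: repeatedly try to improve one subset slot at a time,
    replacing its descriptor with the best candidate from the pool."""
    subset = list(start)
    best = score(subset)
    for _ in range(max_rounds):
        improved = False
        for slot in range(d):
            for cand in pool:
                if cand in subset:
                    continue
                trial = subset[:]
                trial[slot] = cand
                s = score(trial)
                if s > best:
                    subset, best, improved = trial, s, True
        if not improved:          # converged: no single swap helps
            break
    return subset, best
```

Each accepted swap costs one regression, which is why RM needs far fewer regressions than a GA, at the price of a weaker search, matching the trade-off reported in the abstract.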

11. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

KAUST Repository

Buse, Gerrit

2012-06-01

The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have been only few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtimes. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup is even increasing, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

12. Evaluation of Chinese Calligraphy by Using DBSC Vectorization and ICP Algorithm

Directory of Open Access Journals (Sweden)

Mengdi Wang

2016-01-01

Full Text Available Chinese calligraphy is a charismatic ancient art form with high artistic value in Chinese culture. Virtual calligraphy learning systems have been a research hotspot in recent years. In such a system, a judging mechanism for the user's practice result is quite important. Sometimes the user's handwritten character is not that standard: its size and position are not fixed, and the whole character may even be askew, which makes its evaluation difficult. In this paper, we propose an approach using DBSC (disk B-spline curve) vectorization and the ICP (iterative closest point) algorithm, which can not only evaluate a calligraphic character without knowing what it is, but also deal with the above problems commendably. Firstly, we find promising candidate characters from the database according to angular difference relations as quickly as possible. Then we check these vectorized candidates by using the ICP algorithm based upon the skeleton, hence finding the best matching character. Finally, a comprehensive evaluation involving global (the whole character) and local (strokes) similarities is implemented, and a final composite evaluation score can be worked out.
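The ICP matching step can be sketched in a simplified translation-only form (full ICP also estimates rotation and scale; the 2-D toy points below are assumptions for illustration):

```python
def icp_translation(src, dst, iters=20):
    """Simplified ICP: estimate a 2-D translation aligning src onto dst.
    Keeps only the closest-point matching / transform-update loop."""
    pts = list(src)
    tx = ty = 0.0
    for _ in range(iters):
        # match each source point to its nearest destination point
        pairs = []
        for p in pts:
            q = min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
            pairs.append((p, q))
        # the best translation for fixed matches is the mean offset
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        pts = [(p[0] + dx, p[1] + dy) for p in pts]
        tx, ty = tx + dx, ty + dy
    return tx, ty
```

This tolerance to the query being shifted is exactly why ICP suits handwriting whose "size and position are not fixed".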

13. On efficient randomized algorithms for finding the PageRank vector

Science.gov (United States)

Gasnikov, A. V.; Dmitriev, D. Yu.

2015-03-01

Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
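The first (Markov chain Monte Carlo) idea can be sketched as estimating PageRank from the endpoints of random walks with teleportation (the graph, walk counts, and damping value below are illustrative assumptions, not parameters from the paper):

```python
import random

def pagerank_mc(graph, walks=2000, walk_len=30, damping=0.85, seed=5):
    """Estimate PageRank from the endpoints of damped random walks."""
    rng = random.Random(seed)
    nodes = list(graph)
    visits = {v: 0 for v in nodes}
    for _ in range(walks):
        v = rng.choice(nodes)
        for _ in range(walk_len):
            if rng.random() > damping or not graph[v]:
                v = rng.choice(nodes)      # teleport (also handles dangling nodes)
            else:
                v = rng.choice(graph[v])   # follow a random outgoing link
        visits[v] += 1
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}
```

No matrix-vector product over all n entries is ever formed, which is the point when n is 10^7 or more; accuracy scales with the number of walks, consistent with the abstract's ε ≫ n^{-1} regime.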

14. The Short-Term Power Load Forecasting Based on Sperm Whale Algorithm and Wavelet Least Square Support Vector Machine with DWT-IR for Feature Selection

Directory of Open Access Journals (Sweden)

Jin-peng Liu

2017-07-01

Full Text Available Short-term power load forecasting is an important basis for the operation of an integrated energy system, and the accuracy of load forecasting directly affects the economy of system operation. To improve the forecasting accuracy, this paper proposes a load forecasting system based on the wavelet least squares support vector machine and the sperm whale algorithm. Firstly, discrete wavelet transform and the inconsistency rate model (DWT-IR) are used to select the optimal features, which aims to reduce the redundancy of input vectors. Secondly, the kernel function of the least squares support vector machine (LSSVM) is replaced by a wavelet kernel function to improve the nonlinear mapping ability of LSSVM. Lastly, the parameters of W-LSSVM are optimized by the sperm whale algorithm, and the short-term load forecasting method W-LSSVM-SWA is established. Additionally, the example verification results show that the proposed model outperforms other alternative methods and has strong effectiveness and feasibility in short-term power load forecasting.

15. GPR identification of voids inside concrete based on the support vector machine algorithm

International Nuclear Information System (INIS)

Xie, Xiongyao; Li, Pan; Qin, Hui; Liu, Lanbo; Nobes, David C

2013-01-01

Voids inside reinforced concrete, which affect structural safety, are identified from ground penetrating radar (GPR) images using a completely automatic method based on the support vector machine (SVM) algorithm. The entire process can be characterized into four steps: (1) the original SVM model is built by training synthetic GPR data generated by finite difference time domain simulation and after data preprocessing, segmentation and feature extraction. (2) The classification accuracy of different kernel functions is compared with the cross-validation method and the penalty factor (c) of the SVM and the coefficient (σ²) of kernel functions are optimized by using the grid algorithm and the genetic algorithm. (3) To test the success of classification, this model is then verified and validated by applying it to another set of synthetic GPR data. The result shows a high success rate for classification. (4) This original classifier model is finally applied to a set of real GPR data to identify and classify voids. The result is less than ideal when compared with its application to synthetic data before the original model is improved. In general, this study shows that the SVM exhibits promising performance in the GPR identification of voids inside reinforced concrete. Nevertheless, the recognition of shape and distribution of voids may need further improvement. (paper)

16. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

Institute of Scientific and Technical Information of China (English)

2005-01-01

In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as its non-linearity. It is difficult to get satisfying results by using conventional linear statistical methods. Recursive feature elimination based on the support vector machine (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of the aforementioned algorithm implemented with Gaussian kernel SVMs: a genetic algorithm is used to search for a pair of optimal parameters, as a better alternative to the common practice of selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.

17. A Support Vector Machine Hydrometeor Classification Algorithm for Dual-Polarization Radar

Directory of Open Access Journals (Sweden)

Nicoletta Roberto

2017-07-01

Full Text Available An algorithm based on a support vector machine (SVM) is proposed for hydrometeor classification. The training phase is driven by the output of a fuzzy logic hydrometeor classification algorithm, i.e., the most popular approach for hydrometeor classification algorithms used for ground-based weather radar. The performance of the SVM is evaluated by resorting to a weather scenario generated by a weather model; the corresponding radar measurements are obtained by simulation, and the results of SVM classification are compared with those obtained by a fuzzy logic classifier. Results based on the weather model and simulations show a higher accuracy of the SVM classification. Objective comparison of the two classifiers applied to real radar data shows that SVM classification maps are spatially more homogeneous (the textural indices energy and homogeneity increase by 21% and 12%, respectively) and do not present non-classified data. The improvements found by the SVM classifier, even though it is applied pixel-by-pixel, can be attributed to its ability to learn from the entire hyperspace of radar measurements and to the accurate training. The reliability of results and higher computing performance make SVM attractive for some challenging tasks such as its implementation in Decision Support Systems for helping pilots to make optimal decisions about changes in the flight route caused by unexpected adverse weather.

18. Dynamic Heat Supply Prediction Using Support Vector Regression Optimized by Particle Swarm Optimization Algorithm

Directory of Open Access Journals (Sweden)

Meiping Wang

2016-01-01

Full Text Available We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method was proposed based on a support vector regression (SVR) model optimized by particle swarm optimization (PSO) algorithms. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through the correlation analysis of statistical theory. Then, the SVR model was employed to forecast dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, including the penalty factor, the gamma of the RBF kernel, and the insensitive loss function, were optimized by PSO algorithms. The optimized SVR model was compared with the basic SVR, the genetic-algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) through six groups of experimental data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.

19. A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine.

Directory of Open Access Journals (Sweden)

Fei Gao

Full Text Available For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes could not adequately address this problem due to a lack of a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and the data distribution in a changing environment, a "soft-start" approach, a data selection mechanism, and a data cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Noticeably, with the ingenious design procedure of our proposed algorithm, the computational complexity is reduced effectively. In addition, a detailed analysis is also carried out for the possible appearance of new labeled samples in the learning process. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrong semi-labeled samples, and can effectively make use of unlabeled samples to enrich the knowledge system of the classifier and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.

20. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

Directory of Open Access Journals (Sweden)

B Vinoth Kumar

2017-07-01

Full Text Available The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm, and generating it can therefore be viewed as an optimization problem. In the literature, Classical Differential Evolution (CDE) has been found to be a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed, and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, which is confirmed using a statistical hypothesis test (t-test).
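A compact sketch of DE/rand/1/bin extended so that each target vector produces k trial vectors per generation and competes against the best of them (k = 1 recovers classical CDE). The sphere function stands in for the JPEG rate-distortion objective, and every constant here is an illustrative assumption.

```python
import random

def de_multi_trial(f, dim, bounds=(0.0, 1.0), np_=20, k=3,
                   F=0.5, CR=0.9, gens=60, seed=1):
    """DE/rand/1/bin where each target generates k trial vectors per
    generation and greedy selection uses the best of the k trials."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            best_trial, best_tf = None, float("inf")
            for _ in range(k):
                a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
                jr = rng.randrange(dim)  # forced crossover index
                trial = [min(hi, max(lo,
                         pop[a][d] + F * (pop[b][d] - pop[c][d])))
                         if (rng.random() < CR or d == jr) else pop[i][d]
                         for d in range(dim)]
                tf = f(trial)
                if tf < best_tf:
                    best_trial, best_tf = trial, tf
            if best_tf <= fit[i]:         # greedy selection vs. target
                pop[i], fit[i] = best_trial, best_tf
    j = min(range(np_), key=lambda i: fit[i])
    return pop[j], fit[j]

# Sphere function as a stand-in for the quantization-table objective:
best, val = de_multi_trial(lambda x: sum(v * v for v in x), dim=5)
```

Generating k trials per target multiplies the function evaluations per generation by k, so the reported speedup is in convergence per generation, not per evaluation.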

1. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

International Nuclear Information System (INIS)

Nishiura, Daisuke; Sakaguchi, Hide

2011-01-01

Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles move freely within a given space, so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems, in contrast, achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell to which the particles belong. A list of contact candidates is then constructed by pairing the sorted particle labels. For the second problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of the two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, a scalar supercomputer, a vector supercomputer, and a graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
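The pre-conditioning idea, sorting particle labels by cell label and then pairing within each cell and half of its neighbor cells so each candidate pair is enumerated exactly once, can be sketched as follows (a serial 2-D toy, not the authors' vectorized code):

```python
from collections import defaultdict
from itertools import combinations

def contact_candidates(points, cell_size):
    """List candidate contact pairs: sort particle ids by 2-D cell label,
    then pair each particle with particles in its own cell and in half of
    the neighboring cells, so every pair appears exactly once."""
    cell_of = {i: (int(x // cell_size), int(y // cell_size))
               for i, (x, y) in enumerate(points)}
    cells = defaultdict(list)
    for i in sorted(cell_of, key=lambda i: cell_of[i]):  # sorted by cell label
        cells[cell_of[i]].append(i)
    pairs = []
    for (cx, cy), members in cells.items():
        pairs.extend(combinations(members, 2))           # within-cell pairs
        # half-shell of neighbors avoids double counting across cells:
        for dx, dy in ((1, 0), (0, 1), (1, 1), (1, -1)):
            for i in members:
                pairs.extend((i, j) for j in cells.get((cx + dx, cy + dy), ()))
    return pairs

pts = [(0.1, 0.1), (0.2, 0.2), (1.5, 0.1), (5.0, 5.0)]
pairs = contact_candidates(pts, cell_size=1.0)
```

Because the candidate list is built once up front, the subsequent force summation can walk a fixed index table instead of contending for shared memory.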

2. Forecasting systems reliability based on support vector regression with genetic algorithms

International Nuclear Information System (INIS)

Chen, K.-Y.

2007-01-01

This study applies a novel neural-network technique, support vector regression (SVR), to forecast reliability in engine systems. The aim of this study is to examine the feasibility of SVR in systems reliability prediction by comparing it with the existing neural-network approaches and the autoregressive integrated moving average (ARIMA) model. To build an effective SVR model, SVR's parameters must be set carefully. This study proposes a novel approach, known as GA-SVR, which searches for SVR's optimal parameters using real-valued genetic algorithms and then adopts the optimal parameters to construct the SVR models. Real reliability data for 40 suits of turbochargers were employed as the data set. The experimental results demonstrate that SVR outperforms the existing neural-network approaches and the traditional ARIMA models based on the normalized root mean square error and mean absolute percentage error.

3. Global restructuring of the CPM-2 transport algorithm for vector and parallel processing

International Nuclear Information System (INIS)

Vujic, J.L.; Martin, W.R.

1989-01-01

The CPM-2 code is an assembly transport code based on the collision probability (CP) method. It can in principle be applied to global reactor problems, but its excessive computational demands prevent this application. Therefore, a new transport algorithm for CPM-2 has been developed for vector-parallel architectures, which has resulted in an overall factor of 20 speedup (wall clock) on the IBM 3090-600E. This paper presents the detailed results of this effort as well as a brief description of the ongoing effort to remove some of the modeling limitations in CPM-2 that inhibit its use for global applications, such as the use of the pure CP treatment and the assumption of isotropic scattering.

4. Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere

Energy Technology Data Exchange (ETDEWEB)

Qin Yi [School of Physics, University of New South Wales (Australia)]. E-mail: yi.qin@csiro.au; Box, Michael A. [School of Physics, University of New South Wales (Australia)

2006-01-15

Green's function is a widely used approach for boundary value problems. In problems related to radiative transfer, Green's function has been found to be useful in land, ocean and atmosphere remote sensing. It is also a key element in higher order perturbation theory. This paper presents an explicit expression of the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. The full polarization state is considered, but the algorithm has been developed in such a way that it can be easily reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.

5. Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere

International Nuclear Information System (INIS)

Qin Yi; Box, Michael A.

2006-01-01

Green's function is a widely used approach for boundary value problems. In problems related to radiative transfer, Green's function has been found to be useful in land, ocean and atmosphere remote sensing. It is also a key element in higher order perturbation theory. This paper presents an explicit expression of the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. The full polarization state is considered, but the algorithm has been developed in such a way that it can be easily reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.

6. SOLAR FLARE PREDICTION USING SDO/HMI VECTOR MAGNETIC FIELD DATA WITH A MACHINE-LEARNING ALGORITHM

International Nuclear Information System (INIS)

Bobra, M. G.; Couvidat, S.

2015-01-01

We attempt to forecast M- and X-class solar flares using a machine-learning algorithm, called a support vector machine (SVM), and four years of data from the Solar Dynamics Observatory's Helioseismic and Magnetic Imager, the first instrument to continuously map the full-disk photospheric vector magnetic field from space. Most flare forecasting efforts described in the literature use either line-of-sight magnetograms or a relatively small number of ground-based vector magnetograms. This is the first time a large data set of vector magnetograms has been used to forecast solar flares. We build a catalog of flaring and non-flaring active regions sampled from a database of 2071 active regions, comprised of 1.5 million active region patches of vector magnetic field data, and characterize each active region by 25 parameters. We then train and test the machine-learning algorithm and estimate its performance using forecast verification metrics, with an emphasis on the true skill statistic (TSS). We obtain relatively high TSS scores and overall predictive abilities. We surmise that this is partly due to fine-tuning the SVM for this purpose and also to an advantageous set of features that can only be calculated from vector magnetic field data. We also apply a feature selection algorithm to determine which of our 25 features are useful for discriminating between flaring and non-flaring active regions, and conclude that only a handful are needed for good predictive abilities.
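The true skill statistic emphasized above is a simple function of the confusion-matrix counts; a minimal implementation:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = recall - false-alarm rate = TP/(TP+FN) - FP/(FP+TN).
    Ranges from -1 to 1 and, unlike accuracy, is insensitive to the
    class-imbalance ratio, which is why it suits rare-event forecasts."""
    return tp / (tp + fn) - fp / (fp + tn)

# 80 of 100 flaring regions caught, 10 of 100 quiet regions false-alarmed:
tss = true_skill_statistic(tp=80, fn=20, fp=10, tn=90)
```

A random forecast scores near 0 and a perfect one scores 1, independent of how rare flaring regions are in the sample.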

7. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

Science.gov (United States)

Zuo, Chen; Pan, Zhibin; Liang, Hao

2018-03-01

Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
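A minimal Lloyd/LBG-style sketch of the vector-quantization step (deterministic seeding, illustrative only; the paper's tree-structured VQ and pattern encoding are considerably more elaborate):

```python
import math

def train_codebook(vectors, k, iters=10):
    """Lloyd/LBG-style vector quantization: k codewords minimizing the
    within-cluster squared error, a stand-in for compressing an MPS
    pattern database so lookups touch codewords instead of raw patterns."""
    step = max(1, len(vectors) // k)
    code = [list(v) for v in vectors[::step][:k]]  # deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:                           # assignment step
            j = min(range(k), key=lambda c: math.dist(v, code[c]))
            groups[j].append(v)
        code = [[sum(col) / len(g) for col in zip(*g)] if g else code[j]
                for j, g in enumerate(groups)]      # update step
    return code

def quantize(v, code):
    """Index of the nearest codeword for pattern vector v."""
    return min(range(len(code)), key=lambda c: math.dist(v, code[c]))

vs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
      [10.0, 10.0], [9.0, 10.0], [10.0, 9.0]]
cb = train_codebook(vs, k=2)
```

Retrieval then compares a query pattern against k codewords rather than the full database, which is where the reported speedup comes from.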

8. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

Directory of Open Access Journals (Sweden)

Daqing Zhang

2015-01-01

Full Text Available The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters for the SVM and the feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and the feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among these properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it.
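One common way to encode this joint search is a chromosome holding a feature bit-mask alongside real-valued kernel parameters. The sketch below uses assumed operators (one-point crossover on the mask, averaging plus Gaussian mutation on the parameters) and a fitness surrogate standing in for cross-validated accuracy; none of it is the authors' implementation.

```python
import random

def ga_optimize(fitness, n_feat, gens=60, pop_size=30, seed=0):
    """Toy GA over chromosomes = (feature mask bits, [log C, log gamma]).
    `fitness` is maximized; here it stands in for cross-validated accuracy."""
    rng = random.Random(seed)
    def rand_chrom():
        return ([rng.randint(0, 1) for _ in range(n_feat)],
                [rng.uniform(-3, 3), rng.uniform(-3, 3)])
    pop = [rand_chrom() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = scored[:2]                                 # elitism
        while len(pop) < pop_size:
            (m1, p1), (m2, p2) = rng.sample(scored[:10], 2)  # parents
            cut = rng.randrange(n_feat)
            mask = m1[:cut] + m2[cut:]                   # one-point crossover
            if rng.random() < 0.2:
                mask[rng.randrange(n_feat)] ^= 1         # bit-flip mutation
            params = [(a + b) / 2 + rng.gauss(0, 0.1)    # blend + mutate
                      for a, b in zip(p1, p2)]
            pop.append((mask, params))
    return max(pop, key=fitness)

# Hypothetical fitness: reward a target mask and parameters near (1, -1).
TARGET_MASK = [1, 0, 1, 0, 1]
def surrogate(ch):
    mask, params = ch
    return (sum(m == t for m, t in zip(mask, TARGET_MASK))
            - (params[0] - 1.0) ** 2 - (params[1] + 1.0) ** 2)

best_mask, best_params = ga_optimize(surrogate, n_feat=5)
```

Evolving the mask and the kernel parameters in one chromosome is what lets the two coupled problems be optimized simultaneously, as the abstract argues.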

9. Feature Selection and Parameter Optimization of Support Vector Machines Based on Modified Artificial Fish Swarm Algorithms

Directory of Open Access Journals (Sweden)

Kuan-Cheng Lin

2015-01-01

Full Text Available Rapid advances in information and communication technology have made ubiquitous computing and the Internet of Things popular and practicable. These applications create enormous volumes of data, which are available for analysis and classification as an aid to decision-making. Among the classification methods used to deal with big data, feature selection has proven particularly effective. One common approach involves searching for a subset of the features that is the most relevant to the topic or represents the most accurate description of the dataset. Unfortunately, searching for this kind of subset is a combinatorial problem that can be very time consuming. Metaheuristic algorithms are commonly used to facilitate the selection of features. The artificial fish swarm algorithm (AFSA) employs the intelligence underlying fish swarming behavior as a means of solving combinatorial optimization problems. AFSA has proven highly successful in a diversity of applications; however, shortcomings remain, such as the likelihood of falling into a local optimum and a lack of diversity. This study proposes a modified AFSA (MAFSA) to improve feature selection and parameter optimization for support vector machine classifiers. Experimental results demonstrate the superiority of MAFSA in classification accuracy using subsets with fewer features for given UCI datasets, compared to the original AFSA.

10. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

Directory of Open Access Journals (Sweden)

Xiaoyi Zhou

2018-01-01

Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, maintaining the security of digital products on the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual in the first place of each generation goes directly into the next generation, and the best individual in the second position participates in the crossover and the mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)) on the watermarked image, the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence it is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.

11. PSCAD modeling of a two-level space vector pulse width modulation algorithm for power electronics education

Directory of Open Access Journals (Sweden)

Ahmet Mete Vural

2016-09-01

Full Text Available This paper presents the design details of a two-level space vector pulse width modulation algorithm in PSCAD that is able to generate pulses for three-phase two-level DC/AC converters with two different switching patterns. The presented FORTRAN code is generic and can be easily modified to implement many other kinds of space vector modulation strategies. The code is also editable for hardware programming. The new component is tested and verified by comparing its output of six gating signals with those of a similar component in the MATLAB library. Moreover, the component is used to generate digital signals for closed-loop control of a STATCOM for reactive power compensation in PSCAD. This add-on can be an effective tool to give students a better understanding of the space vector modulation algorithm for different control tasks in the power electronics area, and can motivate them to learn.

12. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

Science.gov (United States)

Perlee, Caroline J.; Casasent, David P.

1990-09-01

Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

13. QSPR studies for predicting polarity parameter of organic compounds in methanol using support vector machine and enhanced replacement method.

Science.gov (United States)

2016-12-01

In the present work, the enhanced replacement method (ERM) and support vector machine (SVM) were used for quantitative structure-property relationship (QSPR) studies of the polarity parameter (p) of various organic compounds in methanol in reversed-phase liquid chromatography, based on molecular descriptors calculated from the optimized structures. Diverse kinds of molecular descriptors were calculated to encode the molecular structures of the compounds, such as geometric, thermodynamic, electrostatic, and quantum mechanical descriptors. The variable selection method ERM was employed to select an optimum subset of descriptors. The five descriptors selected using ERM were used as inputs of the SVM to predict the polarity parameter of organic compounds in methanol. The coefficients of determination, r², between experimental and predicted polarity parameters for the prediction set by ERM and SVM were 0.952 and 0.982, respectively. These acceptable results indicate that ERM is a very effective method for variable selection, and that the predictive ability of the SVM model is superior to that obtained by ERM alone. The obtained results demonstrate that SVM can be used as a powerful alternative modeling tool for QSPR studies.
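The replacement-method family works by repeatedly swapping one descriptor in the current subset for whichever candidate most reduces the model error; below is a skeleton with an abstract error function (an illustration of the basic idea, not the published ERM variant, which adds further rules to escape local minima).

```python
def replacement_method(error, n_desc, subset_size, passes=10):
    """Skeleton of a replacement method for descriptor selection: start
    from an arbitrary subset and repeatedly swap each position for the
    candidate descriptor that lowers `error`, until a full pass over all
    positions yields no improvement."""
    subset = list(range(subset_size))          # arbitrary initial subset
    best = error(subset)
    for _ in range(passes):
        improved = False
        for pos in range(subset_size):
            for cand in range(n_desc):
                if cand in subset:
                    continue
                trial = subset[:pos] + [cand] + subset[pos + 1:]
                e = error(trial)
                if e < best:                   # keep the improving swap
                    subset, best, improved = trial, e, True
        if not improved:
            break
    return sorted(subset), best

# Hypothetical error: count how many of an assumed "true" descriptor set
# {2, 5, 7} are missing from the candidate subset (real ERM would use the
# regression error of a model fitted on the subset).
TRUE = {2, 5, 7}
subset, best = replacement_method(lambda s: len(TRUE - set(s)),
                                  n_desc=10, subset_size=3)
```

In the real QSPR setting, `error` would refit a linear model on the trial subset and return its standard deviation on the training set.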

14. The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

Directory of Open Access Journals (Sweden)

Wu Qidi

2014-01-01

Full Text Available Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the restoration results, we propose two schemes, for dictionary learning and sparse coding, respectively. In the dictionary learning part, we measure the similarity between patches from the degraded image by constructing a Shearlet feature vector. We then classify the patches into different classes by similarity and train a cluster dictionary for each class; by cascading these we obtain the universal dictionary. In the sparse coding part, we propose a novel optimization objective function with a coding residual term, which can suppress the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter for the optimization under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in the nonlocal patches and of the coding residual suppression, the proposed method shows an advantage in both visual perception and PSNR compared to conventional methods.

15. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

Directory of Open Access Journals (Sweden)

Utku Kose

2016-03-01

Full Text Available The definition, diagnosis, and classification of diabetes mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as scientists, have done many studies on this subject. One of the most important research interests in this subject is computer-supported decision systems for diagnosing diabetes. In such systems, artificial intelligence techniques are often used for disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was performed on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of artificial-intelligence-based diabetes diagnosis and contributes to the related literature on diagnosis processes.

16. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

Directory of Open Access Journals (Sweden)

Chih-Feng Chao

2015-01-01

Full Text Available The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, the smoothness parameter, and the Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because the SVM together with feature selection is not suitable for application to multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI) machine learning repository are used; additionally, firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with that of the original LIBSVM method associated with the grid search method and with the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification tasks requiring maximum accuracy.

17. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

Science.gov (United States)

Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

2014-01-01

To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement, we searched the PubMed, Chinese National Knowledge Infrastructure, and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC, and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than in the low-dose range. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.
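The two evaluation metrics used here are straightforward to compute; a minimal sketch with hypothetical doses in mg/day:

```python
def dosing_metrics(predicted, observed):
    """Mean absolute error (mg/day) and the share of patients whose
    predicted dose falls within ±20% of the observed therapeutic dose
    ("percentage within 20%")."""
    n = len(observed)
    mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
    within = sum(abs(p - o) <= 0.2 * o
                 for p, o in zip(predicted, observed)) / n
    return mae, within

# Hypothetical predicted vs. observed stable doses for three patients:
mae, within = dosing_metrics(predicted=[3.0, 2.0, 5.0],
                             observed=[3.0, 2.5, 4.0])
```

Note that "within 20%" is measured relative to the observed dose, so the same absolute error counts differently for low-dose and ideal-dose patients, which is why the study stratifies by dose range.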

18. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

Science.gov (United States)

Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

2014-01-01

Objective: To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods: We searched the PubMed, Chinese National Knowledge Infrastructure, and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. Results: A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC, and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than in the low-dose range. Conclusions: All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385

19. Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm

Directory of Open Access Journals (Sweden)

Yan Hong Chen

2016-01-01

Full Text Available This paper proposes a new electric load forecasting model that hybridizes fuzzy time series (FTS) and the global harmony search algorithm (GHSA) with least squares support vector machines (LSSVM), namely the GHSA-FTS-LSSVM model. Firstly, the fuzzy c-means (FCM) clustering algorithm is used to calculate the clustering center of each cluster. Secondly, the LSSVM is applied to model the resultant series and is optimized by GHSA. Finally, a real-world example is adopted to test the performance of the proposed model. In this investigation, the proposed model is verified using experimental datasets from the Guangdong Province Industrial Development Database, and the results are compared against the autoregressive integrated moving average (ARIMA) model and other algorithms hybridized with LSSVM, including the genetic algorithm (GA), particle swarm optimization (PSO), harmony search, and so on. The forecasting results indicate that the proposed GHSA-FTS-LSSVM model generates more accurate predictions.

20. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

International Nuclear Information System (INIS)

Zhao Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang Yao

2012-01-01

Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by manual beam placement by the physicians. The method can be further extended to multileaf collimator (MLC) blocking by optimizing the posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage, as measured by the volume receiving >95% of the prescription dose (V95%), of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002), and the V95% for the tumor bed with a 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using an SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine-treated patients, without any relevant degradation of dose homogeneity and coverage in general.

1. Phytoplankton global mapping from space with a support vector machine algorithm

Science.gov (United States)

de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.

2014-11-01

In recent years great progress has been made in the global mapping of phytoplankton from space. Two main trends have emerged: the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size classes (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration, and other environmental parameters such as sea surface temperature and wind speed. Remote sensing pixels are labeled using coincident in-situ pigment data from the GeP&CO, NOMAD, and MAREDAT datasets, covering various oceanographic environments. The recognition is made with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially suited to small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied to daily remote sensing datasets (SeaWiFS, MODISA), and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a, and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with knowledge of phytoplankton distribution; (3) classification on MODISA data seems to perform better than on SeaWiFS data.
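The 10-fold cross-validation underlying the evaluation above can be sketched as a simple index partition (an illustrative helper, not the authors' code):

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k contiguous folds for
    cross-validation; fold sizes differ by at most one. Each fold serves
    once as the test set while the remaining k-1 folds form the train set."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(25, k=10)
```

For labeled-pixel data one would typically shuffle (or stratify by class) before partitioning, so that each fold reflects the overall class balance.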

2. Predicting Solar Flares Using SDO /HMI Vector Magnetic Data Products and the Random Forest Algorithm

Energy Technology Data Exchange (ETDEWEB)

Liu, Chang; Deng, Na; Wang, Haimin [Space Weather Research Laboratory, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States); Wang, Jason T. L., E-mail: chang.liu@njit.edu, E-mail: na.deng@njit.edu, E-mail: haimin.wang@njit.edu, E-mail: jason.t.wang@njit.edu [Department of Computer Science, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States)

2017-07-10

Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interest. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of the flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
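
The training and evaluation scheme above can be sketched with scikit-learn; the features below merely stand in for SHARP parameters, and all sizes and settings are invented for illustration.

```python
# Minimal sketch of the forecasting setup: a random forest classifying
# regions into four flare classes (B, C, M, X encoded as 0..3), scored with
# 10-fold cross-validation. Synthetic features, illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 13))            # 13 SHARP-like parameters
y = rng.integers(0, 4, size=400)          # four flare classes

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=10)  # 10-fold cross-validation

rf.fit(X, y)                               # importances mirror the ranking step
top3 = np.argsort(rf.feature_importances_)[::-1][:3]
```

The `feature_importances_` ranking plays the role of the parameter-importance analysis the abstract mentions (vertical current, current helicity, flux near the polarity inversion line).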

3. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

Science.gov (United States)

Yu, Fei; Hui, Mei; Zhao, Yue-jin

2009-08-01

An image block matching algorithm based on the motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the relative motion among frames of dynamic image sequences by digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of the vectors in the transverse and vertical directions in the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate the block matching error, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly from the image; the shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block matching search. An image processing system based on a DSP was used to evaluate the algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a definition of 720×576 pixels was chosen as the video input. Experimental results show that the algorithm can be performed on the real-time processing system and achieves accurate matching precision.
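
The correlation search underlying such methods can be illustrated with a plain exhaustive block-matching step (sum of absolute differences). The oblique projection and weighted least-squares rotation fit described above are omitted; block size and search range here are arbitrary choices.

```python
# Exhaustive block matching by sum of absolute differences (SAD): find the
# displacement of a block of the current frame within the reference frame.
import numpy as np

def match_block(ref, cur, top, left, size=16, search=8):
    """Return the (dy, dx) offset in ref that best matches the cur block."""
    block = cur[top:top+size, left:left+size].astype(np.int64)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue  # candidate window falls outside the reference frame
            sad = np.abs(ref[y:y+size, x:x+size].astype(np.int64) - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(64, 64))
shaken = np.roll(frame, shift=(3, -2), axis=(0, 1))  # simulated global shift
motion = match_block(frame, shaken, 24, 24)          # recovers the shift
```

A stabilizer would gather such vectors for many blocks and fit the global rotation and translation through them, as the abstract describes.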

4. Intra-operative Vector Flow Imaging Using Ultrasound of the Ascending Aorta among 40 Patients with Normal, Stenotic and Replaced Aortic Valves

DEFF Research Database (Denmark)

Hansen, Kristoffer Lindskov; Møller-Sørensen, Hasse; Kjaergaard, Jesper

2016-01-01

Stenosis of the aortic valve gives rise to more complex blood flow with increased velocities. The angle-independent vector flow ultrasound technique transverse oscillation was employed intra-operatively on the ascending aorta of (I) 20 patients with a healthy aortic valve and (II) 20 patients with aortic stenosis before (IIa) and after (IIb) valve replacement. The results indicate that aortic stenosis increased flow complexity (p < 0.0001), induced systolic backflow (p < 0.003) and reduced systolic jet width (p < 0.0001). After valve replacement, the systolic backflow and jet width were normalized, indicating that valve replacement corrects some of these changes. Transverse oscillation may be useful for assessment of aortic stenosis and optimization of valve surgery. (E-mail: lindskov@gmail.com) © 2016 World Federation for Ultrasound in Medicine & Biology.

5. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

National Research Council Canada - National Science Library

Qi, Yuan

2000-01-01

In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

6. Soft sensor development and optimization of the commercial petrochemical plant integrating support vector regression and genetic algorithm

Directory of Open Access Journals (Sweden)

S.K. Lahiri

2009-09-01

Full Text Available Soft sensors have been widely used in industrial process control to improve product quality and assure safety in production. The core of a soft sensor is the construction of a soft sensing model. This paper introduces support vector regression (SVR), a powerful machine learning method based on statistical learning theory (SLT), into soft sensor modeling and proposes a new soft sensing modeling method based on SVR. It presents an artificial-intelligence-based hybrid soft sensor modeling and optimization strategy, namely support vector regression-genetic algorithm (SVR-GA), for modeling and optimization of the mono ethylene glycol (MEG) quality variable in a commercial glycol plant. In the SVR-GA approach, a support vector regression model is constructed to correlate the process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using a genetic algorithm with a view to maximizing the process performance. The SVR-GA is a new strategy for soft sensor modeling and optimization. Its major advantage is that modeling and optimization can be conducted exclusively from historic process data, so that detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.) is not required. Using the SVR-GA strategy, a number of sets of optimized operating conditions were found. The optimized solutions, when verified in an actual plant, resulted in a significant improvement in quality.
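
The SVR-GA idea can be sketched in a few lines: fit an SVR on historical process data, then let a small genetic algorithm search the input space for operating conditions that maximize the model's predicted quality variable. The "process" below and all GA settings are invented for illustration, not taken from the plant study.

```python
# Hedged sketch of SVR-GA: SVR as the soft-sensor model, a tiny GA
# (selection + Gaussian mutation) searching for optimal operating inputs.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(200, 2))     # two operating variables
y = -(X[:, 0]**2 + X[:, 1]**2) + 0.05 * rng.normal(size=200)  # quality proxy

model = SVR(kernel="rbf", C=10.0).fit(X, y)   # soft-sensor model

pop = rng.uniform(-2, 2, size=(40, 2))    # initial population of conditions
for _ in range(60):
    fitness = model.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]                   # selection
    children = parents + 0.1 * rng.normal(size=parents.shape)  # mutation
    pop = np.vstack([parents, np.clip(children, -2, 2)])
best = pop[np.argmax(model.predict(pop))]  # optimized operating point
```

The key property, as the abstract stresses, is that nothing here requires process phenomenology: both the model and the optimizer consume only historical input/output data.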

7. Replacing a native Wolbachia with a novel strain results in an increase in endosymbiont load and resistance to dengue virus in a mosquito vector.

Directory of Open Access Journals (Sweden)

Guowu Bian

Full Text Available Wolbachia is a maternally transmitted endosymbiotic bacterium that is estimated to infect up to 65% of insect species. The ability of Wolbachia to both induce pathogen interference and spread into mosquito vector populations makes it possible to develop Wolbachia as a biological control agent for vector-borne disease control. Although Wolbachia induces resistance to dengue virus (DENV), filarial worms, and Plasmodium in mosquitoes, species like Aedes polynesiensis and Aedes albopictus, which carry native Wolbachia infections, are able to transmit dengue and filariasis. In a previous study, the native wPolA in Ae. polynesiensis was replaced with wAlbB from Ae. albopictus, resulting in the generation of the transinfected "MTB" strain with low susceptibility to filarial worms. In this study, we compare the dynamics of DENV serotype 2 (DENV-2) within the wild-type "APM" strain and the MTB strain of Ae. polynesiensis by measuring viral infection in the mosquito whole body, midgut, head, and saliva at different time points post infection. The results show that wAlbB can induce strong resistance to DENV-2 in the MTB mosquito. Evidence also supports that this resistance is related to a dramatic increase in Wolbachia density in the MTB's somatic tissues, including the midgut and salivary gland. Our results suggest that replacement of a native Wolbachia with a novel infection could serve as a strategy for developing a Wolbachia-based approach to target naturally infected insects for vector-borne disease control.

8. 3D magnetization vector inversion based on fuzzy clustering: inversion algorithm, uncertainty analysis, and application to geology differentiation

Science.gov (United States)

Sun, J.; Li, Y.

2017-12-01

Magnetic data contain important information about the subsurface rocks that were magnetized in the geological history, which provides an important avenue to the study of the crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large-scale crustal studies for several decades. However, interpreting magnetic data has often been complicated by the presence of remanent magnetizations with unknown magnetization directions. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with the fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes applied to each geological unit, and therefore can potentially be used for the purpose of differentiating various geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geological differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities of extracting useful information from magnetic data affected by remanence. We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to
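
The fuzzy C-means component that the inversion couples with the Tikhonov scheme can be shown in its bare form: below it clusters synthetic 3-component magnetization vectors around two preferred directions. Cluster count, fuzzifier m and the toy data are illustrative only.

```python
# Bare fuzzy C-means: alternate membership updates and weighted-mean center
# updates until the centers settle on the dominant directions.
import numpy as np

def fcm(X, c=2, m=2.0, iters=50):
    # deterministic init: spread initial centers across the data
    centers = X[np.linspace(0, len(X) - 1, c, dtype=int)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))   # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([1, 0, 0], 0.1, size=(50, 3)),   # direction 1
               rng.normal([0, 0, 1], 0.1, size=(50, 3))])  # direction 2
u, centers = fcm(X)
```

In the inversion described above, a term of this kind pulls each cell's estimated magnetization vector toward the nearest cluster direction while the data-misfit term is honored.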

9. Fault Diagnosis of Plunger Pump in Truck Crane Based on Relevance Vector Machine with Particle Swarm Optimization Algorithm

Directory of Open Access Journals (Sweden)

Wenliao Du

2013-01-01

Full Text Available Promptly and accurately dealing with equipment breakdowns is very important for enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with the particle swarm optimization (PSO) algorithm, is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models, namely the back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that the PSO-RVM is superior to the first three classical models and has performance comparable to the PSO-SVM, with the corresponding diagnostic accuracies reaching as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far fewer than that of support vectors, the former being about 1/12-1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.

10. DOA and Polarization Estimation Using an Electromagnetic Vector Sensor Uniform Circular Array Based on the ESPRIT Algorithm.

Science.gov (United States)

Wu, Na; Qu, Zhiyu; Si, Weijian; Jiao, Shuhong

2016-12-13

In array signal processing systems, the direction of arrival (DOA) and polarization of signals based on uniform linear or rectangular sensor arrays are generally obtained by estimation of signal parameters via rotational invariance techniques (ESPRIT). However, since the ESPRIT algorithm relies on the rotational invariance structure of the received data, it cannot be applied to electromagnetic vector sensor arrays (EVSAs) featuring uniform circular patterns. To overcome this limitation, a fourth-order cumulant-based ESPRIT algorithm is proposed in this paper for joint estimation of DOA and polarization based on a uniform circular EVSA. The proposed algorithm utilizes the fourth-order cumulant to obtain a virtual extended array of a uniform circular EVSA, from which the pairs of rotation-invariant sub-arrays are obtained. The ESPRIT algorithm and parameter pair matching are then utilized to estimate the DOA and polarization of the incident signals. The closed-form parameter estimation algorithm can effectively reduce the computational complexity of the joint estimation, which has been demonstrated by numerical simulations.
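
As background for the rotational-invariance idea the record builds on, here is textbook ESPRIT for a uniform linear array with half-wavelength spacing, estimating two DOAs. The circular-EMVS extension via fourth-order cumulants is not reproduced; the scenario numbers are synthetic.

```python
# Textbook ESPRIT on a ULA: the two maximally overlapping sub-arrays are
# related by a rotation Phi whose eigenvalue phases encode the DOAs.
import numpy as np

rng = np.random.default_rng(5)
M, N, d = 8, 400, 0.5                       # sensors, snapshots, spacing/wavelength
true_deg = np.array([-20.0, 25.0])
A = np.exp(2j * np.pi * d * np.arange(M)[:, None]
           * np.sin(np.radians(true_deg)))  # array steering matrix
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N                      # sample covariance
_, vecs = np.linalg.eigh(R)
Es = vecs[:, -2:]                           # signal subspace (two sources)
Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]      # rotational invariance relation
est_deg = np.degrees(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * d)))
```

The cumulant-based method in the record constructs a virtual array on which exactly this sub-array relation holds, which a physical uniform circular EVSA lacks.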

11. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for a Thin Solenoid with Uniform Current Density

Energy Technology Data Exchange (ETDEWEB)

Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-08-07

A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential Aθ is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since these derivations are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing Bz and Aθ become infinite in this region of parameter space, fields for points with the axial coordinate z outside the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties that the textbook solutions have for points near the axis, which arise from explicit factors of 1/r or 1/r² in some of the expressions.
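
On the axis itself the thin-solenoid field reduces to a closed form that needs no elliptic integrals, which makes a convenient sanity check for any such implementation. This is the standard textbook result, not code from the note.

```python
# On-axis field of a radially thin solenoid (current sheet of radius R,
# length L, n*I ampere-turns per unit length):
#   Bz(z) = (mu0*n*I/2) * [ (z+L/2)/sqrt((z+L/2)^2 + R^2)
#                           - (z-L/2)/sqrt((z-L/2)^2 + R^2) ]
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def bz_on_axis(z, R, L, nI):
    """Axial field Bz at axial position z (SI units throughout)."""
    zp, zm = z + L / 2, z - L / 2
    return (MU0 * nI / 2) * (zp / math.hypot(zp, R) - zm / math.hypot(zm, R))

# long-solenoid limit: deep inside, Bz approaches mu0*n*I
ratio = bz_on_axis(0.0, R=0.01, L=10.0, nI=1000.0) / (MU0 * 1000.0)
```

Off-axis values from the elliptic-integral algorithm should converge to this expression as r goes to 0, which is exactly the regime where the 1/r factors make naive formulas fragile.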

12. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

Directory of Open Access Journals (Sweden)

Yu-Fei Gao

2017-04-01

Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

13. Annual Electric Load Forecasting by a Least Squares Support Vector Machine with a Fruit Fly Optimization Algorithm

Directory of Open Access Journals (Sweden)

Bao Wang

2012-11-01

Full Text Available The accuracy of annual electric load forecasting plays an important role in the economic and social benefits of electric power systems. The least squares support vector machine (LSSVM) has been proven to offer strong potential in forecasting issues, particularly by employing an appropriate meta-heuristic algorithm to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and reaching the global optimal solution slowly. As a novel meta-heuristic and evolutionary algorithm, the fruit fly optimization algorithm (FOA) has the advantages of being easy to understand and fast convergence to the global optimal solution. Therefore, to improve the forecasting performance, this paper proposes an LSSVM-based annual electric load forecasting model that uses FOA to automatically determine the appropriate values of the two parameters for the LSSVM model. Taking the annual electricity consumption of China as an instance, the computational result shows that the LSSVM combined with FOA (LSSVM-FOA) outperforms other alternative methods, namely the single LSSVM, the LSSVM combined with the coupled simulated annealing algorithm (LSSVM-CSA), the generalized regression neural network (GRNN) and the regression model.
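
The two-parameter tuning loop can be sketched generically. The snippet below uses a simplified fruit-fly-style random search and scikit-learn's KernelRidge as a stand-in for the LSSVM; the toy series and all settings are invented and do not reproduce the paper's model or data.

```python
# Illustrative sketch only: random smell-guided search over the two kernel
# model parameters (regularization alpha, kernel width gamma), scored by
# holdout error on a toy annual-load series.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(6)
t = np.arange(30, dtype=float)
load = 100 + 5 * t + 10 * np.sin(t / 3) + rng.normal(0, 1, 30)  # toy series
X = t.reshape(-1, 1)

def cv_error(log_alpha, log_gamma):
    model = KernelRidge(alpha=10**log_alpha, kernel="rbf", gamma=10**log_gamma)
    model.fit(X[:24], load[:24])              # train on the first 24 "years"
    return np.mean((model.predict(X[24:]) - load[24:]) ** 2)

best = np.array([0.0, 0.0])                   # log10(alpha), log10(gamma)
best_err = cv_error(*best)
for _ in range(200):                          # candidate "flies"
    cand = best + rng.normal(0, 0.5, size=2)  # random flight around the best
    err = cv_error(*cand)
    if err < best_err:
        best, best_err = cand, err
```

The real FOA maintains a swarm and a smell-concentration function; the point here is only the structure of the search: evaluate forecasting error for candidate parameter pairs, keep the best.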

14. Matrix-Vector Algorithms of Local Posteriori Inference in Algebraic Bayesian Networks on Quanta Propositions

Directory of Open Access Journals (Sweden)

A. A. Zolotin

2015-07-01

Full Text Available Posteriori inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the task of describing local posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the use of the tensor product of matrices, the Kronecker degree and the Hadamard product. Matrix equations for calculating posteriori probability vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Similar equations of the same type have already been discussed within the confines of the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in knowledge patterns on the ideals of conjuncts. During the synthesis and development of matrix-vector equations on quanta proposition probability vectors, a number of earlier results concerning normalizing factors in posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence (deterministic, stochastic and inaccurate), combined with scalar and interval estimation of the probability truth of propositional formulas in the knowledge patterns. Linear programming problems are formed. Their solution gives the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This sort of description of posteriori inference makes it possible to extend the set of knowledge pattern types that can be used in local and global posteriori inference, as well as to simplify complex software implementation by use of existing third-party libraries, effectively supporting submission and processing of matrices and vectors when
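
The two matrix primitives the equations are built from, the tensor (Kronecker) product and the Hadamard product, are available directly in NumPy. Below is a minimal, loosely analogous illustration on probability vectors of two independent propositions; it does not reproduce the paper's equations.

```python
# Kronecker product combines per-proposition probability vectors into a
# joint vector over quanta; a Hadamard product with an evidence vector and
# renormalization gives a posterior.
import numpy as np

p_a = np.array([0.7, 0.3])        # P(a), P(not a)
p_b = np.array([0.6, 0.4])        # P(b), P(not b)
joint = np.kron(p_a, p_b)         # probabilities over the four quanta

evidence = np.array([1.0, 1.0, 1.0, 0.0])   # rule out one quantum
posterior = joint * evidence                 # Hadamard product
posterior /= posterior.sum()                 # normalizing factor
```

The normalizing division at the end plays the role of the normalizing factors the abstract mentions adapting from earlier results.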

15. Icing Forecasting of High Voltage Transmission Line Using Weighted Least Square Support Vector Machine with Fireworks Algorithm for Feature Selection

Directory of Open Access Journals (Sweden)

Tiannan Ma

2016-12-01

Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and the weighted least squares support vector machine (W-LSSVM). The fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the W-LSSVM model is used to train and test the historical data set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has higher prediction accuracy and may be a promising alternative for icing thickness forecasting.

16. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

Science.gov (United States)

Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

2018-05-01

A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is uniquely introduced to reduce the filter states and simplify the propagation processes. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and a GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

17. Consistences for introducing more vector potentials in the same group, by BRST algorithm

International Nuclear Information System (INIS)

Doria, R.; Carvalho, F.A.R. de

1989-01-01

The BRST formalism for the quantum formulation of gauge theory is analysed and applied to extended models. The quantum effective gauge Lagrangian is established, invariant under s and its anti-BRST counterpart, for a system with vector potentials belonging to a single Abelian gauge group. The BRST charge associated with the system is calculated. (M.C.K.)

18. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

Science.gov (United States)

W. Hasan, W. Z.

2018-01-01

The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

19. Prediction of Antimicrobial Peptides Based on Sequence Alignment and Support Vector Machine-Pairwise Algorithm Utilizing LZ-Complexity

Directory of Open Access Journals (Sweden)

Xin Yi Ng

2015-01-01

Full Text Available This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs), which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMP design process, as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMP prediction is needed to resolve this problem. In this study, a new integrated algorithm is introduced to predict AMPs by combining sequence alignment with a support vector machine (SVM) LZ-complexity pairwise algorithm. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in the jackknife test and 87.59% in the independent test, while the sensitivities obtained for the jackknife test and independent test are 88.74% and 78.70%, respectively, when only the sequences that have less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity.
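
The LZ-complexity component can be made concrete. Below is the classic Kaspar-Schuster counting scheme for LZ76 complexity of a symbol string; the SVM-pairwise similarity measure built on top of it in the paper is not shown.

```python
# LZ76 complexity via the Kaspar-Schuster scanning algorithm: counts the
# number of distinct phrases encountered while parsing the sequence.
def lz_complexity(s):
    i, k, l = 0, 1, 1       # comparison origin, match length, phrase start
    k_max, c, n = 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1                      # extend the current match
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                  # no earlier copy found: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

print(lz_complexity("0000000000"), lz_complexity("0101010101"))
```

A constant string parses into very few phrases while an irregular one parses into many, which is what lets the complexity serve as a sequence feature.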

20. A Hybrid Seasonal Mechanism with a Chaotic Cuckoo Search Algorithm with a Support Vector Regression Model for Electric Load Forecasting

Directory of Open Access Journals (Sweden)

Yongquan Dong

2018-04-01

Full Text Available Providing accurate electric load forecasting results plays a crucial role in the daily energy management of the power supply system. Due to its superior forecasting performance, hybridizing the support vector regression (SVR) model with evolutionary algorithms has received attention and deserves to continue being explored widely. The cuckoo search (CS) algorithm has the potential to contribute more satisfactory electric load forecasting results. However, the original CS algorithm suffers from inherent drawbacks, such as parameters that require accurate setting, loss of population diversity, and easy trapping in local optima (i.e., premature convergence). Therefore, proposing some critical improvement mechanisms and employing an improved CS algorithm to determine suitable parameter combinations for an SVR model is essential. This paper proposes the SVR with chaotic cuckoo search (SVRCCS) model, based on using a tent chaotic mapping function to enrich the cuckoo search space and diversify the population to avoid trapping in local optima. In addition, to deal with the cyclic nature of electric loads, a seasonal mechanism is combined with the SVRCCS model, yielding a seasonal SVR with chaotic cuckoo search (SSVRCCS) model, to produce more accurate forecasting performance. The numerical results, tested using datasets from the National Electricity Market (NEM), Queensland, Australia and the New York Independent System Operator (NYISO), NY, USA, show that the proposed SSVRCCS model outperforms other alternative models.
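
The tent chaotic map used to diversify the search population is simple to state. One caveat worth noting: the exact mu = 2 tent map collapses to 0 in binary floating point, so a value just below 2 is used here; 1.99 is a common practical choice, not necessarily the paper's.

```python
# Tent map iterates spread over (0, 1) instead of clustering around a
# point, which is why chaotic maps are used to seed and perturb candidate
# solutions in methods like SVRCCS.
import numpy as np

def tent_map(x0, n, mu=1.99):
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append(mu * x if x < 0.5 else mu * (1.0 - x))
    return np.array(xs)

seq = tent_map(0.321, 500)   # one chaotic sequence for generating candidates
```

Each iterate can be rescaled to the SVR parameter ranges to propose diverse candidate combinations.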

1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

Directory of Open Access Journals (Sweden)

A H Sabry

Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

2. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

Energy Technology Data Exchange (ETDEWEB)

Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-08-24

A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since these derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r² in some of the expressions.

3. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

Science.gov (United States)

Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

2018-01-01

The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
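The inner linear solve of vector fitting — determining residues and a constant term once a set of poles is fixed — is an ordinary least-squares problem; the pole-relocation iteration wrapped around it is omitted here. A minimal sketch with assumed names:

```python
import numpy as np

def fit_residues(s, f, poles):
    # With poles p_k held fixed, solve f(s) ~ sum_k c_k/(s - p_k) + d
    # for the residues c_k and constant d by linear least squares.
    A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coef[:-1], coef[-1]
```

On synthetic data built from known residues and poles, the sub-step recovers them exactly, which is why VF iterations converge quickly once the poles are near their final positions.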

4. A Classification Detection Algorithm Based on Joint Entropy Vector against Application-Layer DDoS Attack

Directory of Open Access Journals (Sweden)

Yuntao Zhao

2018-01-01

Full Text Available The application-layer distributed denial of service (AL-DDoS) attack poses a great threat to cyberspace security. Attack detection is an important part of security protection, providing effective support for the defense system through rapid and accurate identification of attacks. Based on the URLs the attacker requests from the Web service, AL-DDoS attacks are divided into three categories: random-URL, fixed-URL, and traversal attacks. In order to identify attacks, a mapping matrix of the joint entropy vector is constructed. By defining and computing the values of EUPI and jEIPU, a visual coordinate discrimination diagram of the entropy vector is proposed, which also reduces the data dimension from N to two. Based on boundary discrimination and the region in which the entropy vectors fall, the class of AL-DDoS attack can be distinguished. Through the study of the training data set and classification, the results show that the novel algorithm can effectively distinguish web server DDoS attacks from normal burst traffic.
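The entropy-vector idea can be illustrated with plain Shannon entropy over a window of requested URLs: a random-URL attack drives the entropy up, while a fixed-URL attack drives it toward zero. This is a generic sketch, not the paper's EUPI/jEIPU definitions, and the function name is illustrative.

```python
import math
from collections import Counter

def url_entropy(requests):
    # Shannon entropy (bits) of the URL distribution in one observation window.
    counts = Counter(requests)
    n = len(requests)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A window of identical URLs yields 0 bits (fixed-URL pattern), while a window of uniformly distributed URLs yields log2 of the number of distinct URLs (random-URL pattern).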

5. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

Science.gov (United States)

Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

2016-01-01

With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition, and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
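The Kernel-Adatron referred to above has a particularly simple update rule: each multiplier is nudged toward a unit margin and clipped at zero. A minimal hard-margin RBF sketch follows (illustrative names; this is the classic Kernel-Adatron, not the authors' evolutionary hybrid):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel between two vectors.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_adatron(X, y, gamma=1.0, eta=0.1, epochs=200):
    # Kernel-Adatron: alpha_i += eta * (1 - margin_i), clipped at zero.
    n = len(y)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * np.sum(alpha * y * K[i])
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    return alpha

def predict(X, y, alpha, x, gamma=1.0):
    # Sign of the kernel expansion at a query point x.
    return np.sign(sum(alpha[j] * y[j] * rbf(X[j], x, gamma) for j in range(len(y))))
```

Because the update is a simple per-sample rule with no quadratic-programming solver, it is an attractive target for the evolutionary tuning the paper describes.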

6. Aerodynamic Optimization of a Supersonic Bending Body Projectile by a Vector-Evaluated Genetic Algorithm

Science.gov (United States)

2016-12-01

ARL-CR-0810 ● DEC 2016 ● US Army Research Laboratory. Aerodynamic Optimization of a Supersonic Bending Body Projectile by a Vector-Evaluated Genetic Algorithm. For the offspring populations, the Student's t-distribution is used as the convergence method; Equations 10-12 give the mean, variance, and standard deviation.

7. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

Science.gov (United States)

Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

2014-01-01

Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
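The finding about neighboring voxels can be illustrated with a toy feature extraction function: each voxel is described by its own intensity plus a local mean over a small window, per modality. A minimal 2D sketch with assumed function names (the paper's actual feature functions operate on 3D multimodal MRI):

```python
import numpy as np

def local_mean(img, radius=1):
    # Mean over a (2r+1) x (2r+1) window, edge-padded so shape is preserved.
    p = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    w = 2 * radius + 1
    for dy in range(w):
        for dx in range(w):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (w * w)

def feature_vectors(channels, radius=1):
    # Per-voxel features: raw intensity and smoothed intensity per modality,
    # flattened to an (n_voxels, 2 * n_modalities) design matrix.
    feats = []
    for ch in channels:
        feats.append(ch.astype(float))
        feats.append(local_mean(ch, radius))
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(channels))
```

Feeding such neighborhood-augmented rows to even a simple classifier like logistic regression is exactly the design the study found most effective.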

8. Synchronized Scheme of Continuous Space-Vector PWM with the Real-Time Control Algorithms

DEFF Research Database (Denmark)

Oleschuk, V.; Blaabjerg, Frede

2004-01-01

This paper describes in detail the basic peculiarities of a new method of feedforward synchronous pulsewidth modulation (PWM) of three-phase voltage source inverters for adjustable speed ac drives. It is applied to a continuous scheme of voltage space vector modulation. The method is based...... their position inside clock-intervals. In order to provide smooth shock-less pulse-ratio changing and quarter-wave symmetry of the voltage waveforms, special synchronising signals are formed on the boundaries of the 60 clock-intervals. The process of gradual transition from continuous to discontinuous...

9. Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait.

Science.gov (United States)

Martins, Maria; Costa, Lino; Frizera, Anselmo; Ceres, Ramón; Santos, Cristina

2014-03-01

Walker devices are often prescribed incorrectly to patients, leading to increased dissatisfaction and the occurrence of several problems, such as discomfort and pain. Thus, it is necessary to objectively evaluate the effects that assisted gait can have on the gait patterns of walker users, compared to non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters is used for this purpose. However, gait analysis yields redundant information that is often difficult to interpret. This study addresses the problem of selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. For that purpose, an efficient approach is presented that combines evolutionary techniques, based on genetic algorithms, and support vector machine algorithms to discriminate differences between assisted and non-assisted gait with a walker with forearm supports. For comparison purposes, other classification algorithms are verified. Results with healthy subjects show that the main differences are characterized by balance and joint excursion in the sagittal plane. These results, confirmed by clinical evidence, allow concluding that this technique is an efficient feature selection approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

10. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

International Nuclear Information System (INIS)

Bouzid, M.; Benkherouf, H.; Benzadi, K.

2011-01-01

In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. We then applied the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields significant improvement in LSF encoding performance by ensuring reliable transmission over noisy channels.
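The split vector quantizer at the core of such schemes is trained with an LBG/k-means style codebook update: assign each training vector to its nearest codeword, then move each codeword to its cluster centroid. The following is a generic sketch of that building block with illustrative names, not the LSF-SSCOVQ-RC system itself:

```python
def nearest(v, codebook):
    # Index of the nearest codeword under squared Euclidean distance.
    return min(range(len(codebook)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))

def train_codebook(data, k, iters=20):
    # LBG-style (k-means) codebook training; data is a list of tuples.
    codebook = [data[(i * len(data)) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in data:
            clusters[nearest(v, codebook)].append(v)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old codeword if its cell went empty
                codebook[j] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return codebook
```

A channel-optimized variant, as in the paper, additionally weights the distortion by the channel's index-transition probabilities so that bit errors map to nearby codewords.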

11. A new model of flavonoids affinity towards P-glycoprotein: genetic algorithm-support vector machine with features selected by a modified particle swarm optimization algorithm.

Science.gov (United States)

Cui, Ying; Chen, Qinggang; Li, Yaxiao; Tang, Ling

2017-02-01

Flavonoids exhibit a high affinity for the purified cytosolic NBD (C-terminal nucleotide-binding domain) of P-glycoprotein (P-gp). To explore the affinity of flavonoids for P-gp, quantitative structure-activity relationship (QSAR) models were developed using support vector machines (SVMs). A novel method coupling a modified particle swarm optimization algorithm with random mutation strategy and a genetic algorithm coupled with SVM was proposed to simultaneously optimize the kernel parameters of the SVM and determine the subset of optimized features for the first time. Using DRAGON descriptors to represent the compounds, three subsets (training, prediction and external validation sets) derived from the dataset were employed to investigate the QSAR. After excluding the outlier, the correlation coefficient (R2) of the whole training set (training and prediction) was 0.924, and the R2 of the external validation set was 0.941. The root-mean-square error (RMSE) of the whole training set was 0.0588; the RMSE of the cross-validation of the external validation set was 0.0443. The mean Q2 value of leave-many-out cross-validation was 0.824. Together with the results of the randomization analysis and the applicability domain, these results indicate that the proposed model has good predictive ability and stability.

12. Vector Control Algorithm for Electric Vehicle AC Induction Motor Based on Improved Variable Gain PID Controller

Directory of Open Access Journals (Sweden)

Gang Qin

2015-01-01

Full Text Available The acceleration performance of EVs, which affects many EV characteristics such as start-up, overtaking, driving safety, and ride comfort, has attracted increasing attention in recent research. An improved variable-gain PID control algorithm to improve the acceleration performance is proposed in this paper. The results of simulation with Matlab/Simulink demonstrate the effectiveness of the proposed algorithm through the control performance of motor velocity, motor torque, and three-phase motor current. Moreover, the validity of the proposed controller is verified by comparison with other PID controllers. Furthermore, an AC induction motor experimental setup is constructed to verify the effect of the proposed controller.

13. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm.

Science.gov (United States)

Sakumura, Yuichi; Koyama, Yutaro; Tokutake, Hiroaki; Hida, Toyoaki; Sato, Kazuo; Itoh, Toshio; Akamatsu, Takafumi; Shin, Woosuck

2017-02-04

Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH₃CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer.

14. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm

Directory of Open Access Journals (Sweden)

Yuichi Sakumura

2017-02-01

Full Text Available Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH3CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer.

15. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

Directory of Open Access Journals (Sweden)

Mustafa Serter Uzer

2013-01-01

Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is developed for the diagnostics of liver diseases and diabetes, which are commonly observed and reduce the quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other results attained and seems very promising for pattern recognition applications.

16. A path algorithm for the support vector domain description and its application to medical imaging

DEFF Research Database (Denmark)

Sjöstrand, Karl; Hansen, Michael Sass; Larsson, Henrik B. W.

2007-01-01

the shape of the boundary and the proportion of observations that are regarded as outliers. Picking an appropriate amount of regularization is crucial in most applications but is, for computational reasons, commonly limited to a small collection of parameter values. This paper presents an algorithm where...... selection, but may also provide new information about a data set. We illustrate this potential of the method in two applications; one where we establish a sensible ordering among a set of corpora callosa outlines, and one where ischemic segments of the myocardium are detected in patients with acute...

17. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

Science.gov (United States)

Samba, A. S.

1985-01-01

The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the Customized Reduction of Augmented Triangles (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
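For reference, the conventional serial recurrence that point cyclic reduction reorganizes is the Thomas algorithm for a tridiagonal system; cyclic reduction instead eliminates odd-indexed unknowns level by level so the remaining recurrences vectorize. A generic sketch of the serial baseline (not the VCR code itself; names are illustrative):

```python
def thomas_solve(a, b, c, d):
    # Thomas algorithm for a tridiagonal system:
    # a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal
    # (c[-1] unused), d = right-hand side. Returns the solution x.
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The data dependence between consecutive i values is exactly what prevents vectorization here and what motivates cyclic-reduction variants on pipelined machines like the VPS 32.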

18. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

Science.gov (United States)

Dai, Wensheng

2014-01-01

Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

19. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

Science.gov (United States)

Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

2014-01-01

Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

20. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

Directory of Open Access Journals (Sweden)

Wensheng Dai

2014-01-01

Full Text Available Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

1. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

Science.gov (United States)

2018-06-01

Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.

2. The Key Role of the Vector Optimization Algorithm and Robust Design Approach for the Design of Polygeneration Systems

Directory of Open Access Journals (Sweden)

Alfredo Gimelli

2018-04-01

Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenges has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility to achieve a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper is focused on a methodology based on vector optimization algorithms and developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. This paper provides, after a detailed description of the proposed methodology, some specific applications to the study of combined heat and power (CHP) and organic Rankine cycle (ORC) plants, thus highlighting the potential of the proposed techniques and the main results achieved.

3. Algorithms

polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
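Euclid's algorithm mentioned above is short enough to state directly; here is a minimal Python sketch (illustrative, not from the record):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the gcd.
    while b:
        a, b = b, a % b
    return a
```

For example, gcd(252, 105) proceeds 252 → 105 → 42 → 21 → 0, returning 21.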

4. Forecasting of Power Grid Investment in China Based on Support Vector Machine Optimized by Differential Evolution Algorithm and Grey Wolf Optimization Algorithm

Directory of Open Access Journals (Sweden)

Shuyu Dai

2018-04-01

Full Text Available In recent years, the construction of China’s power grid has experienced rapid development, and its scale has become the largest in the world. Accurate and effective prediction of power grid investment can not only help pool funds and rationally arrange investment in power grid construction, but also reduce capital costs and economic risks, which plays a crucial role in promoting the power grid investment planning and construction process. In order to forecast the power grid investment of China accurately, firstly, on the basis of analyzing the influencing factors of power grid investment, the influencing factor system for China’s power grid investment forecasting is constructed in this article. The method of grey relational analysis is used for screening the main influencing factors as the prediction model input. Then, a novel power grid investment prediction model based on DE-GWO-SVM (support vector machine optimized by differential evolution and grey wolf optimization algorithms) is proposed. Next, two cases are taken for empirical analysis to prove that the DE-GWO-SVM model has strong generalization capacity and has achieved a good prediction effect for power grid investment forecasting in China. Finally, the DE-GWO-SVM model is adopted to forecast power grid investment in China from 2018 to 2022.
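The differential evolution component of such hybrids can be sketched in its standard rand/1/bin form. In the paper it would minimize SVM cross-validation error over hyperparameters; here it is shown minimizing a simple test function, and all names and parameter values are illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=150, seed=2):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, binomial crossover, greedy selection.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)          # force at least one mutated gene
            trial = []
            for d in range(dim):
                if d == jr or rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])
                else:
                    v = pop[i][d]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                 # greedy replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

In a DE-GWO-SVM setup, f would wrap SVM training and return validation error for a candidate (C, gamma) pair.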

5. The combination of a histogram-based clustering algorithm and support vector machine for the diagnosis of osteoporosis

International Nuclear Information System (INIS)

Heo, Min Suk; Kavitha, Muthu Subash; Asano, Akira; Taguchi, Akira

2013-01-01

To prevent low bone mineral density (BMD), that is, osteoporosis, in postmenopausal women, it is essential to diagnose osteoporosis more precisely. This study presents an automatic approach utilizing a histogram-based automatic clustering (HAC) algorithm with a support vector machine (SVM) to analyse dental panoramic radiographs (DPRs) and thus improve diagnostic accuracy by identifying postmenopausal women with low BMD or osteoporosis. We integrated the newly proposed HAC algorithm with our previously designed computer-aided diagnosis system. The extracted moment-based features (mean, variance, skewness, and kurtosis) of the mandibular cortical width were employed for the radial basis function (RBF) SVM classifier. We also compared the diagnostic efficacy of the SVM model with the back propagation (BP) neural network model. In this study, DPRs and BMD measurements of 100 postmenopausal women patients (aged >50 years), with no previous record of osteoporosis, were randomly selected for inclusion. The accuracy, sensitivity, and specificity of the BMD measurements using our HAC-SVM model to identify women with low BMD were 93.0% (88.0%-98.0%), 95.8% (91.9%-99.7%) and 86.6% (79.9%-93.3%), respectively, at the lumbar spine; and 89.0% (82.9%-95.1%), 96.0% (92.2%-99.8%) and 84.0% (76.8%-91.2%), respectively, at the femoral neck. Our experimental results suggest that the proposed HAC-SVM model combination applied to DPRs could be useful to assist dentists in early diagnosis and help to reduce the morbidity and mortality associated with low BMD and osteoporosis.
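The four moment-based features of the cortical-width profile can be computed directly; a minimal sketch using population moments (the function name is illustrative, and the paper does not specify its exact normalization conventions):

```python
import math

def moment_features(values):
    # Mean, variance, skewness, and kurtosis of a 1-D width profile
    # (population moments; kurtosis is non-excess, i.e. 3 for a Gaussian).
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3) if sd else 0.0
    kurt = sum((v - mean) ** 4 for v in values) / (n * var ** 2) if var else 0.0
    return mean, var, skew, kurt
```

These four numbers per radiograph would then form the feature vector fed to the RBF SVM classifier.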

6. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

Directory of Open Access Journals (Sweden)

Shuyu Dai

2018-01-01

7. Inverse Modeling of Soil Hydraulic Parameters Based on a Hybrid of Vector-Evaluated Genetic Algorithm and Particle Swarm Optimization

Directory of Open Access Journals (Sweden)

Yi-Bo Li

2018-01-01

Full Text Available The accurate estimation of the soil hydraulic parameters (θs, α, n, and Ks) of the van Genuchten–Mualem model has attracted considerable attention. In this study, we proposed a new two-step inversion method, which first estimates the hydraulic parameter θs using an objective function based on the final water content, and subsequently estimates the soil hydraulic parameters α, n, and Ks using a vector-evaluated genetic algorithm and particle swarm optimization (VEGA-PSO) method based on objective functions of cumulative infiltration and infiltration rate. The parameters were inversely estimated for four types of soils (sand, loam, silt, and clay) under an in silico experiment simulating tension disc infiltration at three initial water content levels. The results indicated that the method is excellent and robust. Because the objective function has multiple local minima in a tiny range near the true values, inverse estimation of the hydraulic parameters is difficult; however, the estimated soil water retention curves and hydraulic conductivity curves were nearly identical to the true curves. In addition, the proposed method was able to estimate the hydraulic parameters accurately despite substantial measurement errors in initial water content, final water content, and cumulative infiltration, proving that the method is feasible and practical for field application.
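The particle swarm component of VEGA-PSO can be sketched in its standard global-best form; the vector-evaluated extension (one swarm per objective, exchanging best positions) is omitted here. A minimal, generic PSO over box bounds with illustrative names and settings:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Global-best PSO: velocities blend inertia, a pull toward each
    # particle's personal best, and a pull toward the swarm's global best.
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    pbest = [f(x) for x in X]             # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    gx, gv = P[g][:], pbest[g]            # global best position/value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (gx[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gv:
                    gv, gx = fx, X[i][:]
    return gx, gv
```

In the inversion setting, f would be the misfit between simulated and observed cumulative infiltration for a candidate (α, n, Ks) triple.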

8. Determination of foodborne pathogenic bacteria by multiplex PCR-microchip capillary electrophoresis with genetic algorithm-support vector regression optimization.

Science.gov (United States)

Li, Yongxin; Li, Yuanqian; Zheng, Bo; Qu, Lingli; Li, Can

2009-06-08

A rapid and sensitive method based on microchip capillary electrophoresis with condition optimization by genetic algorithm-support vector regression (GA-SVR) was developed and applied to the simultaneous analysis of the multiplex PCR products of four foodborne pathogenic bacteria. Four pairs of oligonucleotide primers were designed to exclusively amplify the targeted genes of Vibrio parahemolyticus, Salmonella, Escherichia coli (E. coli) O157:H7, and Shigella, and the quadruplex PCR parameters were optimized. At the same time, GA-SVR was employed to optimize the separation conditions of the DNA fragments in microchip capillary electrophoresis. The proposed method was applied to simultaneously detect the multiplex PCR products of the four foodborne pathogenic bacteria under the optimal conditions within 8 min. The levels of detection were as low as 1.2 x 10(2) CFU mL(-1) of Vibrio parahemolyticus, 2.9 x 10(2) CFU mL(-1) of Salmonella, 8.7 x 10(1) CFU mL(-1) of E. coli O157:H7 and 5.2 x 10(1) CFU mL(-1) of Shigella, respectively. The relative standard deviation of migration time was in the range of 0.74-2.09%. The results demonstrated that good resolution and shorter analysis time were achieved thanks to the application of the multivariate strategy. This study offers an efficient alternative to routine foodborne pathogenic bacteria detection in a fast, reliable, and sensitive way.

9. A Structurally Simplified Hybrid Model of Genetic Algorithm and Support Vector Machine for Prediction of Chlorophyll a in Reservoirs

Directory of Open Access Journals (Sweden)

Jieqiong Su

2015-04-01

Full Text Available With decreasing water availability as a result of climate change and human activities, analysis of the influential factors and variation trends of chlorophyll a has become important to prevent reservoir eutrophication and ensure water supply safety. In this paper, a structurally simplified hybrid model of the genetic algorithm (GA) and the support vector machine (SVM) was developed for the prediction of the monthly concentration of chlorophyll a in the Miyun Reservoir of northern China over the period from 2000 to 2010. Based on the influence factor analysis, the four most relevant influence factors of chlorophyll a (i.e., total phosphorus, total nitrogen, permanganate index, and reservoir storage) were extracted using the method of feature selection with the GA, which simplified the model structure, making it more practical and efficient for environmental management. The results showed that the developed simplified GA-SVM model could solve nonlinear problems of complex systems, and was suitable for the simulation and prediction of chlorophyll a in the Miyun Reservoir, with better performance in accuracy and efficiency.

10. Time Series Analysis and Forecasting for Wind Speeds Using Support Vector Regression Coupled with Artificial Intelligent Algorithms

Directory of Open Access Journals (Sweden)

Ping Jiang

2015-01-01

Full Text Available Wind speed/power has received increasing attention around the world due to its renewable nature as well as environmental friendliness. With the global installed wind power capacity rapidly increasing, the wind industry is growing into a large-scale business. Reliable short-term wind speed forecasts play a practical and crucial role in wind energy conversion systems, such as the dynamic control of wind turbines and power system scheduling. In this paper, an intelligent hybrid model for short-term wind speed prediction is examined; the model is based on cross correlation (CC) analysis and a support vector regression (SVR) model coupled with brainstorm optimization (BSO) and cuckoo search (CS) algorithms, which are successfully utilized for parameter determination. The proposed hybrid models were used to forecast short-term wind speeds collected from four wind turbines located on a wind farm in China. The forecasting results demonstrate that the intelligent hybrid models outperform single models for short-term wind speed forecasting, which mainly results from the superiority of BSO and CS for parameter optimization.

11. Comparison Algorithm Kernels on Support Vector Machine (SVM To Compare The Trend Curves with Curves Online Forex Trading

Directory of Open Access Journals (Sweden)

irfan abbas

2017-01-01

12. Algorithms

to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

13. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

Directory of Open Access Journals (Sweden)

2015-09-01

Full Text Available This paper considered a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs), and all four dc sources are deliberately kept isolated. Therefore, zero-sequence/homopolar current components cannot flow. An original and effective power sharing algorithm with three variables (degrees of freedom) based on synchronous field oriented control (FOC) is proposed in this paper. A standard three-level space vector pulse width modulation (SVPWM) by the nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors under different designed conditions. A set of results is provided in this paper, confirming good agreement with the theoretical development.

14. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithms and their comparative study

Science.gov (United States)

Shastri, Niket; Pathak, Kamlesh

2018-05-01

The water vapor content of the atmosphere plays a very important role in climate. This paper discusses the application of GPS signals in meteorology, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine, and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
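Of the three compared algorithms, multiple linear regression is the simplest baseline. A minimal sketch with synthetic, hypothetical predictors (the GPS-derived features and coefficients are invented, not from the paper), solved by ordinary least squares and scored with the RMSE and MAE metrics mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictors (e.g. zenith wet delay, surface temperature, pressure)
# and a synthetic precipitable-water-vapor target with a known linear relation.
X = rng.normal(size=(100, 3))
true_w = np.array([6.5, 0.8, -0.3])
y = X @ true_w + 1.2 + 0.2 * rng.standard_normal(100)

# Multiple linear regression: append an intercept column, solve by least squares.
A = np.hstack([X, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))   # root mean square error
mae = np.mean(np.abs(pred - y))            # mean absolute error
```

With noise of standard deviation 0.2, both error metrics settle near 0.2, and the fitted coefficients recover `true_w` closely.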

15. Lagrangian analysis of vector and tensor fields: Algorithmic foundations and applications in medical imaging and computational fluid dynamics

OpenAIRE

Ding, Zi'ang

2016-01-01

Both vector and tensor fields are important mathematical tools used to describe the physics of many phenomena in science and engineering. Effective vector and tensor field visualization techniques are therefore needed to interpret and analyze the corresponding data and achieve new insight into the considered problem. This dissertation is concerned with the extraction of important structural properties from vector and tensor datasets. Specifically, we present a unified approach for the charact...

16. Kochen-Specker vectors

International Nuclear Information System (INIS)

Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

2005-01-01

We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space, H^n, n≥3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found.
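The 0/1-assignment condition described above can be checked by brute force for small block systems. The blocks below are invented toy examples for illustration, not actual KS vector systems from the paper:

```python
from itertools import product

def admits_ks_assignment(blocks, n_vectors):
    """Search for a {0,1} assignment to vector labels such that every block
    (a complete set of mutually orthogonal vectors) contains exactly one 1,
    i.e. no two orthogonal vectors are both 1 and no block is all 0."""
    for bits in product((0, 1), repeat=n_vectors):
        if all(sum(bits[v] for v in blk) == 1 for blk in blocks):
            return True
    return False

# A colorable toy system: three triads sharing vectors in a ring.
colorable = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]

# An uncolorable toy system: three tetrads with each vector in exactly two
# blocks, so the block sums total 3 (odd) while every assignment contributes
# an even total (each vector's value is counted twice) -- a contradiction.
uncolorable = [(0, 1, 2, 3), (0, 1, 4, 5), (2, 3, 4, 5)]
```

Real KS proofs replace the brute-force search with the algebraic-state and interval-analysis machinery the abstract describes, since the number of assignments grows as 2^n.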

17. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

International Nuclear Information System (INIS)

Majumdar, A.; Makowitz, H.

1987-10-01

With the development of modern vector/parallel supercomputers and their lower-performance clones, it has become possible to increase computational performance by several orders of magnitude compared with the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms on these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation that is optimized for vector/parallel architectures, rather than the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of the iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit (i.e., marching) methods where stability will permit. We call this approach the ''EPIC'' methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
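The Hopscotch idea for the 1-D heat equation, alternating explicit and implicit updates on odd/even grid points so that the implicit half-sweep becomes explicit once its neighbours are already updated, can be sketched as follows. This is a generic odd-even hopscotch scheme, not one of the report's specific variants:

```python
import numpy as np

def hopscotch_heat(u, r, steps):
    """Odd-even hopscotch for u_t = u_xx with fixed (Dirichlet) end values.
    r = dt/dx**2. The explicit sweep updates one parity with old neighbours;
    the 'implicit' sweep on the other parity is explicit in practice because
    its neighbours already carry new values."""
    u = u.copy()
    n = len(u)
    idx = np.arange(1, n - 1)
    for step in range(steps):
        for parity in (step % 2, (step + 1) % 2):
            pts = idx[idx % 2 == parity]
            if parity == step % 2:   # explicit half-sweep
                u[pts] = u[pts] + r * (u[pts - 1] - 2 * u[pts] + u[pts + 1])
            else:                    # implicit half-sweep, solved pointwise
                u[pts] = (u[pts] + r * (u[pts - 1] + u[pts + 1])) / (1 + 2 * r)
    return u

# Decay of a sine mode: exact solution is exp(-pi**2 t) * sin(pi x).
x = np.linspace(0.0, 1.0, 21)
u0 = np.sin(np.pi * x)
u = hopscotch_heat(u0, r=0.5, steps=80)   # t = steps * r * dx**2 = 0.1
```

Same-parity points never neighbour each other, so each half-sweep can be updated simultaneously, which is what makes the scheme attractive on vector hardware.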

18. Vector Network Coding

OpenAIRE

2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

19. Algorithms

ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

20. Hybrid genetic algorithm tuned support vector machine regression for wave transmission prediction of horizontally interlaced multilayer moored floating pipe breakwater

Digital Repository Service at National Institute of Oceanography (India)

Patil, S.G.; Mandal, S.; Hegde, A.V.; Muruganandam, A.

Support Vector Machine (SVM) works on structural risk minimization principle that has greater generalization ability and is superior to the empirical risk minimization principle as adopted in conventional neural network models. However...

1. Raster images vectorization system

OpenAIRE

Genytė, Jurgita

2006-01-01

The problem of raster image vectorization was analyzed and researched in this work. Existing vectorization systems are quite expensive, their results are inaccurate, and the manual vectorization of a large number of drafts is impossible. That's why our goal was to design and develop a new raster image vectorization system using our suggested automatic vectorization algorithm and a way to record results in a new universal vector file format. The work consists of these main parts: analysis...

2. A two-stage algorithm for Clostridium difficile including PCR: can we replace the toxin EIA?

Science.gov (United States)

Orendi, J M; Monnery, D J; Manzoor, S; Hawkey, P M

2012-01-01

A two-step, three-test algorithm for Clostridium difficile infection (CDI) was reviewed. Stool samples were tested by enzyme immunoassays for the C. difficile common antigen glutamate dehydrogenase (G) and toxin A/B (T). Samples with discordant results were tested by polymerase chain reaction detecting the toxin B gene (P). The algorithm quickly identified patients with detectable toxin A/B, whereas a large group of patients excreting toxigenic C. difficile but with toxin A/B production below the detection level (G(+)T(-)P(+)) was identified separately. The average white blood cell count in patients with a G(+)T(+) result was higher than in those with a G(+)T(-)P(+) result. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
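The two-step, three-test logic reads directly as a decision function. The sketch below paraphrases the abstract; the result labels are invented for illustration and are not an official reporting scheme:

```python
def cdi_algorithm(gdh, toxin, pcr=None):
    """Two-step, three-test C. difficile algorithm:
    concordant EIA results (G/T) are reported directly; discordant results
    reflex to toxin-B-gene PCR (P) for arbitration."""
    if gdh and toxin:
        return "G+T+: CDI, toxin detected"
    if not gdh and not toxin:
        return "G-T-: negative"
    if pcr is None:
        return "discordant: reflex to toxin-B PCR"
    if gdh and not toxin and pcr:
        return "G+T-P+: toxigenic C. difficile, toxin below detection level"
    return "discordant resolved: toxigenic strain not confirmed"
```

The G(+)T(-)P(+) branch is the group the abstract singles out: patients excreting toxigenic C. difficile whose toxin production falls below the EIA detection level.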

3. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

Science.gov (United States)

Sargent, Jeff Scott

1988-01-01

A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than that of a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place-and-route program for the Intel iPSC/2 Hypercube is currently being developed.
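The serial simulated-annealing loop underlying such placers (without the paper's parallel cell-coloring and sequence control) can be sketched on a toy one-dimensional placement. The ring netlist and cooling constants below are invented for illustration:

```python
import math
import random

random.seed(42)

# Toy 1-D placement: cells occupy slots, 2-pin nets connect cell pairs;
# cost = total wirelength = sum of |slot(a) - slot(b)| over all nets.
n_cells = 12
nets = [(i, (i + 1) % n_cells) for i in range(n_cells)]   # ring of 2-pin nets
placement = list(range(n_cells))
random.shuffle(placement)
slot = {c: s for s, c in enumerate(placement)}

def cost():
    return sum(abs(slot[a] - slot[b]) for a, b in nets)

T = 10.0
while T > 0.01:
    for _ in range(200):
        a, b = random.sample(range(n_cells), 2)
        old = cost()
        slot[a], slot[b] = slot[b], slot[a]       # propose a swap move
        delta = cost() - old
        # Metropolis acceptance: keep improving moves, accept worsening
        # moves with probability exp(-delta/T)
        if delta > 0 and random.random() >= math.exp(-delta / T):
            slot[a], slot[b] = slot[b], slot[a]   # reject: undo the swap
    T *= 0.9                                      # geometric cooling schedule
```

For a 12-cell ring the optimal linear arrangement has wirelength 2(n-1) = 22; the cooled loop settles at or near this value. The paper's contribution is letting many such moves proceed in parallel while bounding the resulting cost error.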

4. Algorithms

algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

5. Algorithms

algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

6. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

DEFF Research Database (Denmark)

Padmanaban, Sanjeevikumar; Grandi, Gabriele; Blaabjerg, Frede

2015-01-01

This paper considered a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs) and all four dc sources are deliberately kept isolated......) by the nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors under different designed conditions. Set...

7. Control algorithm for the inverter fed induction motor drive with DC current feedback loop based on principles of the vector control

Energy Technology Data Exchange (ETDEWEB)

Vuckovic, V.; Vukosavic, S. (Electrical Engineering Inst. Nikola Tesla, Viktora Igoa 3, Belgrade, 11000 (Yugoslavia))

1992-01-01

This paper presents a control algorithm for VSI-fed induction motor drives based on the converter DC link current feedback. It is shown that the speed and flux can be controlled quite satisfactorily over a wide speed and load range for simpler drives. The base commands of both the inverter voltage and frequency are proportional to the reference speed, but each of them is further modified by signals derived from the DC current sensor. The algorithm is based on equations well known from vector control theory, and aims to obtain a constant rotor flux and proportionality between the electrical torque, the slip frequency and the active component of the stator current. In this way, the problems of slip compensation, Ri compensation and correction of the U/f characteristic are solved at the same time. Analytical considerations and computer simulations of the proposed control structure are in close agreement with the experimental results measured on a prototype drive.

8. Algorithms

will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

9. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

Directory of Open Access Journals (Sweden)

Shuyu Dai

2018-04-01

Full Text Available For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and economy. Currently, China has become the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance, and can provide a scientific basis for China to formulate a reasonable energy production plan and energy-saving and emissions-reduction-related policies to boost sustainable development. For forecasting the energy consumption in China accurately, considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm, is proposed in this article. The prediction accuracy of energy consumption is influenced by various factors. In this article, first considering population, GDP (Gross Domestic Product, industrial structure (the proportion of the second industry added value, energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports and other influencing factors of energy consumption, the main driving factors of energy consumption are screened as the model input according to the sorting of grey relational degrees to realize feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an

10. A NEW IMAGE RETRIEVAL ALGORITHM BASED ON VECTOR QUANTIFICATION

Institute of Scientific and Technical Information of China (English)

冀鑫; 冀小平

2016-01-01

To address the shortcomings of current colour-based image retrieval algorithms in colour feature extraction, a new colour feature extraction algorithm is proposed. The algorithm uses the LBG algorithm to vector-quantize the colour information in HSI space, and then counts the occurrence frequency of each codeword in the image to form a colour histogram, so that the distortion of the original image features during colour feature extraction is reduced as far as possible. Meanwhile, by setting a threshold value and comparing recall and precision rates over repeated experiments, a satisfactory threshold was found, making the retrieval algorithm more complete. Experimental results show that the new algorithm can effectively improve the accuracy of image retrieval.
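The LBG vector-quantization step and the codeword histogram can be sketched with synthetic data standing in for HSI pixels. The splitting constant, cluster values, and codebook size below are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def lbg_codebook(data, n_codewords, iters=20, eps=1e-3):
    """LBG: start from the global centroid and repeatedly split each
    codeword into a (1+eps)/(1-eps) pair, refining with k-means-style
    nearest-neighbour partition and centroid updates."""
    book = data.mean(axis=0, keepdims=True)
    while len(book) < n_codewords:
        book = np.vstack([book * (1 + eps), book * (1 - eps)])
        for _ in range(iters):
            d = ((data[:, None, :] - book[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(1)
            for k in range(len(book)):
                members = data[nearest == k]
                if len(members):
                    book[k] = members.mean(0)
    return book

# Hypothetical "HSI pixels": two colour clusters in [0, 1]^3.
pixels = np.vstack([rng.normal(0.2, 0.03, (300, 3)),
                    rng.normal(0.8, 0.03, (300, 3))])
book = lbg_codebook(pixels, 4)

# Colour histogram = relative frequency of each codeword over the pixels.
d = ((pixels[:, None, :] - book[None, :, :]) ** 2).sum(-1)
hist = np.bincount(d.argmin(1), minlength=len(book)) / len(pixels)
```

The normalized `hist` is the retrieval feature: two images are compared by the distance between their codeword histograms.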

11. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

KAUST Repository

Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.

2017-01-01

A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

12. BDDC Algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

KAUST Repository

Oh, Duk-Soon

2017-06-13

A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.

13. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

International Nuclear Information System (INIS)

Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

2011-01-01

The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Based on small-sample statistical learning theory, the SVM avoids issues that appear in artificial neural network methods, such as the difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to realize effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.
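The PSO parameter-selection step can be sketched in isolation. Here a smooth bowl stands in for the LS-SVM cross-validation error over two hyperparameters, and the plain PSO update (not the paper's self-adaptive variant) is used with conventional coefficient values:

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(p):
    """Stand-in for an LS-SVM validation error over two hyperparameters
    (e.g. log regularization, log kernel width); minimum at (1.0, -0.5)."""
    return (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2

n, dim = 20, 2
pos = rng.uniform(-3, 3, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, dim))
    # velocity pulls toward each particle's best and the swarm's best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

A self-adaptive variant would additionally adjust `w`, `c1`, `c2` during the run; the swarm structure is otherwise the same.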

14. Application of a support vector machine algorithm to the safety precaution technique of medium-low pressure gas regulators

Science.gov (United States)

Hao, Xuejun; An, Xaioran; Wu, Bo; He, Shaoping

2018-02-01

In a gas pipeline system, the safe operation of gas regulators determines the stability of the fuel gas supply, and the safety precaution system for medium-low pressure gas regulators in the Beijing Gas Group is not yet perfect at the present stage; therefore, optimizing the safety precaution technique has important social and economic significance. In this paper, according to the running status of medium-low pressure gas regulators in the SCADA system, a new method for gas regulator safety precaution based on the support vector machine (SVM) is presented. The method takes the gas regulator outlet pressure data as input variables of the SVM model and the fault categories and degrees as output variables, which will effectively enhance the precaution accuracy as well as save significant manpower and material resources.

15. Support vector machine and mel frequency Cepstral coefficient based algorithm for hand gestures and bidirectional speech to text device

Science.gov (United States)

Balbin, Jessie R.; Padilla, Dionis A.; Fausto, Janette C.; Vergara, Ernesto M.; Garcia, Ramon G.; Delos Angeles, Bethsedea Joy S.; Dizon, Neil John A.; Mardo, Mark Kevin N.

2017-02-01

This research is about translating a series of hand gestures to form a word and producing its equivalent sound as it is read and said with a Filipino accent, using Support Vector Machine and Mel Frequency Cepstral Coefficient analysis. The concept is to detect Filipino speech input and translate the spoken words to their text form in Filipino. This study aims to help the Filipino deaf community impart their thoughts through the use of hand gestures and communicate with people who do not know how to read hand gestures. It also helps literate deaf people simply read the spoken words relayed to them using the Filipino speech-to-text system.

16. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

Science.gov (United States)

Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

2018-06-01

The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which contains a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified using a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
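The LVQ part of the pattern recognizer can be sketched with the classic LVQ1 update rule on two invented "driving pattern" clusters. The feature names, cluster centres, and learning-rate schedule are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1: for each sample, move the nearest prototype toward the sample
    if their labels match, and away from it if they differ."""
    P = prototypes.copy()
    for epoch in range(epochs):
        for i in rng.permutation(len(X)):
            d = ((P - X[i]) ** 2).sum(1)
            k = d.argmin()
            step = lr * (X[i] - P[k])
            P[k] += step if proto_labels[k] == y[i] else -step
        lr *= 0.9   # decay the learning rate each epoch
    return P

# Two hypothetical driving patterns as 2-D feature clusters
# (e.g. mean speed, mean acceleration), one prototype per class.
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (100, 2)),
               rng.normal([3.0, 3.0], 0.3, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
P = lvq1_train(X, y, prototypes=np.array([[0.5, 0.5], [2.5, 2.5]]),
               proto_labels=np.array([0, 1]))

# Classify by nearest prototype; the label of the winner is the pattern.
pred = ((P[None] - X[:, None]) ** 2).sum(-1).argmin(1)
acc = (pred == y).mean()
```

In the multi-mode controller, the predicted pattern index would select which energy management mode is active.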

17. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

Science.gov (United States)

Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

2017-07-01

Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with an optimized least squares support vector machine (LSSVM). First, the prediction sample space is reconstructed by PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, customer demand prediction for an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
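The PSR step is a delay embedding of the scalar demand series: each scalar observation is replaced by a vector of lagged values, enriching the sample space as the abstract describes. A minimal sketch, with the embedding dimension, delay, and demand signal chosen arbitrarily:

```python
import numpy as np

def psr(series, dim, tau):
    """Delay embedding: x_t -> (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}).
    Returns one row per reconstructed phase-space point."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

t = np.arange(200)
demand = np.sin(0.1 * t)            # hypothetical demand signal
emb = psr(demand, dim=3, tau=5)     # 3-D reconstructed sample space
```

The rows of `emb` (paired with the next observation as the target) would then be the training samples fed to the kernel LSSVM.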

18. Analysis and Speed Ripple Mitigation of a Space Vector Pulse Width Modulation-Based Permanent Magnet Synchronous Motor with a Particle Swarm Optimization Algorithm

Directory of Open Access Journals (Sweden)

Xing Liu

2016-11-01

Full Text Available A method is proposed for reducing speed ripple of permanent magnet synchronous motors (PMSMs controlled by space vector pulse width modulation (SVPWM. A flux graph and mathematics are used to analyze the speed ripple characteristics of the PMSM. Analysis indicates that the 6P (P refers to pole pairs of the PMSM time harmonic of rotor mechanical speed is the main harmonic component in the SVPWM control PMSM system. To reduce PMSM speed ripple, harmonics are superposed on a SVPWM reference signal. A particle swarm optimization (PSO algorithm is proposed to determine the optimal phase and multiplier coefficient of the superposed harmonics. The results of a Fourier decomposition and an optimized simulation model verified the accuracy of the analysis as well as the effectiveness of the speed ripple reduction methods, respectively.

19. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

International Nuclear Information System (INIS)

Lenhardt, L; Zeković, I; Dramićanin, T; Dramićanin, M D; Tešić, Ž; Milojković-Opsenica, D

2014-01-01

In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded in wavenumbers between 4000 and 500 cm-1 and at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and calculated principal components were then used for support vector machine (SVM) training. In this way, the pattern-recognition tool is obtained for building a classification model for determining the botanical origin of honey. The PCA was used to analyze results and to see if the separation between groups of different types of honeys exists. Using the SVM, the classification model was built and classification errors were acquired. It has been observed that this technique is adequate for determining the botanical origin of honey with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey. (paper)
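The PCA step, producing principal-component scores from mean-centred spectra that then feed the SVM, can be sketched via the singular value decomposition on synthetic spectra. The two-group intensity offset and dimensions below are invented, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical ATR-IR spectra: 30 samples x 50 wavenumber channels, drawn
# from two "botanical origins" differing by a constant band-intensity offset.
base = rng.normal(size=50)
spectra = np.vstack([base + rng.normal(0, 0.1, (15, 50)),
                     base + 0.8 + rng.normal(0, 0.1, (15, 50))])

# PCA by SVD of the mean-centred data matrix.
centred = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:2].T              # first two principal-component scores
explained = S[:2] ** 2 / (S ** 2).sum()  # variance fraction per component
```

The low-dimensional `scores` matrix (here 30 x 2 instead of 30 x 50) is what would be passed to SVM training, with the group separation visible along the first component.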

20. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

Science.gov (United States)

Lenhardt, L.; Zeković, I.; Dramićanin, T.; Tešić, Ž.; Milojković-Opsenica, D.; Dramićanin, M. D.

2014-09-01

In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded in wavenumbers between 4000 and 500 cm-1 and at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and calculated principal components were then used for support vector machine (SVM) training. In this way, the pattern-recognition tool is obtained for building a classification model for determining the botanical origin of honey. The PCA was used to analyze results and to see if the separation between groups of different types of honeys exists. Using the SVM, the classification model was built and classification errors were acquired. It has been observed that this technique is adequate for determining the botanical origin of honey with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey.

1. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

Directory of Open Access Journals (Sweden)

Xiang-ming Gao

2017-01-01

Full Text Available To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.

2. Short-Term Load Forecasting Based on Wavelet Transform and Least Squares Support Vector Machine Optimized by Fruit Fly Optimization Algorithm

Directory of Open Access Journals (Sweden)

Wei Sun

2015-01-01

Full Text Available Electric power is a non-storable form of energy that concerns national welfare and people's livelihood, so its stability is attracting more and more attention. Because short-term power load is always disturbed by various external factors and is characterized by high volatility and instability, a single model is not suitable for short-term load forecasting due to low accuracy. To solve this problem, this paper proposes a new model for short-term load forecasting based on wavelet transform and the least squares support vector machine (LSSVM), which is optimized by the fruit fly optimization algorithm (FOA). Wavelet transform is used to remove error points and enhance the stability of the data. The fruit fly optimization algorithm is applied to optimize the parameters of the LSSVM, avoiding the randomness and inaccuracy of manual parameter setting. The results of short-term load forecasting demonstrate that the hybrid model can be used in short-term forecasting of the power system.

3. Short-Term Wind Speed Forecasting Using the Data Processing Approach and the Support Vector Machine Model Optimized by the Improved Cuckoo Search Parameter Estimation Algorithm

Directory of Open Access Journals (Sweden)

Chen Wang

2016-01-01

Full Text Available Power systems could be at risk when a power-grid collapse accident occurs. As a clean and renewable resource, wind energy plays an increasingly vital role in reducing air pollution, and wind power generation has become an important way to produce electrical power. Therefore, accurate wind power and wind speed forecasting are needed. In this research, a novel short-term wind speed forecasting portfolio has been proposed using the following three procedures: (I) data preprocessing: apart from the regular normalization preprocessing, the data are preprocessed through empirical mode decomposition (EMD), which reduces the effect of noise on the wind speed data; (II) artificially intelligent parameter optimization introduction: the unknown parameters in the support vector machine (SVM) model are optimized by the cuckoo search (CS) algorithm; (III) parameter optimization approach modification: an improved parameter optimization approach, called the SDCS model, based on the CS algorithm and the steepest descent (SD) method is proposed. The comparison results show that the simple and effective portfolio EMD-SDCS-SVM produces promising predictions and has better performance than the individual forecasting components, with very small root mean squared errors and mean absolute percentage errors.

4. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

Science.gov (United States)

Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

2017-01-01

To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
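The decompose / predict-per-component / reconstruct idea is easy to sketch. Note the hedges: a centred moving average stands in for EMD here (real EMD sifting would come from a package such as PyEMD), the ABC parameter optimization is omitted, and the power series is synthetic.

```python
# Decompose a series into components, train one SVR per component on
# lagged values, and sum the one-step-ahead forecasts.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(300, dtype=float)
power = np.sin(2 * np.pi * t / 96) + 0.01 * t + 0.1 * rng.standard_normal(t.size)

# Stand-in decomposition: trend = centred 25-point moving average,
# detail = residual. ("valid" mode avoids zero-padding edge artifacts.)
kernel = np.ones(25) / 25
trend = np.convolve(power, kernel, mode="valid")   # length 276
series = power[12:288]                             # aligned with the trend
detail = series - trend
components = [detail, trend]

def lagged(s, n_lags=8):
    # Rows of n_lags consecutive values, target = the following value.
    X = np.column_stack([s[i:len(s) - n_lags + i] for i in range(n_lags)])
    return X, s[n_lags:]

# One SVR per component; predict the last point from its preceding window.
prediction = 0.0
for comp in components:
    X, y = lagged(comp)
    model = SVR(kernel="rbf", C=10.0).fit(X[:-1], y[:-1])
    prediction += model.predict(X[-1:])[0]

actual = series[-1]
```

The per-component models are simpler than one model on the raw series, which is the motivation the abstract gives for the EMD step.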

5. Efficient combination of a 3D Quasi-Newton inversion algorithm and a vector dual-primal finite element tearing and interconnecting method

International Nuclear Information System (INIS)

Voznyuk, I; Litman, A; Tortel, H

2015-01-01

A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as the domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database. (paper)

6. Image Coding Based on Address Vector Quantization.

Science.gov (United States)

Feng, Yushu

Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design the codebook. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
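The basic VQ encode/decode cycle summarized above (codebook training, index transmission, table-lookup reconstruction) can be sketched as follows. The image is random synthetic data, and the 4x4 block size and 32-entry codebook are arbitrary illustrative choices, not the thesis's settings.

```python
# Toy vector quantizer: k-means codebook, index encoding, lookup decoding.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Fake 64x64 grayscale "image", split into 4x4 blocks -> 16-dim vectors.
image = rng.random((64, 64))
blocks = image.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)

# Codebook of 32 representative vectors (generalized-Lloyd-style training).
codebook = KMeans(n_clusters=32, n_init=5, random_state=0).fit(blocks)
indices = codebook.predict(blocks)            # what would be transmitted
decoded = codebook.cluster_centers_[indices]  # table lookup at the receiver

# Each 16-pixel block is replaced by one 5-bit index.
reconstructed = decoded.reshape(16, 16, 4, 4).swapaxes(1, 2).reshape(64, 64)
mse = float(np.mean((image - reconstructed) ** 2))
```

Random noise compresses poorly, so the reconstruction error here is only slightly below the image variance; on real imagery with correlated blocks the same scheme does far better, which is the premise of the thesis.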

7. No evidence for the use of DIR, D-D fusions, chromosome 15 open reading frames or VH replacement in the peripheral repertoire was found on application of an improved algorithm, JointML, to 6329 human immunoglobulin H rearrangements

DEFF Research Database (Denmark)

Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R

2006-01-01

gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum...

8. Improved Accuracy of Myocardial Perfusion SPECT for the Detection of Coronary Artery Disease by Utilizing a Support Vector Machines Algorithm

Science.gov (United States)

Arsanjani, Reza; Xu, Yuan; Dey, Damini; Fish, Matthews; Dorbala, Sharmila; Hayes, Sean; Berman, Daniel; Germano, Guido; Slomka, Piotr

2012-01-01

We aimed to improve the diagnostic accuracy of automatic myocardial perfusion SPECT (MPS) interpretation analysis for prediction of coronary artery disease (CAD) by integrating several quantitative perfusion and functional variables for non-corrected (NC) data by support vector machines (SVM), a computer method for machine learning. Methods 957 rest/stress technetium-99m gated MPS NC studies from 623 consecutive patients with correlating invasive coronary angiography and 334 with low likelihood of CAD (LLK < 5%) were assessed. Patients with stenosis ≥ 50% in the left main or ≥ 70% in all other vessels were considered abnormal. Total perfusion deficit (TPD) was computed automatically. In addition, ischemic changes (ISCH) and ejection fraction changes (EFC) between stress and rest were derived by quantitative software. The SVM was trained using a group of 125 patients (25 LLK, 25 0-, 25 1-, 25 2- and 25 3-vessel CAD) using the above quantitative variables and second-order polynomial fitting. The remaining patients (N = 832) were categorized based on probability estimates, with CAD defined as a probability estimate ≥ 0.50. The diagnostic accuracy of SVM was also compared to visual segmental scoring by two experienced readers. Results Sensitivity of SVM (84%) was significantly better than ISCH (75%, p < 0.05) and EFC (31%, p < 0.05). Specificity of SVM (88%) was significantly better than that of TPD (78%, p < 0.05) and EFC (77%, p < 0.05). Diagnostic accuracy of SVM (86%) was significantly better than TPD (81%), ISCH (81%), or EFC (46%) (p < 0.05 for all). The receiver-operating-characteristic area under the curve (ROC-AUC) for SVM (0.92) was significantly better than TPD (0.90), ISCH (0.87), and EFC (0.60) (p < 0.001 for all). Diagnostic accuracy of SVM was comparable to the overall accuracy of both visual readers (85% vs. 84%, p < 0.05). ROC-AUC for SVM (0.92) was significantly better than that of both visual readers (0.87 and 0.88, p < 0.03). Conclusion Computational

9. A median filter approach for correcting errors in a vector field

Science.gov (United States)

Schultz, H.

1985-01-01

Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
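A minimal version of this detect-and-replace scheme can be shown on a synthetic two-component field with injected gross errors. The 3x3 window and the distance threshold below are illustrative choices, not values from the paper.

```python
# Median-filter error detection and replacement in a 2-component vector field.
import numpy as np
from scipy.ndimage import median_filter

# Smooth synthetic "wind field" on a 32x32 grid, shape (2, 32, 32).
yy, xx = np.mgrid[0:32, 0:32]
field = np.stack([np.cos(xx / 10.0), np.sin(yy / 10.0)])

# Corrupt a few cells with gross errors.
bad = [(5, 7), (20, 3), (15, 28)]
corrupted = field.copy()
for r, c in bad:
    corrupted[:, r, c] += 5.0

# 3x3 median of each vector component.
med = np.stack([median_filter(corrupted[k], size=3) for k in range(2)])

# Flag vectors far from their local median and replace them with it.
dist = np.linalg.norm(corrupted - med, axis=0)
mask = dist > 1.0
cleaned = np.where(mask, med, corrupted)
```

Because the median is insensitive to a single outlier in the window, the replacement value at a flagged cell is close to the true local vector, which is exactly why median filters suit this task better than mean filters.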

10. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

Energy Technology Data Exchange (ETDEWEB)

Jaskowiak, J; Ahmad, S; Ali, I [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

2015-06-15

Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-known targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was used. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to the CT images of the stationary phantom. Results: The values of displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge. These shifts were nearly equal to the motion amplitude. Conclusions: The DVF from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased

11. Vectorized Monte Carlo

International Nuclear Information System (INIS)

Brown, F.B.

1981-01-01

Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
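The event-batching idea (advance a whole batch of particles per step instead of following one history at a time) can be illustrated on a toy problem. This is not the multigroup code described above: it is a made-up 1-D forward-streaming slab with an assumed per-collision absorption probability, chosen only because its answer is known in closed form.

```python
# Vectorized Monte Carlo: every live particle in the batch advances at once.
import numpy as np

rng = np.random.default_rng(4)
n, thickness, p_absorb = 100_000, 5.0, 0.3  # mean-free-path units

x = np.zeros(n)                        # particle positions
alive = np.ones(n, dtype=bool)         # particles still being tracked
transmitted = np.zeros(n, dtype=bool)

while alive.any():
    # Sample a free flight for every live particle in one call.
    x[alive] += rng.exponential(1.0, alive.sum())
    escaped = alive & (x >= thickness)
    transmitted |= escaped
    alive &= ~escaped
    # Particles colliding inside the slab are absorbed with prob p_absorb.
    absorbed = alive & (rng.random(n) < p_absorb)
    alive &= ~absorbed

transmission = transmitted.mean()
# Analytic answer for this toy model: exp(-thickness * p_absorb) ~ 0.223.
```

Each loop iteration is one "event step" for the entire batch, which is the structural change that made the CYBER-205 speedups in the abstract possible.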

12. The impact of TV mass media campaigns on calls to a National Quitline and the use of prescribed nicotine replacement therapy: a structural vector autoregression analysis.

Science.gov (United States)

Haghpanahan, Houra; Mackay, Daniel F; Pell, Jill P; Bell, David; Langley, Tessa; Haw, Sally

2017-07-01

13. Characterization and classification of seven citrus herbs by liquid chromatography-quadrupole time-of-flight mass spectrometry and genetic algorithm optimized support vector machines.

Science.gov (United States)

Duan, Li; Guo, Long; Liu, Ke; Liu, E-Hu; Li, Ping

2014-04-25

Citrus herbs have been widely used in traditional medicine and cuisine in China and other countries since ancient times. However, the authentication and quality control of Citrus herbs has always been a challenging task owing to their similar morphological characteristics and the diversity of the multiple components existing in the complicated matrix. In the present investigation, we developed a novel strategy to characterize and classify seven Citrus herbs based on chromatographic analysis and chemometric methods. Firstly, the chemical constituents in seven Citrus herbs were globally characterized by liquid chromatography combined with quadrupole time-of-flight mass spectrometry (LC-QTOF-MS). Based on their retention time, UV spectra and MS fragmentation behavior, a total of 75 compounds were identified or tentatively characterized in these herbal medicines. Secondly, a segmental monitoring method based on LC-variable wavelength detection was developed for simultaneous quantification of ten marker compounds in these Citrus herbs. Thirdly, based on the contents of the ten analytes, genetic algorithm optimized support vector machines (GA-SVM) was employed to differentiate and classify the 64 samples covering these seven herbs. The obtained classifier showed good prediction performance, and the overall prediction accuracy reached 96.88%. The proposed strategy is expected to provide new insight for the authentication and quality control of traditional herbs. Copyright © 2014 Elsevier B.V. All rights reserved.

14. Multiplex protein pattern unmixing using a non-linear variable-weighted support vector machine as optimized by a particle swarm optimization algorithm.

Science.gov (United States)

Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin

2016-01-15

Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrated robust modeling technique with flexible and rational variable selection. As optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, it makes VW-SVM an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.
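A bare-bones PSO loop wrapped around SVM hyperparameter selection conveys the optimization pattern used here. The variable weighting of VW-SVM is not reproduced; the data are synthetic and the swarm settings (8 particles, 10 iterations, the inertia and acceleration constants) are arbitrary textbook values.

```python
# Particle swarm search over (log10 C, log10 gamma) for an RBF SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=150, n_features=10, random_state=0)

def fitness(p):
    C, gamma = 10.0 ** p
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# 8 particles in log space: C in [1e-2, 1e3], gamma in [1e-4, 1e1].
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, (8, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(10):
    r1, r2 = rng.random((2, 8, 1))
    # Inertia + pull toward personal best + pull toward global best.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

best_accuracy = float(pbest_val.max())
```

PSO needs only fitness evaluations, no gradients, which is why it pairs naturally with cross-validated SVM scores.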

15. Design optimization of tailor-rolled blank thin-walled structures based on ɛ-support vector regression technique and genetic algorithm

Science.gov (United States)

Duan, Libin; Xiao, Ning-cong; Li, Guangyao; Cheng, Aiguo; Chen, Tao

2017-07-01

Tailor-rolled blank thin-walled (TRB-TH) structures have become important vehicle components owing to their advantages of light weight and crashworthiness. The purpose of this article is to provide an efficient lightweight design for improving the energy-absorbing capability of TRB-TH structures under dynamic loading. A finite element (FE) model for TRB-TH structures is established and validated by performing a dynamic axial crash test. Different material properties for individual parts with different thicknesses are considered in the FE model. Then, a multi-objective crashworthiness design of the TRB-TH structure is constructed based on the ɛ-support vector regression (ɛ-SVR) technique and non-dominated sorting genetic algorithm-II. The key parameters (C, ɛ and σ) are optimized to further improve the predictive accuracy of ɛ-SVR under limited sample points. Finally, the technique for order preference by similarity to the ideal solution method is used to rank the solutions in Pareto-optimal frontiers and find the best compromise optima. The results demonstrate that the light weight and crashworthiness performance of the optimized TRB-TH structures are superior to their uniform thickness counterparts. The proposed approach provides useful guidance for designing TRB-TH energy absorbers for vehicle bodies.

16. Sustainability Evaluation of Power Grid Construction Projects Using Improved TOPSIS and Least Square Support Vector Machine with Modified Fly Optimization Algorithm

Directory of Open Access Journals (Sweden)

Dongxiao Niu

2018-01-01

Full Text Available The electric power industry is of great significance in promoting social and economic development and improving people's living standards. Power grid construction is a necessary part of infrastructure construction, whose sustainability plays an important role in economic development, environmental protection and social progress. In order to effectively evaluate the sustainability of power grid construction projects, in this paper, we first identified 17 criteria from four dimensions including economy, technology, society and environment to establish the evaluation criteria system. After that, grey incidence analysis was used to modify the traditional Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), which made it possible to evaluate the sustainability of electric power construction projects based on the visual angle of similarity and nearness. Then, in order to simplify the procedure of expert scoring and computation, on the basis of the evaluation results of the improved TOPSIS, the model using the Modified Fly Optimization Algorithm (MFOA) to optimize the Least Square Support Vector Machine (LSSVM) was established. Finally, a numerical example was given to demonstrate the effectiveness of the proposed model.
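Classic TOPSIS (without the grey-incidence modification this paper adds) reduces to a few array operations. The decision matrix and weights below are invented for illustration; all criteria are treated as benefit-type.

```python
# Classic TOPSIS ranking: normalize, weight, compare to ideal/anti-ideal.
import numpy as np

# Rows = candidate projects, columns = criteria (all benefit-type here).
decision = np.array([[7.0, 9.0, 9.0],
                     [8.0, 7.0, 8.0],
                     [9.0, 6.0, 8.0],
                     [6.0, 7.0, 8.0]])
weights = np.array([0.4, 0.3, 0.3])

# 1. Vector-normalize each criterion column, 2. apply the weights.
norm = decision / np.linalg.norm(decision, axis=0)
weighted = norm * weights

# 3. Ideal and anti-ideal solutions (for benefit criteria: max is ideal).
ideal = weighted.max(axis=0)
anti = weighted.min(axis=0)

# 4. Euclidean distances and relative closeness; higher closeness is better.
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)   # indices of projects, best first
```

Cost-type criteria would flip the ideal/anti-ideal choice per column; the paper's improvement replaces step 4's distance measure with grey incidence degrees.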

17. Ankle replacement

Science.gov (United States)

Ankle arthroplasty - total; Total ankle arthroplasty; Endoprosthetic ankle replacement; Ankle surgery ... Ankle replacement surgery is most often done while you are under general anesthesia. This means you will ...

18. A verified LLL algorithm

NARCIS (Netherlands)

Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

2018-01-01

The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
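The two-dimensional special case of lattice reduction (Lagrange/Gauss reduction) exhibits the core swap-and-size-reduce loop that LLL generalizes to n dimensions with the Lovász condition. The input basis below is an arbitrary small example, not from the paper.

```python
# Lagrange (2-D) lattice basis reduction: repeatedly size-reduce the longer
# vector against the shorter one and swap until no progress is possible.
import numpy as np

def lagrange_reduce(b1, b2):
    b1 = np.array(b1, dtype=np.int64)
    b2 = np.array(b2, dtype=np.int64)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1            # keep b1 the shorter vector
    while True:
        # Size-reduce b2 against b1 (nearest integer to the Gram coefficient).
        m = round(float(b2 @ b1) / float(b1 @ b1))
        b2 = b2 - m * b1
        if b2 @ b2 >= b1 @ b1:
            return b1, b2          # b1 is now a shortest lattice vector
        b1, b2 = b2, b1

short, other = lagrange_reduce([12, 2], [13, 4])
```

For this input the loop yields the reduced basis (1, 2), (9, -4), spanning the same lattice (the determinant, here 22 in absolute value, is preserved); full LLL keeps exactly this invariant while only approximating shortest vectors in higher dimensions.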

19. Advanced method used for hypertension's risk factors stratification: support vector machines and gravitational search algorithm

Directory of Open Access Journals (Sweden)

Alireza Khosravi

2015-12-01

Full Text Available BACKGROUND: The aim of this study is to present an objective method based on support vector machines (SVMs) and a gravitational search algorithm (GSA), which is initially utilized to recognize the pattern among risk factors and hypertension (HTN), in order to stratify and analyze HTN risk factors in an Iranian urban population. METHODS: This community-based and cross-sectional research was designed based on a probabilistic sample of residents of Isfahan, Iran, aged 19 years or over, from 2001 to 2007. One of the household members was randomly selected from different age groups. Selected individuals were invited to a predefined health center to be educated on how to collect a 24-hour urine sample as well as learning about topographic parameters and blood pressure measurement. The data from both the estimated and measured blood pressure [for both systolic blood pressure (SBP) and diastolic blood pressure (DBP)] demonstrated that optimized SVMs have the highest estimation potential. RESULTS: This result was particularly evident when SVM performance was evaluated against regression and generalized linear modeling (GLM) as common methods. Blood pressure risk factor impact analysis shows that age has the highest impact level on SBP while it falls second in the impact level ranking on DBP. The results also showed that body mass index (BMI) falls first in the impact level ranking on DBP while having a lower impact on SBP. CONCLUSION: Our analysis suggests that salt intake could efficiently influence both DBP and SBP with a greater impact level on SBP. Therefore, controlling salt intake may lead not only to control of HTN but also to its prevention.

20. [Application of support vector machine-recursive feature elimination algorithm in Raman spectroscopy for differential diagnosis of benign and malignant breast diseases].

Science.gov (United States)

Zhang, Haipeng; Fu, Tong; Zhang, Zhiru; Fan, Zhimin; Zheng, Chao; Han, Bing

2014-08-01

To explore the value of the support vector machine-recursive feature elimination (SVM-RFE) method in Raman spectroscopy for differential diagnosis of benign and malignant breast diseases. Fresh breast tissue samples of 168 patients (all female; ages 22-75) were obtained by routine surgical resection from May 2011 to May 2012 at the Department of Breast Surgery, the First Hospital of Jilin University. Among them, there were 51 normal tissues, 66 benign and 51 malignant breast lesions. All the specimens were assessed by Raman spectroscopy, and the SVM-RFE algorithm was used to process the data and build the mathematical model. Mahalanobis distance and spectral residuals were used as discriminating criteria to evaluate this data-processing method. 1800 Raman spectra were acquired from the fresh samples of human breast tissues. Based on spectral profiles, the presence of 1078, 1267, 1301, 1437, 1653, and 1743 cm(-1) peaks was identified in the normal tissues, and 1281, 1341, 1381, 1417, 1465, 1530, and 1637 cm(-1) peaks were found in the benign and malignant tissues. The main characteristic peaks differentiating benign and malignant lesions were 1340 and 1480 cm(-1). The accuracy of SVM-RFE in discriminating normal and malignant lesions was 100.0%, while that in the assessment of benign lesions was 93.0%. There are distinct differences among the Raman spectra of normal, benign and malignant breast tissues, and the SVM-RFE method can be used to build a differentiation model of breast lesions.
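Whether the paper's SVM-RFE implementation matches scikit-learn's `RFE` exactly is not stated, but the standard formulation (fit a linear SVM, drop the lowest-weight features, repeat) looks like this on synthetic stand-in "spectra":

```python
# SVM-RFE: recursive feature elimination driven by linear-SVM weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for the spectra: 200 samples, 50 channels,
# only the first 5 of which carry class information (shuffle=False
# keeps the informative features in columns 0..4).
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

# Eliminate 5 features per round until 5 remain, ranked by |SVM weight|.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=5)
selector.fit(X, y)

selected = np.flatnonzero(selector.support_)  # indices of surviving channels
```

In the Raman setting the surviving indices would correspond to diagnostic wavenumber channels such as the 1340 and 1480 cm(-1) peaks reported above.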

1. Optimization of vitamin K antagonist drug dose finding by replacement of the international normalized ratio by a bidirectional factor : validation of a new algorithm

NARCIS (Netherlands)

Beinema, M J; van der Meer, F J M; Brouwers, J R B J; Rosendaal, F R

2016-01-01

UNLABELLED: Essentials We developed a new algorithm to optimize vitamin K antagonist dose finding. Validation was by comparing actual dosing to algorithm predictions. Predicted and actual dosing of well performing centers were highly associated. The method is promising and should be tested in a

2. Exact Solutions for Internuclear Vectors and Backbone Dihedral Angles from NH Residual Dipolar Couplings in Two Media, and their Application in a Systematic Search Algorithm for Determining Protein Backbone Structure

International Nuclear Information System (INIS)

Wang Lincong; Donald, Bruce Randall

2004-01-01

We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics

3. Knee Replacement

Science.gov (United States)

Knee replacement is surgery for people with severe knee damage. Knee replacement can relieve pain and allow you to ... Your doctor may recommend it if you have knee pain and medicine and other treatments are not ...

4. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

Science.gov (United States)

Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

5. From vectors to mnesors

OpenAIRE

Champenois, Gilles

2007-01-01

The mnesor theory is the adaptation of vectors to artificial intelligence. The scalar field is replaced by a lattice. Addition becomes idempotent and multiplication is interpreted as a selection operation. We also show that mnesors can be the foundation for a linear calculus.

6. Capacity of non-invasive hepatic fibrosis algorithms to replace transient elastography to exclude cirrhosis in people with hepatitis C virus infection: A multi-centre observational study.

Science.gov (United States)

Kelly, Melissa Louise; Riordan, Stephen M; Bopage, Rohan; Lloyd, Andrew R; Post, Jeffrey John

2018-01-01

Achievement of the 2030 World Health Organisation (WHO) global hepatitis C virus (HCV) elimination targets will be underpinned by scale-up of testing and use of direct-acting antiviral treatments. In Australia, despite publicly funded testing and treatment, less than 15% of patients were treated in the first year of treatment access, highlighting the need for greater efficiency of health service delivery. To this end, non-invasive fibrosis algorithms were examined to reduce reliance on transient elastography (TE), which is currently utilised for the assessment of cirrhosis in most Australian clinical settings. This retrospective and prospective study, with derivation and validation cohorts, examined consecutive patients in a tertiary referral centre, a sexual health clinic, and a prison-based hepatitis program. The negative predictive value (NPV) of seven non-invasive algorithms was measured using published and newly derived cut-offs. The number of TEs avoided for each algorithm, or combination of algorithms, was determined. The 850 patients included 780 (92%) with HCV mono-infection, and 70 (8%) co-infected with HIV or hepatitis B. The mono-infected cohort included 612 men (79%), with an overall prevalence of cirrhosis of 16% (125/780). An 'APRI' algorithm cut-off of 1.0 had a 94% NPV (95%CI: 91-96%). Newly derived cut-offs of 'APRI' (0.49), 'FIB-4' (0.93) and 'GUCI' (0.5) algorithms each had NPVs of 99% (95%CI: 97-100%), allowing avoidance of TE in 40% (315/780), 40% (310/780) and 40% (298/749) respectively. When used in combination, NPV was retained and TE avoidance reached 54% (405/749), regardless of gender or co-infection. Non-invasive algorithms can reliably exclude cirrhosis in many patients, allowing improved efficiency of HCV assessment services in Australia and worldwide.
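
For reference, the 'APRI' score used in this study is conventionally computed from serum AST and platelet count; a minimal sketch (function names are illustrative, and the 0.49 cut-off is the newly derived one reported above):

```python
def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """AST-to-Platelet Ratio Index: (AST / upper limit of normal) x 100,
    divided by the platelet count in 10^9/L."""
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_10e9_l

def cirrhosis_excluded(score, cutoff=0.49):
    # Below the cut-off the algorithm calls cirrhosis excluded (high NPV);
    # at or above it, transient elastography would still be needed.
    return score < cutoff

# Example: AST 28 IU/L with ULN 40 IU/L and platelets 250 x 10^9/L
score = apri(28, 40, 250)            # 0.28
needs_te = not cirrhosis_excluded(score)
```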

7. Vectorization in quantum chemistry

International Nuclear Information System (INIS)

Saunders, V.R.

1987-01-01

It is argued that the optimal vectorization algorithm for many steps (and sub-steps) in a typical ab initio calculation of molecular electronic structure is quite strongly dependent on the target vector machine. Details such as the availability (or lack) of a given vector construct in the hardware, vector startup times and asymptotic rates must all be considered when selecting the optimal algorithm. Illustrations are drawn from: gaussian integral evaluation, fock matrix construction, 4-index transformation of molecular integrals, direct-CI methods, the matrix multiply operation. A cross comparison of practical implementations on the CDC Cyber 205, the Cray-1S and Cray X-MP machines is presented. To achieve portability while remaining optimal on a wide range of machines it is necessary to code all available algorithms in a machine independent manner, and to select the appropriate algorithm using a procedure which is based on machine dependent parameters. Most such parameters concern the timing of certain vector loop kernels, which can usually be derived from a 'bench-marking' routine executed prior to the calculation proper

8. Vectorization of the KENO-IV code

International Nuclear Information System (INIS)

Asai, K.; Higuchi, K.; Katakura, J.

1986-01-01

The multigroup criticality safety code KENO-IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO-IV on a scalar processor was slower than the original one by a factor of 1.4 because of the overhead introduced by vectorization. After modifications of the algorithms and vectorization techniques, the vectorized version became faster than the original one by a factor of 1.4 on the vector processor. For further speedup of the code, some improvements on compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed

9. Selection vector filter framework

Science.gov (United States)

Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

2003-10-01

We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on the robust order-statistic theory and the minimization of the weighted distance function to other input samples. The proposed method can be designed to perform a variety of filtering operations including previously developed filtering techniques such as vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, availability of the training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
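
The "lowest ranked vector" selection rule above generalizes the vector median, which reduces to a simple selection over the filter window; a minimal sketch with an unweighted Euclidean distance criterion (the framework also admits angular criteria and weight vectors, omitted here):

```python
def vector_median(window):
    # Return the input vector with the smallest sum of Euclidean
    # distances to all other vectors in the filter window.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(window, key=lambda v: sum(dist(v, w) for w in window))

# A single impulsive outlier never wins the selection:
pixels = [(1, 2, 3), (2, 2, 2), (9, 9, 9), (1, 1, 1)]
out = vector_median(pixels)   # (2, 2, 2)
```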

10. Vector analysis

CERN Document Server

Newell, Homer E

2006-01-01

When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

11. About vectors

CERN Document Server

Hoffmann, Banesh

1975-01-01

From his unusual beginning in ""Defining a vector"" to his final comments on ""What then is a vector?"" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p

12. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

Science.gov (United States)

Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

2013-01-01

Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
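
As an illustration only (not the authors' web-based estimator), the interpolation-of-averaged-current-drain idea criticized above amounts to something like the following; every name and number here is hypothetical:

```python
def battery_years_remaining(capacity_mah, used_mah, avg_current_ma):
    # Naive estimate: remaining charge divided by an averaged current
    # drain. Device variation, voltage dependence, impedance fluctuations
    # and self-discharge (see above) make this only a rough approximation.
    hours = (capacity_mah - used_mah) / avg_current_ma
    return hours / (24 * 365)

years = battery_years_remaining(1000, 124, 0.1)   # ~1.0 year
```

The errors listed in the abstract compound with each iteration of such an estimate, which is why the authors pair it with a clinical-symptom algorithm rather than relying on the number alone.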

13. Estimation of Motion Vector Fields

DEFF Research Database (Denmark)

Larsen, Rasmus

1993-01-01

This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample fields by means of stochastic relaxation implemented via the Gibbs sampler.

14. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; "Noise"-Induced Phase-Transitions (NITs) to Accelerate Algorithmics ("NIT-Picking") Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

Science.gov (United States)

Siegel, J.; Siegel, Edward Carl-Ludwig

2011-03-01

Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS ("SON of TRIZ"): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

15. Replacing penalties

Directory of Open Access Journals (Sweden)

Vitaly Stepashin

2017-01-01

Full Text Available UDC 343.24. The subject. The article deals with the problem of the use of "substitute" penalties. The purpose of the article is to identify criminal-law criteria for: selecting the replacement punishment; the proportionality of replacing one punishment with another (the formalization of replacement); and actually increasing the severity of punishment (worsening the legal situation of the convict). Methodology. The author uses the methods of analysis and synthesis, and the formal legal method. Results. Replacing a punishment with a more severe one as a result of malicious evasion from serving the designated penalty requires optimization in the following areas: (1) the selection of a substitute punishment; (2) the proportionality of replacing the punishment being served with another (formalization of replacement); (3) ensuring an actual toughening of the penalty (deterioration of the legal status of the convict). It is important that the first two requirements provide economy of repression in the implementation of the replacement of one form of punishment by another. Rules on the replacement of punishment do not, on their own, have any specifics. However, it is necessary to compare them with the content of the punishment from which the convict maliciously evaded. First, the substitute punishment should entail a more significant range of restrictions and deprivations of certain rights of the convict. Second, the procedural characteristics of the substitute punishment should guarantee implementation of the new measure. With regard to replacement, all forms of punishment are subject to significant limitations in application, which in some cases eliminates the possibility of replacing the sentence from which there has been willful evasion with a stricter measure of state coercion. Also important in this context is the possibility of a sentence of imprisonment as a substitute punishment in cases where the original imposition of that strict measure was excluded. It is noteworthy that the

16. Elementary vectors

CERN Document Server

Wolstenholme, E Œ

1978-01-01

Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

17. Speech Data Compression using Vector Quantization

OpenAIRE

H. B. Kekre; Tanuja K. Sarode

2008-01-01

Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
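
The LBG algorithm named above trains a VQ codebook by splitting and refining codewords; a minimal pure-Python sketch (not the paper's implementation; the KPE and FCG variants differ in how codewords are generated):

```python
def train_codebook(data, n_codes=2, iters=20, eps=1e-3):
    # LBG: start from the global mean, split each codeword by a small
    # perturbation, then refine with Lloyd (k-means style) iterations.
    dim = len(data[0])
    codebook = [[sum(v[d] for v in data) / len(data) for d in range(dim)]]
    while len(codebook) < n_codes:
        codebook = [[x * (1 + s) for x in c] for c in codebook for s in (eps, -eps)]
        for _ in range(iters):
            clusters = [[] for _ in codebook]
            for v in data:
                i = min(range(len(codebook)),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(v, codebook[j])))
                clusters[i].append(v)
            # Move each codeword to its cluster centroid (keep it if empty).
            codebook = [[sum(v[d] for v in cl) / len(cl) for d in range(dim)]
                        if cl else codebook[j]
                        for j, cl in enumerate(clusters)]
    return codebook

# Two well-separated clusters of scalar "frames" yield two codewords:
cb = sorted(c[0] for c in train_codebook([(0.0,), (0.1,), (1.0,), (1.1,)]))
```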

18. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

Directory of Open Access Journals (Sweden)

Lipu Zhang

2012-01-01

Full Text Available This paper describes a new variant of the harmony search algorithm which is inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second best solutions can be well utilized to generate new solutions, following some probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until the near-optimal solution vector is obtained. Extensive computational comparisons are carried out by employing various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive in finding solutions with the state-of-the-art harmony search variants.
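
The generate-then-replace-worst step described above can be sketched as follows (minimization form; the probability rule and perturbation width here are illustrative assumptions, not the paper's exact settings):

```python
import random

def generate(best, second, p=0.7, bw=0.01):
    # Hypothetical probability rule: take each component from the global
    # best (with probability p) or the second best, then perturb slightly.
    return [(best[d] if random.random() < p else second[d])
            + random.uniform(-bw, bw) for d in range(len(best))]

def elite_step(solutions, fitness):
    ranked = sorted(solutions, key=fitness)
    candidate = generate(ranked[0], ranked[1])
    worst = max(range(len(solutions)), key=lambda i: fitness(solutions[i]))
    if fitness(candidate) < fitness(solutions[worst]):
        solutions[worst] = candidate      # replace worst only if improved
    return solutions

# Minimizing the sphere function, the worst member is quickly displaced:
sphere = lambda x: sum(v * v for v in x)
pool = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]]
for _ in range(20):
    pool = elite_step(pool, sphere)
```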

19. Fast Monte Carlo reliability evaluation using support vector machine

International Nuclear Information System (INIS)

Rocco, Claudio M.; Moreno, Jose Ali

2002-01-01

This paper deals with the feasibility of using support vector machine (SVM) to build empirical models for use in reliability evaluation. The approach takes advantage of the speed of SVM in the numerous model calculations typically required to perform a Monte Carlo reliability evaluation. The main idea is to develop an estimation algorithm, by training a model on a restricted data set, and replace system performance evaluation by a simpler calculation, which provides reasonably accurate model outputs. The proposed approach is illustrated by several examples. Excellent system reliability results are obtained by training a SVM with a small amount of information

20. Vector analysis

CERN Document Server

Brand, Louis

2006-01-01

The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

1. Knee Replacement

Science.gov (United States)

... days. Medications prescribed by your doctor should help control pain. During the hospital stay, you'll be encouraged to move your ... exercise your new knee. After you leave the hospital, you'll continue physical ... mobility and a better quality of life. And most knee replacements can be ...

2. Vector velocimeter

DEFF Research Database (Denmark)

2012-01-01

The present invention relates to a compact, reliable and low-cost vector velocimeter for example for determining velocities of particles suspended in a gas or fluid flow, or for determining velocity, displacement, rotation, or vibration of a solid surface, the vector velocimeter comprising a laser...

3. Forecasting Caspian Sea level changes using satellite altimetry data (June 1992-December 2013) based on evolutionary support vector regression algorithms and gene expression programming

Science.gov (United States)

Imani, Moslem; You, Rey-Jer; Kuo, Chung-Yen

2014-10-01

Sea level forecasting at various time intervals is of great importance in water supply management. Evolutionary artificial intelligence (AI) approaches have been accepted as an appropriate tool for modeling complex nonlinear phenomena in water bodies. In this study, we investigated the ability of two AI techniques, support vector machine (SVM), which is mathematically well-founded and provides new insights into function approximation, and gene expression programming (GEP), to forecast Caspian Sea level anomalies using satellite altimetry observations from June 1992 to December 2013. SVM demonstrates the best performance in predicting Caspian Sea level anomalies, given the minimum root mean square error (RMSE = 0.035) and maximum coefficient of determination (R2 = 0.96) during the prediction periods. A comparison between the proposed AI approaches and the cascade correlation neural network (CCNN) model also shows the superiority of the GEP and SVM models over the CCNN.
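
The two skill scores quoted above (RMSE and R2) are the standard definitions; a minimal sketch:

```python
def rmse(y_true, y_pred):
    # Root mean square error of the predictions.
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

With sea level anomalies in metres, RMSE = 0.035 corresponds to a typical prediction error of about 3.5 cm.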

4. Implementation in graphic form of an observability algorithm in energy network using sparse vectors; Implementacao, em ambiente grafico, de um algoritmo de observabilidade em redes de energia utilizando vetores esparsos

Energy Technology Data Exchange (ETDEWEB)

Souza, Claudio Eduardo Scriptori de

1996-02-01

In electrical energy system operating centers, understanding the behavior of the electrical power system has become ever more important. For adequate operation of the system, the state estimation process is very important; however, before performing state estimation one needs to know whether the system is observable, otherwise the estimation will not be possible. The main objective of this work is to develop a software tool that allows one to visualize the whole network, if the network is observable, or the observable islands of the network otherwise. As theoretical background, the theory and algorithm using triangular factorization of the gain matrix, as well as the concepts of the factorization path developed by Bretas et al., were used. Their algorithm was adapted to the Windows graphical environment so that the numerical results of the network observability analysis are shown on the computer screen in graphical form. This algorithm, unlike the one based only on factorization of the gain matrix, is not purely numerical. To implement the algorithm, the Borland C++ compiler for Windows, version 4.0, was used, owing to the facilities it presents for source generation. The results of tests on networks with 6, 14 and 30 buses lead to: (1) the simplification of observability analysis, using sparse vectors and triangular factorization of the gain matrix; (2) similar behavior across the three tested systems, with clear evidence that the routine developed works well for any system, especially for systems with larger numbers of buses and lines; (3) an alternative way of presenting numerical results, using the program developed here, in graphical form. (author)

5. Cloning vector

Science.gov (United States)

Guilfoyle, Richard A.; Smith, Lloyd M.

1994-01-01

A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

6. Cloning vector

Science.gov (United States)

Guilfoyle, R.A.; Smith, L.M.

1994-12-27

A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

7. A 3D-Space Vector Modulation Algorithm for Three Phase Four Wire Neutral Point Clamped Inverter Systems as Power Quality Compensator

Directory of Open Access Journals (Sweden)

Palanisamy Ramasamy

2017-11-01

Full Text Available A Unified Power Quality Conditioner (UPQC) is designed using a Neutral Point Clamped (NPC) multilevel inverter to improve the power quality. When designed for high/medium voltage and power applications, the voltage stress across the switches and the harmonic content in the output voltage are increased. A 3-phase 4-wire NPC inverter system is developed as a power quality conditioner using an effectual three dimensional Space Vector Modulation (3D-SVM) technique. The proposed system behaves like a UPQC with shunt and series active filters under balanced and unbalanced loading conditions. In addition to the improvement of the power quality issues, it also balances the neutral point voltage and the voltage across the capacitors under unbalanced conditions. The hardware and simulation results of the proposed system are compared with 2D-SVM and 3D-SVM. The proposed system is simulated using MATLAB and the hardware is designed using FPGA. From the results it is evident that the effectual 3D-SVM technique gives better performance compared to the other control methods.

8. Replacement rod

International Nuclear Information System (INIS)

Hatfield, S.C.

1989-01-01

This patent describes an elongated replacement rod for use with fuel assemblies of the type having two end fittings connected by guide tubes, with a plurality of rod and guide tube cell defining spacer grids containing rod support features and mixing vanes. The grids are secured to the guide tubes in register between the end fittings at spaced intervals. The fuel rod comprises: an asymmetrically beveled tip; a shank portion having a straight centerline; and a permanently diverging portion between the tip and the shank portion

9. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

Directory of Open Access Journals (Sweden)

Leonas Jasevičius

2011-03-01

Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to be utilized so that the extracted vector data best match perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. Article in Lithuanian

10. Equivalent Vectors

Science.gov (United States)

Levine, Robert

2004-01-01

The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. Students learning about this for the first time in Calculus III are taught that if AxB = AxC, it does not necessarily follow that B = C. This seemed baffling. The…
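
A concrete counterexample makes the point: AxB = AxC exactly when Ax(B-C) = 0, i.e. when B-C is parallel to A (or zero). Sketch:

```python
def cross(a, b):
    # Cross product of two 3-dimensional vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A, B, C = (1, 0, 0), (0, 1, 0), (1, 1, 0)
# B != C, yet both cross products equal (0, 0, 1),
# because B - C = (-1, 0, 0) is parallel to A.
assert cross(A, B) == cross(A, C) == (0, 0, 1)
```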

11. A Novel Neural Network Vector Control for Single-Phase Grid-Connected Converters with L, LC and LCL Filters

Directory of Open Access Journals (Sweden)

Xingang Fu

2016-04-01

Full Text Available This paper investigates a novel recurrent neural network (NN)-based vector control approach for single-phase grid-connected converters (GCCs) with L (inductor), LC (inductor-capacitor) and LCL (inductor-capacitor-inductor) filters and provides a comparison study with the conventional standard vector control method. A single neural network controller replaces two current-loop PI controllers, and the NN training approximates the optimal control for the single-phase GCC system. The Levenberg-Marquardt (LM) algorithm was used to train the NN controller based on the complete system equations without any decoupling policies. The proposed NN approach can solve the decoupling problem associated with the conventional vector control methods for L, LC and LCL-filter-based single-phase GCCs. Both simulation studies and hardware experiments demonstrate that the neural network vector controller achieves much better performance than conventional vector controllers, including faster response speed and lower overshoot. In particular, NN vector control could achieve very good performance at low switching frequency. More importantly, the neural network vector controller is a damping-free controller, whereas damping is generally required by a conventional vector controller for an LCL-filter-based single-phase grid-connected converter; it can therefore overcome the inefficiency caused by damping policies.

12. Successful vectorization - reactor physics Monte Carlo code

International Nuclear Information System (INIS)

Martin, W.R.

1989-01-01

Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)

13. Vector geometry

CERN Document Server

Robinson, Gilbert de B

2011-01-01

This brief undergraduate-level text by a prominent Cambridge-educated mathematician explores the relationship between algebra and geometry. An elementary course in plane geometry is the sole requirement for Gilbert de B. Robinson's text, which is the result of several years of teaching and learning the most effective methods from discussions with students. Topics include lines and planes, determinants and linear equations, matrices, groups and linear transformations, and vectors and vector spaces. Additional subjects range from conics and quadrics to homogeneous coordinates and projective geom

14. Shoulder replacement - discharge

Science.gov (United States)

Total shoulder arthroplasty - discharge; Endoprosthetic shoulder replacement - discharge; Partial shoulder replacement - discharge; Partial shoulder arthroplasty - discharge; Replacement - shoulder - discharge; Arthroplasty - shoulder - discharge

15. VECTOR INTEGRATION

NARCIS (Netherlands)

Thomas, E. G. F.

2012-01-01

This paper deals with the theory of integration of scalar functions with respect to a measure with values in a, not necessarily locally convex, topological vector space. It focuses on the extension of such integrals from bounded measurable functions to the class of integrable functions, proving

16. GPU Accelerated Vector Median Filter

Science.gov (United States)

Aras, Rifat; Shen, Yuzhong

2011-01-01

Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in terms of distances. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.

17. Forecasting of Energy-Related CO2 Emissions in China Based on GM(1,1) and Least Squares Support Vector Machine Optimized by Modified Shuffled Frog Leaping Algorithm for Sustainability

Directory of Open Access Journals (Sweden)

Shuyu Dai

2018-03-01

Full Text Available Presently, China is the largest CO2 emitting country in the world, accounting for 28% of global CO2 emissions. China's CO2 emission reduction has a direct impact on global trends. Therefore, accurate forecasting of CO2 emissions is crucial to the formulation of China's emission reduction policy and to global action on climate change. In order to forecast the CO2 emissions in China accurately, a CO2 emission forecasting model using GM(1,1) (Grey Model) and least squares support vector machine (LSSVM) optimized by the modified shuffled frog leaping algorithm (MSFLA), denoted MSFLA-LSSVM, is put forward in this paper. First of all, considering population, per capita GDP, urbanization rate, industrial structure, energy consumption structure, energy intensity, total coal consumption, carbon emission intensity, total imports and exports and other influencing factors of CO2 emissions, the main driving factors are screened according to the sorting of grey correlation degrees to realize feature dimension reduction. Then, the GM(1,1) model is used to forecast the main influencing factors of CO2 emissions. Finally, taking the forecast values of the CO2 emissions influencing factors as the model input, the MSFLA-LSSVM model is adopted to forecast the CO2 emissions in China from 2018 to 2025.
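
The GM(1,1) stage of the pipeline above is a small, well-defined computation. The following is a textbook-style sketch (not the authors' code; function name and interface are assumptions): accumulate the series, fit the whitened equation by least squares, and forecast with the resulting exponential.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to a positive series x0 and forecast
    n_ahead further values (standard textbook formulation)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[:-1] + x1[1:])           # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, y, rcond=None)   # develop/grey coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # inverse accumulation recovers the original-scale series
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

The fitted curve is exponential, which is why GM(1,1) suits smooth, monotone driver series; the irregular residual structure is what the paper hands off to the optimized LSSVM.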

18. Hip joint replacement

Science.gov (United States)

Hip arthroplasty; Total hip replacement; Hip hemiarthroplasty; Arthritis - hip replacement; Osteoarthritis - hip replacement ... Your hip joint is made up of 2 major parts. One or both parts may be replaced during surgery: ...

19. An introduction to vectors, vector operators and vector analysis

CERN Document Server

Joag, Pramod S

2016-01-01

Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems and structures, and analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar-valued and vector-valued), thus covering both scalar and vector fields as well as vector integration.

20. Virtual Vector Machine for Bayesian Online Classification

OpenAIRE

Minka, Thomas P.; Xiang, Rongjing; Yuan; Qi

2012-01-01

In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows one to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

1. Maxwell's Multipole Vectors and the CMB

OpenAIRE

Weeks, Jeffrey R.

2004-01-01

The recently re-discovered multipole vector approach to understanding the harmonic decomposition of the cosmic microwave background traces its roots to Maxwell's Treatise on Electricity and Magnetism. Taking Maxwell's directional derivative approach as a starting point, the present article develops a fast algorithm for computing multipole vectors, with an exposition that is both simpler and better motivated than in the author's previous work. Tests show the resulting algorithm, coded up as a ...

2. Covariant Lyapunov vectors

International Nuclear Information System (INIS)

Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto

2013-01-01

Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
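
The 'dynamical' algorithm reviewed above builds on a standard forward QR (Benettin-style) iteration, which evolves an orthonormal frame with the tangent map and re-orthonormalizes at every step; the additional backward stage that yields the covariant vectors themselves is omitted here. A minimal sketch of that forward stage for the Hénon map, one of the paper's test systems (parameter values and iteration counts are assumptions):

```python
import numpy as np

def henon(v, a=1.4, b=0.3):
    x, y = v
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(v, a=1.4, b=0.3):
    x, _ = v
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def lyapunov_spectrum(n_steps=10000, n_transient=1000):
    """Forward QR (Benettin) iteration: push an orthonormal frame through
    the tangent map, re-orthonormalize with QR, and accumulate log|R_ii|."""
    v = np.array([0.1, 0.1])
    for _ in range(n_transient):            # settle onto the attractor
        v = henon(v)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(henon_jacobian(v) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        v = henon(v)
    return sums / n_steps
```

The diagonal of R grows exponentially at the rates of the Lyapunov exponents; since |det J| = b at every step, the exponents must sum to ln b, which is a convenient sanity check.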

3. Algorithms for parallel and vector computations

Science.gov (United States)

Ortega, James M.

1995-01-01

This is a final report on work performed under NASA grant NAG-1-1112-FOP during the period March 1990 through February 1995. Four major topics are covered: (1) solution of nonlinear Poisson-type equations; (2) parallel reduced system conjugate gradient method; (3) orderings for conjugate gradient preconditioners; and (4) SOR as a preconditioner.

4. Interior point decoding for linear vector channels

International Nuclear Information System (INIS)

2008-01-01

In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

5. Robust Pseudo-Hierarchical Support Vector Clustering

DEFF Research Database (Denmark)

Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

2007-01-01

Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

6. Interior point decoding for linear vector channels

Energy Technology Data Exchange (ETDEWEB)

Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

2008-01-15

In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

7. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

Directory of Open Access Journals (Sweden)

Hang-cheong Wong

2012-01-01

Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.

8. Vectorization of KENO IV code and an estimate of vector-parallel processing

International Nuclear Information System (INIS)

Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

1986-10-01

The multi-group criticality safety code KENO IV has been vectorized and tested on FACOM VP-100 vector processor. At first the vectorized KENO IV on a scalar processor became slower than the original one by a factor of 1.4 because of the overhead introduced by the vectorization. Making modifications of algorithms and techniques for vectorization, the vectorized version has become faster than the original one by a factor of 1.4 and 3.0 on the vector processor for sample problems of complex and simple geometries, respectively. For further speedup of the code, some improvements on compiler and hardware, especially on addition of Monte Carlo pipelines to the vector processor, are discussed. Finally a pipelined parallel processor system is proposed and its performance is estimated. (author)

9. On the Vectorization of FIR Filterbanks

Directory of Open Access Journals (Sweden)

Barbedo Jayme Garcia Arnal

2007-01-01

Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly explored. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.
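
The replacement of an iterative filtering loop by a matrix operation, the core idea described above, can be illustrated in a few lines (a generic sketch, not the authors' Matlab code; names are invented): the filter taps are placed on the subdiagonals of a lower-triangular matrix, and filtering becomes a single matrix-vector product.

```python
import numpy as np

def fir_matrix(h, n):
    """Build the (n x n) lower-triangular convolution matrix H such that
    H @ x equals the first n samples of x filtered by the FIR taps h."""
    H = np.zeros((n, n))
    for k, hk in enumerate(h):
        H += hk * np.eye(n, k=-k)    # tap h[k] sits on subdiagonal -k
    return H

def fir_loop(h, x):
    """Iterative reference implementation, for comparison."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        for k in range(len(h)):
            if i - k >= 0:
                y[i] += h[k] * x[i - k]
    return y
```

The matrix form trades memory for the ability to hand the whole computation to a vectorized BLAS-style kernel, which is the trade explored in the paper.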

10. On the Vectorization of FIR Filterbanks

Directory of Open Access Journals (Sweden)

Amauri Lopes

2007-01-01

Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly explored. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.

11. Antenna Controller Replacement Software

Science.gov (United States)

Chao, Roger Y.; Morgan, Scott C.; Strain, Martha M.; Rockwell, Stephen T.; Shimizu, Kenneth J.; Tehrani, Barzia J.; Kwok, Jaclyn H.; Tuazon-Wong, Michelle; Valtier, Henry; Nalbandi, Reza;

2010-01-01

The Antenna Controller Replacement (ACR) software accurately points and monitors the Deep Space Network (DSN) 70-m and 34-m high-efficiency (HEF) ground-based antennas that are used to track primarily spacecraft and, periodically, celestial targets. To track a spacecraft, or other targets, the antenna must be accurately pointed at the spacecraft, which can be very far away with very weak signals. ACR's conical scanning capability collects the signal in a circular pattern around the target, calculates the location of the strongest signal, and adjusts the antenna pointing to point directly at the spacecraft. A real-time, closed-loop servo control algorithm performed every 0.02 second allows accurate positioning of the antenna in order to track these distant spacecraft. Additionally, this advanced servo control algorithm provides better antenna pointing performance in windy conditions. The ACR software provides high-level commands that provide a very easy user interface for the DSN operator. The operator only needs to enter two commands to start the antenna and subreflector, and Master Equatorial tracking. The most accurate antenna pointing is accomplished by aligning the antenna to the Master Equatorial, which because of its small size and sheltered location, has the most stable pointing. The antenna has hundreds of digital and analog monitor points. The ACR software provides compact displays to summarize the status of the antenna, subreflector, and the Master Equatorial. The ACR software has two major functions. First, it performs all of the steps required to accurately point the antenna (and subreflector and Master Equatorial) at the spacecraft (or celestial target). This involves controlling the antenna/subreflector/Master-Equatorial hardware, initiating and monitoring the correct sequence of operations, calculating the position of the spacecraft relative to the antenna, executing the real-time servo control algorithm to maintain the correct position, and

12. Transcatheter aortic valve replacement

Science.gov (United States)

... gov/ency/article/007684.htm Transcatheter aortic valve replacement To use the sharing features on this page, please enable JavaScript. Transcatheter aortic valve replacement (TAVR) is surgery to replace the aortic valve. ...

13. Hip Replacement Surgery

Science.gov (United States)

... Outreach Initiative Breadcrumb Home Health Topics English Español Hip Replacement Surgery Basics In-Depth Download Download EPUB ... PDF What is it? Points To Remember About Hip Replacement Surgery Hip replacement surgery removes damaged or ...

14. Nicotine replacement therapy

Science.gov (United States)

Smoking cessation - nicotine replacement; Tobacco - nicotine replacement therapy ... Before you start using a nicotine replacement product, here are some things to know: The more cigarettes you smoke, the higher the dose you may need to ...

15. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

Directory of Open Access Journals (Sweden)

Shuangshuang Chen

2018-02-01

Full Text Available The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, subsequently encoded by the Fisher vector (FV); (ii) for obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) in order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) for reducing the storage and CPU costs of high-dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the dataset STL-10. It shows very promising performance with this simple and efficient framework compared to conventional methods.

16. Vector condensate model of electroweak interactions

International Nuclear Information System (INIS)

Cynolter, G.; Pocsik, G.

1997-01-01

Motivated by the fact that the Higgs is not seen, a new version of the standard model is proposed where the scalar doublet is replaced by a vector doublet and its neutral member forms a nonvanishing condensate. Gauge fields are coupled to the new vector fields B in a gauge invariant way leading to mass terms for the gauge fields by condensation. The model is presented and some implications are discussed. (K.A.)

17. Defining line replaceable units

NARCIS (Netherlands)

Parada Puig, J. E.; Basten, R. J I

2015-01-01

Defective capital assets may be quickly restored to their operational condition by replacing the item that has failed. The item that is replaced is called the Line Replaceable Unit (LRU), and the so-called LRU definition problem is the problem of deciding on which item to replace upon each type of

18. Sorting on STAR. [CDC computer algorithm timing comparison

Science.gov (United States)

Stone, H. S.

1978-01-01

Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
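
Batcher's network is compact enough to sketch, and the property relevant to STAR is visible in the code: the compare-exchange pairs are fixed in advance, independent of the data, so every stage can be issued as vector operations. This is a generic odd-even mergesort for power-of-two lengths (an illustration, not the paper's STAR code):

```python
def batcher_sort(a):
    """Batcher's odd-even merge sort. The compare-exchange pattern is
    data-independent, which is what lets it map onto vector hardware."""
    a = list(a)
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"

    def compare_exchange(i, j):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

    def merge(lo, hi, r):
        # merge the subsequence between lo and hi (inclusive) with stride r
        step = r * 2
        if step < hi - lo:
            merge(lo, hi, step)          # even-indexed elements
            merge(lo + r, hi, step)      # odd-indexed elements
            for i in range(lo + r, hi - r, step):
                compare_exchange(i, i + r)
        else:
            compare_exchange(lo, lo + r)

    def sort(lo, hi):
        if hi - lo >= 1:
            mid = lo + (hi - lo) // 2
            sort(lo, mid)
            sort(mid + 1, hi)
            merge(lo, hi, 1)

    sort(0, n - 1)
    return a
```

All compare-exchanges inside one `merge` level touch disjoint index pairs, so on a vector machine they execute as a single batch, which is how the N(log N)^2 network stays competitive in practice.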

19. A study of biorthogonal multiple vector-valued wavelets

International Nuclear Information System (INIS)

Han Jincang; Cheng Zhengxing; Chen Qingjiang

2009-01-01

The notion of vector-valued multiresolution analysis is introduced, along with the concept of biorthogonal multiple vector-valued wavelets, which are wavelets for vector fields. It is proved that, as in the scalar and multiwavelet cases, the existence of a pair of biorthogonal multiple vector-valued scaling functions guarantees the existence of a pair of biorthogonal multiple vector-valued wavelet functions. An algorithm for constructing a class of compactly supported biorthogonal multiple vector-valued wavelets is presented. Their properties are investigated by means of operator theory, algebra theory and time-frequency analysis methods. Several biorthogonality formulas regarding these wavelet packets are obtained.

20. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

Science.gov (United States)

Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

2017-09-01

This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, gyro, magnetometer, sun sensor and star tracker measurements are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, an inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by an inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy. Therefore, a calibration algorithm is utilized for estimation of the main gyro parameters.
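
The matrix sizes mentioned in the abstract can be made concrete with a small sketch (the linear measurement models below are hypothetical and this is not the paper's filter): stacking n 3-vector measurements forces one 3n×3n inverse, while folding them in one at a time needs only 3×3 inverses, and for linear models both routes give the same posterior.

```python
import numpy as np

def block_diag(mats):
    """Assemble a block-diagonal matrix from a list of square blocks."""
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    i = 0
    for m in mats:
        k = m.shape[0]
        out[i:i + k, i:i + k] = m
        i += k
    return out

def batch_update(x, P, Hs, Rs, zs):
    """Standard Kalman update with all measurements stacked: inverts 3n x 3n."""
    H, R, z = np.vstack(Hs), block_diag(Rs), np.concatenate(zs)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def sequential_update(x, P, Hs, Rs, zs):
    """Murrell-style update: one measurement vector at a time, 3 x 3 inverses only."""
    for H, R, z in zip(Hs, Rs, zs):
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x, P = x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The sequential route also degrades gracefully: a single faulty sensor can be skipped without rebuilding the stacked matrices.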

1. Horizontal vectorization of electron repulsion integrals.

Science.gov (United States)

Pritchard, Benjamin P; Chow, Edmond

2016-10-30

We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (l_A l_B | l_C l_D) quartets when l_D = 0 or l_B = l_D = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed. © 2016 Wiley Periodicals, Inc.

2. Ankle replacement - discharge

Science.gov (United States)

... total - discharge; Total ankle arthroplasty - discharge; Endoprosthetic ankle replacement - discharge; Osteoarthritis - ankle ... You had an ankle replacement. Your surgeon removed and reshaped ... an artificial ankle joint. You received pain medicine and were ...

3. Artificial Disc Replacement

Science.gov (United States)

... Spondylolisthesis BLOG FIND A SPECIALIST Treatments Artificial Disc Replacement (ADR) Patient Education Committee Jamie Baisden The disc ... Disc An artificial disc (also called a disc replacement, disc prosthesis or spine arthroplasty device) is a ...

4. Partial knee replacement - slideshow

Science.gov (United States)

... page: //medlineplus.gov/ency/presentations/100225.htm Partial knee replacement - series—Normal anatomy To use the sharing ... A.M. Editorial team. Related MedlinePlus Health Topics Knee Replacement A.D.A.M., Inc. is accredited ...

5. Flued head replacement

International Nuclear Information System (INIS)

Smetters, J.L.

1987-01-01

This paper discusses flued head replacement options. Section 2 discusses complete flued head replacement with a design that eliminates the inaccessible welds. Section 3 discusses alternate flued head support designs that can drastically reduce flued head installation costs. Section 4 describes partial flued head replacement designs. Finally, Section 5 discusses flued head analysis methods. (orig./GL)

6. Capital Equipment Replacement Decisions

OpenAIRE

Batterham, Robert L.; Fraser, K.I.

1995-01-01

This paper reviews the literature on the optimal replacement of capital equipment, especially farm machinery. It also considers the influence of taxation and capital rationing on replacement decisions. It concludes that special taxation provisions such as accelerated depreciation and investment allowances are unlikely to greatly influence farmers' capital equipment replacement decisions in Australia.

7. Implementing Replacement Cost Accounting

Science.gov (United States)

1976-12-01

Implementing Replacement Cost Accounting. Thesis by John Ross Clickener, Naval Postgraduate School, Monterey, California. Available from the NPS Archive (Calhoun): http://hdl.handle.net/10945/17810

8. Vector Boson Scattering at High Mass

CERN Document Server

The ATLAS collaboration

2009-01-01

In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

9. Quick fuzzy backpropagation algorithm.

Science.gov (United States)

Nikov, A; Stoeva, S

2001-03-01

A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.

10. Vector-Quantization using Information Theoretic Concepts

DEFF Research Database (Denmark)

Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

2005-01-01

The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before-mentioned can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact...
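
The LBG algorithm used as a baseline above is compact enough to sketch. This is a generic splitting-plus-Lloyd version (details such as the split perturbation and the power-of-two codebook growth are assumptions, not taken from the paper):

```python
import numpy as np

def lbg(data, n_codewords, n_iter=20, eps=1e-3):
    """Linde-Buzo-Gray vector quantization: start from the global mean,
    then repeatedly split every codeword in two and refine the codebook
    with Lloyd iterations (nearest-codeword assignment + centroid update).
    n_codewords is assumed to be a power of two."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < n_codewords:
        # split: perturb each codeword into a close pair
        codebook = np.concatenate([codebook + eps, codebook - eps])
        for _ in range(n_iter):
            # assign each data vector to its nearest codeword
            d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=-1)
            labels = d.argmin(axis=1)
            # move each codeword to the centroid of its cell
            for k in range(len(codebook)):
                members = data[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```

The assignment and centroid steps are exactly the two forces that the paper's potential-field view reinterprets physically.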

11. Algorithming the Algorithm

DEFF Research Database (Denmark)

Mahnke, Martina; Uprichard, Emma

2014-01-01

Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...

12. Algorithms for parallel computers

International Nuclear Information System (INIS)

Churchhouse, R.F.

1985-01-01

Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions are considered in these lectures, illustrated by examples. (orig.)

13. Vector regression introduced

Directory of Open Access Journals (Sweden)

Mok Tik

2014-06-01

Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
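
The core device of the paper, encoding 2-D vector observations as complex numbers so that regression coefficients are themselves vectors, can be illustrated with a minimal simulated example (the data, coefficient values and model below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# independent vector variable, encoded as complex numbers x + iy
u = rng.normal(size=n) + 1j * rng.normal(size=n)

# true vector coefficient: scale by 2 and rotate by 45 degrees
beta_true = 2.0 * np.exp(1j * np.pi / 4)
intercept = 1.0 - 0.5j
noise = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = intercept + beta_true * u + noise

# ordinary least squares with a complex design matrix recovers the
# vector-valued coefficients directly
X = np.column_stack([np.ones(n, dtype=complex), u])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Note that a single complex coefficient expresses only a rotation-plus-scaling of the input vector; the paper's isomorphism to a real vector regression model is what handles the general case.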

14. Great Ellipse Route Planning Based on Space Vector

Directory of Open Access Journals (Sweden)

LIU Wenchao

2015-07-01

Full Text Available Aiming at the navigation error caused by the lack of a unified earth model, with great circle route planning based on a sphere model while modern navigation equipment uses an ellipsoid model, a method of great ellipse route planning based on space vectors is studied. By using the space vector algebra method, the vertex of the great ellipse is solved directly, and a description of the great ellipse based on its major-axis vector and minor-axis vector is presented. Calculation formulas for great ellipse azimuth and distance are then deduced using the two basic vectors. Finally, algorithms for great ellipse route planning are studied, especially an equal-distance route planning algorithm based on the Newton-Raphson (N-R) method. Comparative examples show that the difference in route planning between the great circle and the great ellipse is significant; using the great ellipse route planning algorithms can eliminate the navigation error caused by great circle route planning and effectively improve the accuracy of navigation calculation.
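
The spherical special case shows the flavor of the vector approach used above: distances come from cross and dot products of geocentric position vectors rather than from spherical trigonometry (the paper's great-ellipse case replaces the sphere with major-axis and minor-axis vectors). A sketch, with the radius value an assumption:

```python
import numpy as np

def to_unit_vector(lat_deg, lon_deg):
    """Geocentric unit vector for a point on a sphere."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def great_circle_distance(p1, p2, radius_km=6371.0):
    """Great-circle distance via the central angle between position
    vectors, using atan2 of cross and dot products for stability."""
    a, b = to_unit_vector(*p1), to_unit_vector(*p2)
    angle = np.arctan2(np.linalg.norm(np.cross(a, b)), np.dot(a, b))
    return radius_km * angle
```

The atan2 form avoids the loss of precision that the arccos-of-dot-product formula suffers for nearly coincident or nearly antipodal points.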

15. Automatic inspection of textured surfaces by support vector machines

Science.gov (United States)

Jahanbin, Sina; Bovik, Alan C.; Pérez, Eduardo; Nair, Dinesh

2009-08-01

Automatic inspection of manufactured products with natural looking textures is a challenging task. Products such as tiles, textiles, leather, and lumber project image textures that cannot be modeled as periodic or otherwise regular; therefore, a stochastic modeling of the local intensity distribution is required. An inspection system to replace human inspectors should be flexible in detecting flaws such as scratches, cracks, and stains occurring in various shapes and sizes that have never been seen before. A computer vision algorithm is proposed in this paper that extracts local statistical features from grey-level texture images decomposed with wavelet frames into subbands of various orientations and scales. The local features extracted are second order statistics derived from grey-level co-occurrence matrices. Subsequently, a support vector machine (SVM) classifier is trained to learn a general description of normal texture from defect-free samples. This algorithm is implemented in LabVIEW and is capable of processing natural texture images in real-time.
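The second-order statistics come from grey-level co-occurrence matrices (GLCMs). A minimal sketch of a horizontal-offset GLCM and two classic features derived from it (contrast and energy), in plain NumPy rather than the authors' LabVIEW implementation:

```python
import numpy as np

def glcm(img, levels):
    """Normalized grey-level co-occurrence matrix for offset (dx=1, dy=0)."""
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (left, right), 1)   # count each horizontal pixel pair
    return m / m.sum()

def contrast(p):
    # Weighted by squared grey-level difference: high for busy textures.
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    # Sum of squared probabilities: high for uniform/orderly textures.
    return float((p ** 2).sum())

# Tiny 4-level example image.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
```

Feature vectors like `(contrast, energy, ...)` computed per wavelet subband would then be fed to a one-class SVM trained on defect-free samples; that training stage is omitted here.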

16. Aeronautical Information System Replacement -

Data.gov (United States)

Department of Transportation — Aeronautical Information System Replacement is a web-enabled, automation means for the collection and distribution of Service B messages, weather information, flight...

17. Infinite ensemble of support vector machines for prediction of ...

African Journals Online (AJOL)

user

the support vector machines (SVMs), a machine learning algorithm used ... work designs so that specific, quantitative workplace assessments can be made ... with SVMs can be obtained by embedding the base learners (hypothesis) into a.

18. Vector and parallel processors in computational science

International Nuclear Information System (INIS)

Duff, I.S.; Reid, J.K.

1985-01-01

These proceedings contain the articles presented at the named conference. These concern hardware and software for vector and parallel processors, numerical methods and algorithms for the computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

19. Vector and parallel processors in computational science

International Nuclear Information System (INIS)

Duff, I.S.; Reid, J.K.

1985-01-01

This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

20. Null vectors in superconformal quantum field theory

International Nuclear Information System (INIS)

Huang Chaoshang

1993-01-01

The superspace formulation of the N=1 superconformal field theory and superconformal Ward identities are used to give a precise definition of fusion. Using the fusion procedure, superconformally covariant differential equations are derived and consequently a complete and straightforward algorithm for finding null vectors in Verma modules of the Neveu-Schwarz algebra is given. (orig.)

1. VectorBase

Data.gov (United States)

U.S. Department of Health & Human Services — VectorBase is a Bioinformatics Resource Center for invertebrate vectors. It is one of four Bioinformatics Resource Centers funded by NIAID to provide web-based...

2. Isomorphism Theorem on Vector Spaces over a Ring

Directory of Open Access Journals (Sweden)

Futa Yuichi

2017-10-01

Full Text Available In this article, we formalize in the Mizar system [1, 4] some properties of vector spaces over a ring. We formally prove the first isomorphism theorem of vector spaces over a ring. We also formalize the product space of vector spaces. ℤ-modules are useful for lattice problems such as the LLL (Lenstra, Lenstra and Lovász [5]) basis reduction algorithm and cryptographic systems [6, 2].

3. Generalization of concurrence vectors

International Nuclear Information System (INIS)

Yu Changshui; Song Heshan

2004-01-01

In this Letter, based on the generalization of concurrence vectors for bipartite pure states, obtained by employing the tensor product of generators of the corresponding rotation groups, we generalize concurrence vectors to the case of mixed states. A new criterion for the separability of multipartite pure states is given, for which we define a concurrence vector; we then generalize this vector to the case of multipartite mixed states and obtain a good measure of free entanglement.

4. Radiation Source Replacement Workshop

Energy Technology Data Exchange (ETDEWEB)

Griffin, Jeffrey W.; Moran, Traci L.; Bond, Leonard J.

2010-12-01

This report summarizes a Radiation Source Replacement Workshop held in Houston, Texas, on October 27-28, 2010, which provided a forum for industry and researchers to exchange information and to discuss issues relating to the replacement of AmBe and, potentially, other isotope sources used in well logging.

5. Convexity and Marginal Vectors

NARCIS (Netherlands)

van Velzen, S.; Hamers, H.J.M.; Norde, H.W.

2002-01-01

In this paper we construct sets of marginal vectors of a TU game with the property that if the marginal vectors from these sets are core elements, then the game is convex. This approach leads to new upper bounds on the number of marginal vectors needed to characterize convexity. Another result is that...

6. Custodial vector model

DEFF Research Database (Denmark)

Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan

2015-01-01

We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with a $SU(2)_L\\times SU(2)_R$ spectral global symmetry. This symmetry partially protects the electroweak S-parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum...

7. Pattern recognition with vector hits

International Nuclear Information System (INIS)

Frühwirth, R

2012-01-01

Trackers at the future high-luminosity LHC, designed to have triggering capability, will feature layers of stacked modules with a small stack separation. This will allow the reconstruction of track stubs or vector hits with position and direction information, but lacking precise curvature information. This opens up new possibilities for track finding, online and offline. Two track finding methods, the Kalman filter and the convergent Hough transform are studied in this context. Results from a simplified fast simulation are presented. It is shown that the performance of the methods depends to a large extent on the size of the stack separation. We conclude that the detector design and the choice of the track finding algorithm(s) are strongly coupled and should proceed conjointly.

8. Vectorization of Monte Carlo particle transport

International Nuclear Information System (INIS)

Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.

1989-01-01

This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for the Cyber 205/ETA-10 architectures, and about nine for the CRAY X-MP/Y-MP architectures, are observed. The best single-processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP.
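The speedup model referred to is a variant of Amdahl's Law. As a generic illustration (the paper's actual modification accounts for extra data motion in the vector code; the flat `overhead` term below is a hypothetical stand-in for it):

```python
def amdahl_speedup(f_vec, v, overhead=0.0):
    """Speedup when a fraction f_vec of the work runs at vector speed v.

    `overhead` models extra data motion introduced by vectorization,
    expressed as a fraction of the original scalar run time (an
    assumption of this sketch, not the paper's exact formulation).
    """
    return 1.0 / ((1.0 - f_vec) + f_vec / v + overhead)

# 95% vectorized code at a 20x vector/scalar rate, 1% data-motion overhead:
s = amdahl_speedup(0.95, 20.0, 0.01)
```

Even a small scalar residue or data-motion penalty caps the achievable speedup well below the raw vector rate, which is why the observed factors of nine to eighteen are plausible on machines whose vector units are far faster than that.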

9. Introduction to Vector Field Visualization

Science.gov (United States)

Kao, David; Shen, Han-Wei

2010-01-01

Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces, civil engineering and geomechanics of roads and bridges, and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based methods, deformation-based methods, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from some case studies will be used as part of the motivation.
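Particle integration, the first algorithm mentioned, advects massless particles through the vector field. A minimal fourth-order Runge-Kutta tracer for a steady 2-D field (the rotational field and step size are illustrative choices, not from the tutorial):

```python
import numpy as np

def velocity(p):
    # Illustrative steady 2-D field: rigid rotation about the origin.
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h):
    # Classic fourth-order Runge-Kutta integration step.
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trace(seed, h=0.01, steps=628):
    # Integrate a particle path (a streamline, since the field is steady).
    path = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        path.append(rk4_step(path[-1], h))
    return np.array(path)

# A particle seeded at (1, 0) should follow the unit circle.
path = trace([1.0, 0.0])
```

Time-dependent particle tracking replaces `velocity(p)` with `velocity(p, t)` and interpolates between stored time steps; seeding strategies then decide where to place the initial points.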

10. Sound algorithms

OpenAIRE

De Götzen , Amalia; Mion , Luca; Tache , Olivier

2007-01-01

International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

11. Genetic algorithms

Science.gov (United States)

Wang, Lui; Bayer, Steven E.

1991-01-01

Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
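The basic loop of selection, crossover, and mutation can be sketched in a few lines. This is a generic textbook real-coded GA, not the project's software tool; all parameters are illustrative:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=50, gens=100,
                      mut_rate=0.1, mut_scale=0.1, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. A generic sketch, not a production implementation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            # "Survival of the fittest": keep the better of two random picks.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = w * p1 + (1 - w) * p2            # blend crossover
            if rng.random() < mut_rate:              # Gaussian mutation
                child += rng.gauss(0, mut_scale * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)

# Maximize a simple unimodal objective with optimum at x = 3.
best = genetic_algorithm(lambda x: -(x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

Real applications encode candidate solutions as strings or vectors and run many such populations in parallel, which is the "highly parallel" property the abstract mentions.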

12. Rotations with Rodrigues' vector

International Nuclear Information System (INIS)

Pina, E

2011-01-01

The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. A fundamental matrix appears, which is used to express the components of the angular velocity, the rotation matrix and the angular momentum vector. The Hamiltonian formalism of rotational dynamics in terms of this vector uses the same matrix. The quantization of the rotational dynamics is performed with simple rules if one uses Rodrigues' vector and similar formal expressions for the quantum operators that mimic the classical Hamiltonian dynamics.
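One standard form of the connection between Rodrigues' vector and the rotation matrix is the Gibbs-vector identity, with b = tan(θ/2)·n for a rotation by angle θ about unit axis n. A sketch (this is the textbook identity, not necessarily the exact parametrization used in the paper):

```python
import numpy as np

def rotation_from_rodrigues(b):
    """Rotation matrix from Rodrigues' (Gibbs) vector b = tan(theta/2) * n.

    Uses the standard identity R = I + 2/(1 + b.b) * (B + B @ B),
    where B is the skew-symmetric cross-product matrix of b.
    """
    b = np.asarray(b, dtype=float)
    B = np.array([[0.0, -b[2], b[1]],
                  [b[2], 0.0, -b[0]],
                  [-b[1], b[0], 0.0]])
    return np.eye(3) + (2.0 / (1.0 + b @ b)) * (B + B @ B)

# 90-degree rotation about z: b = tan(45 deg) * e_z = (0, 0, 1).
R = rotation_from_rodrigues([0.0, 0.0, 1.0])
```

The resulting matrix is orthogonal with unit determinant, and for the example it maps the x axis onto the y axis, as a quarter turn about z should.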

13. Could wind replace nuclear?

International Nuclear Information System (INIS)

2017-01-01

This article aims at assessing the situation produced by a total replacement of nuclear energy by wind energy, while facing consumption demand at any moment, notably in December. The authors indicate the evolution of the French energy mix during December 2016, and the evolution of the ratio between wind energy production and the sum of nuclear and wind energy production during the same month, and then briefly give some elements regarding the investments in wind energy that would be necessary to wholly replace nuclear energy. According to them, such a replacement would be ruinous.

14. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; "Noise"-Induced Phase-Transitions (NITs) to Accelerate Algorithmics ("NIT-Picking") REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

Science.gov (United States)

Young, Frederic; Siegel, Edward

Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation (97)] algorithmic C-C: "NIT-picking" (!!!), to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science"/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES (!!!), ONLY IMPEDE latter-days new-insights!!!

15. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

Science.gov (United States)

Pavlov, V. M.

2017-07-01

The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. The differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the roots of order 2.

16. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

Science.gov (United States)

Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

1996-01-01

We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

17. Higher-order force gradient symplectic algorithms

Science.gov (United States)

Chin, Siu A.; Kidwell, Donald W.

2000-12-01

We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are better by factors of approximately 10^3, 10^4, 10^4, and 10^5, respectively.
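The baseline Forest-Ruth scheme is a fourth-order composition of drift-kick leapfrog substeps with coefficient θ = 1/(2 − 2^(1/3)). A sketch for a separable Hamiltonian, using a harmonic oscillator as a stand-in test problem (the paper's benchmark is a highly eccentric Kepler orbit):

```python
import numpy as np

def forest_ruth_step(q, p, accel, h):
    """One step of the 4th-order Forest-Ruth symplectic integrator
    (unit mass assumed), as alternating drift and kick substeps."""
    g = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    for c_drift, c_kick in [(g / 2, g), ((1 - g) / 2, 1 - 2 * g),
                            ((1 - g) / 2, g), (g / 2, 0.0)]:
        q = q + c_drift * h * p          # drift: advance position
        p = p + c_kick * h * accel(q)    # kick: advance momentum
    return q, p

# Harmonic oscillator: a(q) = -q, energy E = (p^2 + q^2) / 2.
q, p = 1.0, 0.0
e0 = 0.5 * (p * p + q * q)
for _ in range(10000):
    q, p = forest_ruth_step(q, p, lambda x: -x, 0.01)
```

Being symplectic, the scheme shows a bounded, oscillating energy error rather than secular drift, which is the behaviour the step-size independent error functions in the paper quantify.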

18. Slab replacement maturity guidelines.

Science.gov (United States)

2014-04-01

This study investigated the use of maturity method to determine early age strength of concrete in slab : replacement application. Specific objectives were (1) to evaluate effects of various factors on the compressive : maturity-strength relationship ...

19. Partial knee replacement

Science.gov (United States)

... good range of motion in your knee. The ligaments in your knee are stable. However, most people with knee arthritis have a surgery called a total knee arthroplasty (TKA). Knee replacement is most often done in people age 60 ...

20. Carbohydrates as Fat Replacers.

Science.gov (United States)

Peng, Xingyun; Yao, Yuan

2017-02-28

The overconsumption of dietary fat contributes to various chronic diseases, which encourages attempts to develop and consume low-fat foods. Simple fat reduction causes quality losses that impede the acceptance of foods. Fat replacers are utilized to minimize the quality deterioration after fat reduction or removal to achieve low-calorie, low-fat claims. In this review, the forms of fats and their functions in contributing to food textural and sensory qualities are discussed in various food systems. The connections between fat reduction and quality loss are described in order to clarify the rationales of fat replacement. Carbohydrate fat replacers usually have low calorie density and provide gelling, thickening, stabilizing, and other texture-modifying properties. In this review, carbohydrates, including starches, maltodextrins, polydextrose, gums, and fibers, are discussed with regard to their interactions with other components in foods as well as their performances as fat replacers in various systems.

1. Hip joint replacement - slideshow

Science.gov (United States)

... this page: //medlineplus.gov/ency/presentations/100006.htm Hip joint replacement - series—Normal anatomy To use the ... to slide 5 out of 5 Overview The hip joint is made up of two major parts: ...

2. Tool Inventory and Replacement

Science.gov (United States)

Bear, W. Forrest

1976-01-01

Vocational agriculture teachers are encouraged to evaluate curriculum offerings, the new trends in business and industry, and develop a master tool purchase and replacement plan over a 3- to 5-year period. (HD)

3. Knee joint replacement

Science.gov (United States)

... to make everyday tasks easier. Practice using a cane, walker , crutches , or a wheelchair correctly. On the ... ask your doctor Knee joint replacement - discharge Preventing falls Preventing falls - what to ask your doctor Surgical ...

4. Product Platform Replacements

DEFF Research Database (Denmark)

Sköld, Martin; Karlsson, Christer

2012-01-01

To shed light on this unexplored and growing managerial concern, the purpose of this explorative study is to identify operational challenges to management when product platforms are replaced. Design/methodology/approach – The study uses a longitudinal field-study approach. Two companies, Gamma and Omega... replacement was chosen in each company. Findings – The study shows that platform replacements primarily challenge managers' existing knowledge about platform architectures. A distinction can be made between "width" and "height" in platform replacements, and it is crucial that managers observe this in order... to challenge their existing knowledge about platform architectures. Issues on technologies, architectures, components and processes as well as on segments, applications and functions are identified. Practical implications – Practical implications are summarized and discussed in relation to a framework...

5. The replacement research reactor

International Nuclear Information System (INIS)

Cameron, R.

1999-01-01

As a consequence of the government decision in September 1997, ANSTO established a replacement research reactor project to manage the procurement of the replacement reactor through the necessary approval, tendering and contract management stages. This paper provides an update on the status of the project, including the completion of the Environmental Impact Statement and the prequalification and Public Works Committee processes. The aims of the project, the management organisation, the reactor type and the expected capabilities are also described.

6. Document Organization Using Kohonen's Algorithm.

Science.gov (United States)

Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

2002-01-01

Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…
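Kohonen's algorithm trains a self-organizing map (SOM) whose units arrange input vectors topologically: similar documents end up on nearby units. A minimal 1-D SOM sketch on toy vectors (illustrative dimensions and parameters, not the LISA setup used in the study):

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Kohonen's algorithm: pull the best-matching unit (and its grid
    neighbours) toward each sample, shrinking the learning rate and
    neighbourhood radius over time."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)
        sigma = max(sigma0 * (1 - frac), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu                  # distance on the grid
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighbourhood kernel
            w += lr * h[:, None] * (x - w)
    return w

# Two well-separated toy "topic" clusters of document vectors in 3-D.
rng = np.random.default_rng(1)
docs = np.vstack([rng.normal(0.1, 0.02, (20, 3)),
                  rng.normal(0.9, 0.02, (20, 3))])
weights = train_som(docs)
```

After training, documents from the two clusters map to disjoint regions of the unit grid, which is the topological organization the abstract refers to.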

7. Algorithmic cryptanalysis

CERN Document Server

Joux, Antoine

2009-01-01

Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

8. Line Width Recovery after Vectorization of Engineering Drawings

Directory of Open Access Journals (Sweden)

Gramblička Matúš

2016-12-01

Full Text Available Vectorization is the process of converting a raster image representation into a vector representation. Contemporary commercial vectorization software applications do not provide sufficiently high-quality outputs for images such as mechanical engineering drawings. Line width preservation is one of the problems. Some applications need to know the line width after vectorization, because this line attribute carries important semantic information for subsequent 3D model generation. This article describes an algorithm that is able to recover the width of individual lines in vectorized engineering drawings. Two approaches are proposed: one examines the line width at three points, whereas the second uses a variable number of points depending on the line length. The algorithm is tested on real mechanical engineering drawings.
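The three-point approach can be sketched as measuring the stroke thickness of the raster image at sample points along the vectorized line, scanning perpendicular to the line direction. This is a simplified illustration of the idea, not the authors' algorithm:

```python
import numpy as np

def width_at(img, pt, normal, max_r=20):
    """Stroke width at `pt`: count ink pixels along +/- the unit normal."""
    w = 0
    for sgn in (1.0, -1.0):
        for r in range(1, max_r):
            y, x = np.round(pt + sgn * r * normal).astype(int)
            inside = 0 <= y < img.shape[0] and 0 <= x < img.shape[1]
            if not inside or img[y, x] == 0:
                break
            w += 1
    return w + 1  # include the centre pixel itself

def line_width(img, p0, p1, samples=3):
    """Median of width measurements at `samples` points along the line."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal (y, x order)
    ts = np.linspace(0.2, 0.8, samples)
    return float(np.median([width_at(img, p0 + t * d, n) for t in ts]))

# Synthetic raster: a horizontal line 5 pixels thick.
img = np.zeros((40, 60), dtype=np.uint8)
img[18:23, 5:55] = 1
w = line_width(img, (20, 8), (20, 52))
```

Taking the median over the sample points makes the estimate robust to local defects such as junctions crossing the line; the variable-point variant simply scales `samples` with the line length.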

9. Supergravity inspired vector curvaton

International Nuclear Information System (INIS)

Dimopoulos, Konstantinos

2007-01-01

It is investigated whether a massive Abelian vector field, whose gauge kinetic function is growing during inflation, can be responsible for the generation of the curvature perturbation in the Universe. Particle production is studied and it is shown that the vector field can obtain a scale-invariant superhorizon spectrum of perturbations with a reasonable choice of kinetic function. After inflation the vector field begins coherent oscillations, during which it corresponds to pressureless isotropic matter. When the vector field dominates the Universe, its perturbations give rise to the observed curvature perturbation following the curvaton scenario. It is found that this is possible if, after the end of inflation, the mass of the vector field increases at a phase transition at temperature of order 1 TeV or lower. Inhomogeneous reheating, whereby the vector field modulates the decay rate of the inflaton, is also studied

10. Custodial vector model

Science.gov (United States)

Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan; Frandsen, Mads T.; Hapola, Tuomas; Sannino, Francesco

2015-07-01

We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with a S U (2 )L×S U (2 )R spectral global symmetry. This symmetry partially protects the electroweak S parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum and interactions with the standard model fields lead to distinct signatures at the LHC in the diboson, dilepton, and associated Higgs channels.

11. Vector Differential Calculus

OpenAIRE

HITZER, Eckhard MS

2002-01-01

This paper treats the fundamentals of the vector differential calculus part of universal geometric calculus. Geometric calculus simplifies and unifies the structure and notation of mathematics for all of science and engineering, and for technological applications. In order to make the treatment self-contained, I first compile all important geometric algebra relationships, which are necessary for vector differential calculus. Then differentiation by vectors is introduced and a host of major ve...

12. Implicit Real Vector Automata

Directory of Open Access Journals (Sweden)

Jean-François Degbomont

2010-10-01

Full Text Available This paper addresses the symbolic representation of non-convex real polyhedra, i.e., sets of real vectors satisfying arbitrary Boolean combinations of linear constraints. We develop an original data structure for representing such sets, based on an implicit and concise encoding of a known structure, the Real Vector Automaton. The resulting formalism provides a canonical representation of polyhedra, is closed under Boolean operators, and admits an efficient decision procedure for testing the membership of a vector.

13. Vectors and their applications

CERN Document Server

Pettofrezzo, Anthony J

2005-01-01

Geared toward undergraduate students, this text illustrates the use of vectors as a mathematical tool in plane synthetic geometry, plane and spherical trigonometry, and analytic geometry of two- and three-dimensional space. Its rigorous development includes a complete treatment of the algebra of vectors in the first two chapters.Among the text's outstanding features are numbered definitions and theorems in the development of vector algebra, which appear in italics for easy reference. Most of the theorems include proofs, and coordinate position vectors receive an in-depth treatment. Key concept

14. Symbolic computer vector analysis

Science.gov (United States)

Stoutemyer, D. R.

1977-01-01

A MACSYMA program is described which performs symbolic vector algebra and vector calculus. The program can combine and simplify symbolic expressions including dot products and cross products, together with the gradient, divergence, curl, and Laplacian operators. The distribution of these operators over sums or products is under user control, as are various other expansions, including expansion into components in any specific orthogonal coordinate system. There is also a capability for deriving the scalar or vector potential of a vector field. Examples include derivation of the partial differential equations describing fluid flow and magnetohydrodynamics, for 12 different classic orthogonal curvilinear coordinate systems.

15. The vectorized pinball contact impact routine

International Nuclear Information System (INIS)

Belytschko, T.B.; Neal, M.O.

1989-01-01

When simulating the impact-penetration of two bodies with explicit finite element methods, some type of interaction or contact algorithm must be included. These algorithms, often called slideline algorithms, must enforce the constraint that the two bodies cannot occupy the same space at the same time. Lagrange multiplier, penalty, and projection techniques have all been proposed to enforce this added constraint. For problems which include large relative motions between the two bodies and erosion of elements, it becomes difficult and time consuming to keep track of which elements of the bodies should be involved in the impact calculations. This computational expense is magnified by the fact that these slideline algorithms have many branches which are not amenable to vectorization. In dynamic finite element simulations with explicit time integration, many of the element and nodal calculations can be vectorized and the slideline calculations can require a considerable percentage of the total computation time. The thrust of the pinball algorithm discussed in this paper is to allow vectorization of as much of the slideline calculations as possible. This is accomplished by greatly simplifying both the search for the elements involved in the impact and in the enforcement of impenetrability with the use of spheres, or pinballs, for each element in the slideline calculations. In this way, the search requires a simple check on the distances between elements to determine if contact has been made. Once the contacting pairs of elements have been determined with a single global search of the two slidelines, the impenetrability condition is enforced with the use of a penalty type formulation which can be completely vectorized
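The pinball idea, one bounding sphere per element with contact declared when spheres overlap, vectorizes naturally: the pair search becomes a single branch-free distance computation over all candidate pairs. A NumPy sketch with illustrative data (the original implementation targeted Cray/Cyber vector hardware, not NumPy):

```python
import numpy as np

def pinball_contacts(centers_a, radii_a, centers_b, radii_b):
    """All (i, j) index pairs whose pinballs overlap.

    The search is a single vectorized comparison of squared centre
    distances against squared summed radii, via broadcasting; no
    per-pair branching is needed."""
    diff = centers_a[:, None, :] - centers_b[None, :, :]
    dist2 = (diff ** 2).sum(axis=-1)
    reach = radii_a[:, None] + radii_b[None, :]
    return np.argwhere(dist2 < reach ** 2)

# Two small "slidelines" of element pinballs (2-D for brevity).
ca = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
ra = np.array([0.6, 0.6, 0.6])
cb = np.array([[0.0, 1.0], [4.0, 3.0]])
rb = np.array([0.6, 0.6])
pairs = pinball_contacts(ca, ra, cb, rb)
```

The contacting pairs found this way would then feed the penalty-force step; an all-pairs search is O(n²), which is exactly the trade the pinball algorithm makes to keep the inner loops vectorizable.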

16. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

Science.gov (United States)

2013-01-01

The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

17. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

Directory of Open Access Journals (Sweden)

Kian Sheng Lim

2013-01-01

Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
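The improvement hinges on maintaining the set of nondominated solutions to guide each swarm. A sketch of the nondominated filter itself, assuming minimization of all objectives (a generic Pareto filter, not the paper's full VEPSO update):

```python
import numpy as np

def nondominated(points):
    """Indices of nondominated points under minimization.

    A point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Five candidate solutions with two objectives each.
objs = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 2.0], [5.0, 5.0]]
front = nondominated(objs)
```

In the improved algorithm, a particle's social attractor would be drawn from this front rather than from the single best solution of the other swarm, which is what spreads the swarms along the whole Pareto front.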

18. Algorithmic mathematics

CERN Document Server

Hougardy, Stefan

2016-01-01

Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

19. A quick survey of text categorization algorithms

Directory of Open Access Journals (Sweden)

Dan MUNTEANU

2007-12-01

Full Text Available This paper contains an overview of basic formulations and approaches to text classification, and surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio’s algorithm, k Nearest Neighbor (kNN), and Support Vector Machines (SVM).
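
Of the surveyed methods, Rocchio’s algorithm is the simplest to sketch: each class is represented by the centroid of its training documents, and a new document receives the class of the highest-scoring centroid. A toy illustration with raw term frequencies (a real implementation would typically use tf-idf weights):

```python
from collections import Counter

def rocchio_train(docs, labels):
    """Rocchio: one centroid (mean term-frequency vector) per class."""
    cents, counts = {}, Counter(labels)
    for doc, lab in zip(docs, labels):
        cents.setdefault(lab, Counter())
        cents[lab].update(Counter(doc.split()))
    return {lab: {t: n / counts[lab] for t, n in c.items()}
            for lab, c in cents.items()}

def rocchio_classify(cents, doc):
    """Assign the class whose centroid has the largest dot product with the doc."""
    tf = Counter(doc.split())
    return max(cents, key=lambda lab: sum(w * tf.get(t, 0)
                                          for t, w in cents[lab].items()))

docs = ["cheap pills buy now", "buy cheap watches now",
        "meeting agenda attached", "project meeting tomorrow"]
labels = ["spam", "spam", "ham", "ham"]
cents = rocchio_train(docs, labels)
```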

20. Lyapunov Function Synthesis - Algorithm and Software

DEFF Research Database (Denmark)

Leth, Tobias; Sloth, Christoffer; Wisniewski, Rafal

2016-01-01

In this paper we introduce an algorithm for the synthesis of polynomial Lyapunov functions for polynomial vector fields. The Lyapunov function is a continuous piecewise polynomial defined on a collection of simplices. The algorithm is elaborated and crucial features are ex...

1. Total algorithms

NARCIS (Netherlands)

Tel, G.

We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

2. Sparse Vector Distributions and Recovery from Compressed Sensing

DEFF Research Database (Denmark)

Sturm, Bob L.

It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.
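
The dependence the paper studies can be reproduced in miniature: draw k-sparse vectors whose nonzeros follow different distributions, sense them with a random Gaussian matrix, and run one recovery algorithm on each. A sketch using orthogonal matching pursuit as the recovery algorithm (the paper tests fifteen algorithms and seven distributions; the three distributions and all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_vector(n, k, dist):
    """k-sparse length-n vector with nonzeros drawn from the named distribution."""
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    if dist == "gaussian":
        x[support] = rng.standard_normal(k)
    elif dist == "rademacher":
        x[support] = rng.choice([-1.0, 1.0], size=k)
    else:  # uniform
        x[support] = rng.uniform(-1.0, 1.0, size=k)
    return x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the best-correlated atom,
    then refit the coefficients on the chosen support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

m, n, k = 40, 80, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
errors = {}
for dist in ("gaussian", "rademacher", "uniform"):
    x = sparse_vector(n, k, dist)
    errors[dist] = float(np.linalg.norm(x - omp(A, A @ x, k)))
```

With far more measurements than nonzeros, all three distributions are recovered here; the paper's point is that the margins between algorithms shift as the distribution changes.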

3. Improved autonomous star identification algorithm

International Nuclear Information System (INIS)

Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

2015-01-01

The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
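
The rotation invariance claimed above comes from building the feature vector out of plane distances, which a rotation of the stellar image cannot change. A toy sketch of that invariance (the real algorithm additionally orders neighbors and applies the LPT; here only the distance-based feature is illustrated, up to floating-point error):

```python
import math

def rotate(p, theta):
    """Rotate a 2-D point about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def feature(nav, neighbors):
    """Sorted logarithms of the plane distances from the navigation star to
    its neighbor stars; unchanged (up to fp error) under image rotation."""
    return sorted(math.log(math.hypot(n[0] - nav[0], n[1] - nav[1]))
                  for n in neighbors)

nav = (0.0, 0.0)
stars = [(1.0, 2.0), (3.0, -1.0), (-2.0, 0.5)]
f0 = feature(nav, stars)
f1 = feature(rotate(nav, 0.7), [rotate(s, 0.7) for s in stars])
# f0 and f1 agree to floating-point precision
```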

4. Progressive Classification Using Support Vector Machines

Science.gov (United States)

Wagstaff, Kiri; Kocurek, Michael

2009-01-01

An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
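
The two-pass scheme can be sketched with stand-in classifiers; in the real algorithm both rules are trained SVMs and the confidence index comes from the fast SVM's decision values, whereas the rules and margin below are toy choices:

```python
def fast_classify(x):
    """Cheap approximate rule: sign of the first feature only."""
    return 1 if x[0] >= 0 else -1

def slow_classify(x):
    """Accurate (stand-in 'exact') rule using both features."""
    return 1 if x[0] + 0.5 * x[1] >= 0 else -1

def confidence(x):
    """Margin-like score: small means the fast label is likely wrong."""
    return abs(x[0])

def progressive(points, budget):
    """Label everything with the fast rule, then re-run the slow rule on
    the `budget` least-confident points first (progressive refinement)."""
    labels = [fast_classify(p) for p in points]
    order = sorted(range(len(points)), key=lambda i: confidence(points[i]))
    for i in order[:budget]:
        labels[i] = slow_classify(points[i])
    return labels

pts = [(0.1, -1.0), (2.0, 0.0), (-0.2, 1.0), (-3.0, 0.0)]
coarse = progressive(pts, budget=0)   # fast pass only
refined = progressive(pts, budget=2)  # two least-confident points reclassified
```

Raising the budget moves the answer monotonically toward the slow classifier's output, which is exactly the speed/accuracy dial the abstract describes.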

5. Preliminary study on helical CT algorithms for patient motion estimation and compensation

International Nuclear Information System (INIS)

Wang, G.; Vannier, M.W.

1995-01-01

Helical computed tomography (helical/spiral CT) has replaced conventional CT in many clinical applications. In current helical CT, a patient is assumed to be rigid and motionless during scanning, and planar projection sets are produced from raw data via longitudinal interpolation. However, rigid patient motion is a problem in some cases (such as in skull base and temporal bone imaging). Motion artifacts thus generated in reconstructed images can prevent accurate diagnosis. Modeling a uniform translational movement, the authors address how patient motion is ascertained and how it may be compensated. First, the mismatch between adjacent fan-beam projections of the same orientation is determined via classical correlation, which is approximately proportional to the patient displacement projected onto an axis orthogonal to the central ray of the involved fan-beam. Then, the patient motion vector (the patient displacement per gantry rotation) is estimated from its projections using a least-square-root method. To suppress motion artifacts, adaptive interpolation algorithms are developed that synthesize full-scan and half-scan planar projection data sets, respectively. In the adaptive scheme, the interpolation is performed along inclined paths dependent upon the patient motion vector. The simulation results show that the patient motion vector can be accurately and reliably estimated using the correlation and least-square-root algorithm, patient motion artifacts can be effectively suppressed via adaptive interpolation, and adaptive half-scan interpolation is advantageous compared with its full-scan counterpart in terms of high contrast image resolution.
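
The two estimation steps, matching same-orientation projections by correlation and then solving for the motion vector from its directional projections by least squares, can be sketched with toy one-dimensional signals (the paper works with fan-beam projections; everything below is illustrative):

```python
def best_shift(p, q, max_shift):
    """Shift of q relative to p maximising their correlation (classical matching)."""
    def corr(shift):
        return sum(p[i] * q[i + shift] for i in range(len(p))
                   if 0 <= i + shift < len(q))
    return max(range(-max_shift, max_shift + 1), key=corr)

def motion_from_projections(dirs, meas):
    """Least-squares 2-D displacement d from projections meas[k] = dirs[k] . d
    (normal equations of the overdetermined linear system)."""
    sxx = sum(dx * dx for dx, _ in dirs)
    sxy = sum(dx * dy for dx, dy in dirs)
    syy = sum(dy * dy for _, dy in dirs)
    bx = sum(m * dx for (dx, _), m in zip(dirs, meas))
    by = sum(m * dy for (_, dy), m in zip(dirs, meas))
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# step 1: correlation recovers the per-view displacement of a shifted profile
profile = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
moved = profile[2:] + [0, 0]              # object shifted by -2 samples in this view
shift = best_shift(profile, moved, 4)

# step 2: least squares recovers d = (2, -1) from three directional projections
dirs = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]
meas = [2.0, -1.0, 0.4]                   # 0.6*2 + 0.8*(-1) = 0.4
dx, dy = motion_from_projections(dirs, meas)
```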

6. Vector-Vector Scattering on the Lattice

Science.gov (United States)

Romero-López, Fernando; Urbach, Carsten; Rusetsky, Akaki

2018-03-01

In this work we present an extension of the Lüscher formalism to include the interaction of particles with spin, focusing on the scattering of two vector particles. The derived formalism will be applied to scalar QED in the Higgs phase, where the U(1) gauge boson acquires mass.

7. Brane vector phenomenology

International Nuclear Information System (INIS)

Clark, T.E.; Love, S.T.; Nitta, Muneto; Veldhuis, T. ter; Xiong, C.

2009-01-01

Local oscillations of the brane world are manifested as massive vector fields. Their coupling to the Standard Model can be obtained using the method of nonlinear realizations of the spontaneously broken higher-dimensional space-time symmetries, and to an extent, are model independent. Phenomenological limits on these vector field parameters are obtained using LEP collider data and dark matter constraints

8. ALGORITHMS FOR TETRAHEDRAL NETWORK (TEN) GENERATION

Institute of Scientific and Technical Information of China (English)

2000-01-01

The Tetrahedral Network (TEN) is a powerful 3-D vector structure in GIS, which has many advantages such as a simple structure, fast topological relation processing and rapid visualization. The difficulty in applying TEN is automatically creating the data structure. Although a raster algorithm has been introduced by some authors, problems of accuracy, memory requirement, speed and integrity still exist. In this paper, the raster algorithm is completed and a vector algorithm is presented after a 3-D data model and the structure of TEN have been introduced. Finally, experiments, conclusions and future work are discussed.

9. Efficient Multiplicative Updates for Support Vector Machines

DEFF Research Database (Denmark)

Potluru, Vamsi K.; Plis, Sergie N; Mørup, Morten

2009-01-01

The dual formulation of the support vector machine (SVM) objective function is an instance of a nonnegative quadratic programming problem. We reformulate the SVM objective function as a matrix factorization problem which establishes a connection with the regularized nonnegative matrix factorization (NMF) problem. This allows us to derive a novel multiplicative algorithm for solving hard and soft margin SVM. The algorithm follows as a natural extension of the updates for NMF and semi-NMF. No additional parameter setting, such as choosing a learning rate, is required. Exploiting the connection between the SVM and NMF formulations, we show how NMF algorithms can be applied to the SVM problem. The multiplicative updates that we derive for the SVM problem also represent novel updates for semi-NMF. Further, this unified view yields algorithmic insights in both directions: we demonstrate that the Kernel Adatron...
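
The kind of multiplicative update the abstract refers to can be illustrated on a generic nonnegative quadratic program, min over v >= 0 of (1/2) v'Av + b'v, which is the form the SVM dual takes. The update below is the standard multiplicative rule for such problems, obtained by splitting A into its positive and negative parts; the paper's exact updates may differ in detail:

```python
import numpy as np

def nqp_multiplicative(A, b, v0, iters=2000):
    """Multiplicative updates for min_{v>=0} 0.5 v^T A v + b^T v.
    Each factor is nonnegative, so iterates stay in the feasible set and
    no learning rate is needed (the property highlighted in the abstract)."""
    Ap, Am = np.maximum(A, 0.0), np.maximum(-A, 0.0)   # A = Ap - Am
    v = v0.astype(float).copy()
    for _ in range(iters):
        a = Ap @ v
        c = Am @ v
        v *= (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a + 1e-12)
    return v

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # positive definite
b = np.array([-1.0, -1.0])
v = nqp_multiplicative(A, b, np.array([0.5, 0.5]))
# the unconstrained minimiser A v = -b, i.e. v = (1, 1), is nonnegative,
# so the multiplicative iteration converges to it
```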

10. Complex Polynomial Vector Fields

DEFF Research Database (Denmark)

The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) or meromorphic (allowing poles as singularities) functions. There already exists a well-developed theory for iterative holomorphic dynamical systems, and successful relations found between iteration theory and flows of vector fields have been one of the main motivations for the recent interest in holomorphic vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition...

11. Complex Polynomial Vector Fields

DEFF Research Database (Denmark)

Dias, Kealey

The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition of parameter spaces into structurally stable domains, and a description of the bifurcations. For this reason, the talk will focus on these questions for complex polynomial vector fields.

12. Prioritizing equipment for replacement.

Science.gov (United States)

Capuano, Mike

2010-01-01

It is suggested that clinical engineers take the lead in formulating evaluation processes to recommend equipment replacement. Their skill, knowledge, and experience, combined with access to equipment databases, make them a logical choice. Based on ideas from Fennigkoh's scheme, elements such as age, vendor support, accumulated maintenance cost, and function/risk were used. Other more subjective criteria such as cost benefits and efficacy of newer technology were not used. The element of downtime was also omitted because that data element was not available. The resulting Periop Master Equipment List and its rationale were presented to the Perioperative Services Program Council. The council deemed the criteria to be robust and provided overwhelming acceptance of the list. It was quickly put to use to estimate required capital funding, justify items already thought to need replacement, and identify high-priority ranked items for replacement. Incorporating prioritization criteria into an existing equipment database would be ideal; some commercially available systems do have the basic elements of this. Maintaining replacement data can be labor-intensive regardless of the method used, and there is usually little time to perform the tasks necessary for prioritizing equipment. However, where appropriate, a clinical engineering department might be able to conduct such an exercise, as shown in the following case study.
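
A scheme of this kind reduces to a weighted score per device. The weights, ranges and example numbers below are purely illustrative, not the article's actual criteria values:

```python
def replacement_priority(age_years, expected_life, vendor_support,
                         maint_cost, purchase_cost, function_risk):
    """Hypothetical weighted score in the spirit of Fennigkoh-style schemes:
    higher means a stronger candidate for replacement. Weights are illustrative."""
    age_score = min(age_years / expected_life, 2.0)     # capped at twice the life
    support_score = 0.0 if vendor_support else 1.0      # vendor support lapsed?
    cost_score = min(maint_cost / purchase_cost, 1.0)   # accumulated maintenance
    return round(4 * age_score + 3 * support_score
                 + 2 * cost_score + function_risk, 2)

# hypothetical devices: an aged, unsupported pump vs. a nearly new monitor
old_pump = replacement_priority(12, 8, False, 6000, 10000, 3)
new_monitor = replacement_priority(2, 10, True, 200, 20000, 2)
```

Ranking the inventory by this score yields the prioritized replacement list described above.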

13. Thyroid hormone replacement therapy

NARCIS (Netherlands)

Wiersinga, W. M.

2001-01-01

Thyroid hormone replacement has been used for more than 100 years in the treatment of hypothyroidism, and there is no doubt about its overall efficacy. Desiccated thyroid contains both thyroxine (T(4)) and triiodothyronine (T(3)); serum T(3) frequently rises to supranormal values in the absorption

14. Can photovoltaic replace nuclear?

International Nuclear Information System (INIS)

2017-01-01

As the French law on energy transition for green growth stipulates that one third of nuclear energy production is to be replaced by renewable energies (wind and solar) by 2025, and while the ADEME proposes a 100 per cent renewable scenario for 2050, this paper proposes a brief analysis of the replacement of nuclear energy by solar photovoltaic energy. It presents and discusses some characteristics of photovoltaic production: production level during a typical day for each month (noticeably lower production in December), evolution of monthly production during a year, and evolution of the ratio between nuclear and photovoltaic production. A cost assessment is then proposed for energy storage and for energy production, and a minimum cost of replacing nuclear by photovoltaic is assessed. The seasonal effect is outlined, as well as the latitude effect. Finally, the authors outline the huge cost of such a replacement, and consider that public support for new photovoltaic installations without at least a daily storage means should be cancelled

15. Fluorescent Lamp Replacement Study

Science.gov (United States)

2017-07-01

... recycling, and can be disposed safely in a landfill. (2) LEDs offer reduced maintenance costs and fewer bulb replacements, significantly reducing... recycling. Several fixtures, ballasts and energy efficient fluorescent bulbs that were determined to be in pristine condition were returned to ATC

16. Replacing Recipe Realism

OpenAIRE

Saatsi, J

2017-01-01

Many realist writings exemplify the spirit of ‘recipe realism’. Here I characterise recipe realism, challenge it, and propose replacing it with ‘exemplar realism’. This alternative understanding of realism is more piecemeal, robust, and better in tune with scientists’ own attitude towards their best theories, and thus to be preferred.

17. Integrating Transgenic Vector Manipulation with Clinical Interventions to Manage Vector-Borne Diseases.

Directory of Open Access Journals (Sweden)

Kenichi W Okamoto

2016-03-01

Full Text Available Many vector-borne diseases lack effective vaccines and medications, and the limitations of traditional vector control have inspired novel approaches based on using genetic engineering to manipulate vector populations and thereby reduce transmission. Yet both the short- and long-term epidemiological effects of these transgenic strategies are highly uncertain. If neither vaccines, medications, nor transgenic strategies can by themselves suffice for managing vector-borne diseases, integrating these approaches becomes key. Here we develop a framework to evaluate how clinical interventions (i.e., vaccination and medication) can be integrated with transgenic vector manipulation strategies to prevent disease invasion and reduce disease incidence. We show that the ability of clinical interventions to accelerate disease suppression can depend on the nature of the transgenic manipulation deployed (e.g., whether vector population reduction or replacement is attempted). We find that making a specific, individual strategy highly effective may not be necessary for attaining public-health objectives, provided suitable combinations can be adopted. However, we show how combining only partially effective antimicrobial drugs or vaccination with transgenic vector manipulations that merely temporarily lower vector competence can amplify disease resurgence following transient suppression. Thus, transgenic vector manipulation that cannot be sustained can have adverse consequences, consequences which ineffective clinical interventions can at best only mitigate, and at worst temporarily exacerbate. This result, which arises from differences between the time scale on which the interventions affect disease dynamics and the time scale of host population dynamics, highlights the importance of accounting for the potential delay in the effects of deploying public health strategies on long-term disease incidence. We find that for systems at the disease-endemic equilibrium, even

18. THE REPLACEMENT-RENEWAL OF INDUSTRIAL EQUIPMENTS. THE MAPI FORMULAS

Directory of Open Access Journals (Sweden)

Meo Colombo Carlotta

2010-07-01

Full Text Available Since production has been found to be an economical means of satisfying human wants, the process requires a complex industrial organization together with a large investment in equipment, plants and productive systems. These productive systems are employed to alter the physical environment and create consumer goods. As a result, they are consumed or become obsolete, inadequate, or otherwise candidates for replacement. When replacement is being considered, two assets must be evaluated: the present asset, the defender, and its potential replacement, the challenger. Since the success of an industrial organization depends upon profit, replacement should generally occur if an economic advantage will result. Whatever the reason leading to the consideration of replacement, the analysis and decisions must be based upon estimates of what will occur in the future. In this paper we present the MAPI algorithm as a procedure for evaluating investments and analyzing replacement opportunities.

19. Density Based Support Vector Machines for Classification

OpenAIRE

Zahra Nazari; Dongshik Kang

2015-01-01

Support Vector Machines (SVM) is the most successful algorithm for classification problems. SVM learns the decision boundary from two classes (for binary classification) of training points. However, sometimes there are less meaningful samples among the training points, which are corrupted by noise or misplaced on the wrong side; these are called outliers. Outliers affect the margin and classification performance, and the machine would do better to discard them. SVM as a popular and widely used cl...

20. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

OpenAIRE

Zuev, Yu. A.

2003-01-01

The class of linear decision rules is studied. A new algorithm for weight correction, called the "accelerated perceptron", is proposed. In contrast to the classical Rosenblatt perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing decision reliability by means of weighted voting. I...
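
The distinguishing feature named above, updating the weight vector at every step rather than only on misclassification, can be sketched as follows. This is one plausible reading of the rule; the paper's exact correction formula is not given in the abstract:

```python
def accelerated_perceptron(samples, labels, epochs=20, lr=0.1):
    """Sketch of a per-step update rule: unlike the classical perceptron,
    the weight vector moves toward every presented example, not only the
    misclassified ones (an illustrative reading of the 'accelerated' variant)."""
    w = [0.0] * (len(samples[0]) + 1)            # last entry is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            xe = list(x) + [1.0]                 # augment with bias input
            for i in range(len(w)):
                w[i] += lr * y * xe[i]           # update at EVERY step
    return w

def predict(w, x):
    xe = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xe)) >= 0 else -1

X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w = accelerated_perceptron(X, y)
```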

1. Fractal vector optical fields.

Science.gov (United States)

Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

2016-07-15

We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering the optical fields and their focal fields. We propose, design, and create a new family of optical fields-fractal vector optical fields, which build a bridge between the fractal and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and the hierarchy of the fractal has the "weeding" role. The fractal can be used to engineer the focal field.

2. MPEG-2 Compressed-Domain Algorithms for Video Analysis

Directory of Open Access Journals (Sweden)

Hesseler Wolfgang

2006-01-01

Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
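
The first algorithm's idea, reading camera motion straight off the MPEG-2 motion vector field, is often approximated by a robust average of the macroblock vectors. A sketch using the componentwise median (the paper's actual estimator may be more elaborate):

```python
def global_motion(motion_vectors):
    """Componentwise median of macroblock motion vectors as a robust estimate
    of camera (global) motion; vectors from moving objects act as outliers
    and are suppressed by the median."""
    def median(vals):
        s = sorted(vals)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    return (median([mv[0] for mv in motion_vectors]),
            median([mv[1] for mv in motion_vectors]))

# mostly a uniform pan of (+3, 0), with one foreground object moving differently
field = [(3, 0)] * 7 + [(-8, 5), (-7, 6)]
pan = global_motion(field)
```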

3. Glaucoma after corneal replacement.

Science.gov (United States)

Baltaziak, Monika; Chew, Hall F; Podbielski, Dominik W; Ahmed, Iqbal Ike K

Glaucoma is a well-known complication after corneal transplantation surgery. Traditional corneal transplantation surgery, specifically penetrating keratoplasty, has been slowly replaced by the advent of new corneal transplantation procedures: primarily lamellar keratoplasties. There has also been an emergence of keratoprosthesis implants for eyes that are high risk of failure with penetrating keratoplasty. Consequently, there are different rates of glaucoma, pathogenesis, and potential treatment in the form of medical, laser, or surgical therapy. Copyright © 2017 Elsevier Inc. All rights reserved.

4. The replacement research reactor

International Nuclear Information System (INIS)

Cameron, R.; Horlock, K.

2001-01-01

The contract for the design, construction and commissioning of the Replacement Research Reactor was signed in July 2000. This was followed by the completion of the detailed design and an application for a construction licence was made in May 2001. This paper will describe the main elements of the design and their relation to the proposed applications of the reactor. The future stages in the project leading to full operation are also described

5. Apparatus for fuel replacement

International Nuclear Information System (INIS)

1974-01-01

6. Noncausal Bayesian Vector Autoregression

DEFF Research Database (Denmark)

Lanne, Markku; Luoto, Jani

We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

7. Understanding Vector Fields.

Science.gov (United States)

Curjel, C. R.

1990-01-01

Presented are activities that help students understand the idea of a vector field. Included are definitions, flow lines, tangential and normal components along curves, flux and work, field conservation, and differential equations. (KR)

8. GAP Land Cover - Vector

Data.gov (United States)

Minnesota Department of Natural Resources — This vector dataset is a detailed (1-acre minimum), hierarchically organized vegetation cover map produced by computer classification of combined two-season pairs of...

9. Sesquilinear uniform vector integral

theory, together with his integral, dominate contemporary mathematics. ... directions belonging to Bartle and Dinculeanu (see [1], [6], [7] and [2]). ... in this manner, namely he integrated vector functions with respect to measures of bounded.

10. Tagged Vector Contour (TVC)

Data.gov (United States)

Kansas Data Access and Support Center — The Kansas Tagged Vector Contour (TVC) dataset consists of digitized contours from the 7.5 minute topographic quadrangle maps. Coverage for the state is incomplete....

11. Vector hysteresis models

Czech Academy of Sciences Publication Activity Database

Krejčí, Pavel

1991-01-01

Roč. 2, - (1991), s. 281-292 ISSN 0956-7925 Keywords : vector hysteresis operator * hysteresis potential * differential inequality Subject RIV: BA - General Mathematics http://www.math.cas.cz/~krejci/b15p.pdf

12. Support vector machines applications

CERN Document Server

Guo, Guodong

2014-01-01

Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

13. Exotic composite vector boson

International Nuclear Information System (INIS)

Akama, K.; Hattori, T.; Yasue, M.

1991-01-01

An exotic composite vector boson V is introduced in two dynamical models of composite quarks, leptons, W, and Z. One is based on four-Fermi interactions, in which composite vector bosons are regarded as fermion-antifermion bound states, and the other is based on the confining SU(2)_L gauge model, in which they are given by scalar-antiscalar bound states. Both approaches describe the same effective interactions for the sector of composite quarks, leptons, W, Z, γ, and V

14. Vector borne diseases

OpenAIRE

Melillo Fenech, Tanya

2010-01-01

A vector-borne disease is one in which the pathogenic microorganism is transmitted from an infected individual to another individual by an arthropod or other agent. The transmission depends upon the attributes and requirements of at least three different living organisms: the pathogenic agent, which is either a virus, protozoan, bacterium or helminth (worm); the vector, which is commonly an arthropod such as a tick or mosquito; and the human host.

15. Developing operation algorithms for vision subsystems in autonomous mobile robots

Science.gov (United States)

Shikhman, M. V.; Shidlovskiy, S. V.

2018-05-01

The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.

16. Attenuated Vector Tomography -- An Approach to Image Flow Vector Fields with Doppler Ultrasonic Imaging

International Nuclear Information System (INIS)

Huang, Qiu; Peng, Qiyu; Huang, Bin; Cheryauka, Arvi; Gullberg, Grant T.

2008-01-01

The measurement of flow obtained using continuous wave Doppler ultrasound is formulated as a directional projection of a flow vector field. When a continuous ultrasound wave bounces against a flowing particle, a signal is backscattered. This signal obtains a Doppler frequency shift proportional to the speed of the particle along the ultrasound beam. This occurs for each particle along the beam, giving rise to a Doppler velocity spectrum. The first moment of the spectrum provides the directional projection of the flow along the ultrasound beam. Signals reflected from points further away from the detector will have lower amplitude than signals reflected from points closer to the detector. The effect is very much akin to that modeled by the attenuated Radon transform in emission computed tomography. A least-squares method was adopted to reconstruct a 2D vector field from directional projection measurements. Attenuated projections of only the longitudinal projections of the vector field were simulated. The components of the vector field were reconstructed using the gradient algorithm to minimize a least-squares criterion. This result was compared with the reconstruction of longitudinal projections of the vector field without attenuation. When the attenuation was known, the algorithm was able to accurately reconstruct both components of the full vector field from only one set of directional projection measurements. A better reconstruction was obtained with attenuation than without attenuation, implying that attenuation provides important information for the reconstruction of flow vector fields. This confirms previous work where we showed that knowledge of the attenuation distribution helps in the reconstruction of MRI diffusion tensor fields from fewer than the required measurements. In the application of ultrasound, the attenuation distribution is obtained with pulse wave transmission computed tomography and flow information is obtained with continuous wave Doppler.
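
The reconstruction step described above minimizes a least-squares criterion with a gradient algorithm. A single-point toy version, recovering a flow vector from attenuated directional projections with known attenuation weights (the real problem couples many points through the attenuated projection geometry; all values below are illustrative):

```python
def reconstruct_flow(dirs, weights, meas, steps=500, lr=0.1):
    """Gradient descent on sum_k (weights[k] * (dirs[k] . v) - meas[k])^2:
    a toy single-pixel analogue of reconstructing a flow vector from
    attenuated directional (Doppler) projections, attenuation known."""
    vx = vy = 0.0
    for _ in range(steps):
        gx = gy = 0.0
        for (dx, dy), w, m in zip(dirs, weights, meas):
            r = w * (dx * vx + dy * vy) - m    # residual of one projection
            gx += 2 * r * w * dx
            gy += 2 * r * w * dy
        vx -= lr * gx
        vy -= lr * gy
    return vx, vy

true_v = (1.5, -0.5)
dirs = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.6)]
weights = [0.9, 0.8, 0.7]                      # known attenuation per view
meas = [w * (dx * true_v[0] + dy * true_v[1])
        for (dx, dy), w in zip(dirs, weights)]
v_hat = reconstruct_flow(dirs, weights, meas)
```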

17. Vector financial rogue waves

International Nuclear Information System (INIS)

Yan, Zhenya

2011-01-01

The coupled nonlinear volatility and option pricing model presented recently by Ivancevic is investigated. It generates a leverage effect, i.e., stock volatility is (negatively) correlated to stock returns, and can be regarded as a coupled nonlinear wave alternative to the Black–Scholes option pricing model. In this Letter, we analytically propose vector financial rogue waves of the coupled nonlinear volatility and option pricing model without an embedded w-learning. Moreover, we exhibit their dynamical behaviors for different chosen parameters. The vector financial rogue wave (rogon) solutions may be used to describe the possible physical mechanisms of rogue wave phenomena, including extreme events in financial markets, and to stimulate further research and potential applications of vector rogue waves in the financial markets and other related fields.

18. Improved stability and performance from sigma-delta modulators using 1-bit vector quantization

DEFF Research Database (Denmark)

Risbo, Lars

1993-01-01

A novel class of sigma-delta modulators is presented. The usual scalar 1-b quantizer in a sigma-delta modulator is replaced by a 1-b vector quantizer with an N-dimensional input state-vector from the linear feedback filter. Generally, the vector quantizer changes the nonlinear dynamics...... of the modulator, and a proper choice of vector quantizer can improve both system stability and coding performance. It is shown how to construct the vector quantizer in order to limit the excursions in state-space. The proposed method is demonstrated graphically for a simple second-order modulator...
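The baseline that the vector quantizer replaces can be sketched as a first-order modulator with the usual scalar 1-b quantizer (a minimal illustration, not Risbo's modulator; names are invented):

```python
# First-order sigma-delta sketch: a scalar 1-bit quantizer inside an
# integrating feedback loop. The quantized bit stream tracks the input
# in its running average.
def sigma_delta(samples):
    state, bits = 0.0, []
    for x in samples:
        y = 1.0 if state >= 0 else -1.0   # scalar 1-bit quantizer
        bits.append(y)
        state += x - y                     # integrate the quantization error
    return bits

bits = sigma_delta([0.5] * 1000)
print(abs(sum(bits) / len(bits) - 0.5) < 0.01)  # True
```

For a DC input the integrator state stays bounded, so the mean of the output bits converges to the input value; replacing the scalar decision `state >= 0` with a vector quantizer over the full feedback state is the modification the paper studies.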

19. Optimal Operation of Wind Turbines Based on Support Vector Machine and Differential Evolution Algorithm

Institute of Scientific and Technical Information of China (English)

彭春华; 相龙阳; 刘刚; 易洪京

2012-01-01

Output control of wind turbines is a key issue in the operation of wind farms. Since the relations among wind turbine output, wind speed and blade pitch angle are highly complicated, it is hard to establish a versatile and accurate mathematical model. For this reason, a new mode to optimize wind turbine output is proposed: first, a nonlinear fitting model between wind turbine output and operational parameters is built with the support vector machine algorithm; then, based on this model and the variation of wind speed, the blade pitch angle is optimized quickly and dynamically with the efficient differential evolution algorithm, so as to maximize wind turbine output. Using the proposed method, the dynamic relation between wind speed and the optimal blade pitch angle can be accurately established. The method was applied in simulations using actual operating data from the wind turbines of the Changling wind farm at Poyang Lake. The results show that the output of the wind turbines can be effectively increased by optimizing the blade pitch angle, verifying the feasibility and superiority of the proposed method and providing scientific guidance for the optimal operation of wind turbines.

20. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

Directory of Open Access Journals (Sweden)

Zhen Yao

2004-12-01

Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

1. Toleration, Synthesis or Replacement?

DEFF Research Database (Denmark)

2016-01-01

...in order to answer, is not yet another partisan suggestion, but rather an attempt at making intelligible both the oppositions and the possibilities of synthesis between normative and empirical approaches to law. Based on our assessment and rational reconstruction of current arguments and positions, we...... therefore outline a taxonomy consisting of the following three basic ideal-types in terms of the epistemological understanding of the interface of law and empirical studies: toleration, synthesis and replacement. This tripartite model proves useful with a view to teasing out and better articulating...

International Nuclear Information System (INIS)

Forbes, C.A.

1990-01-01

Ageing reactor simulators present some tough decisions for utility managers. Although most utilities have chosen the cheaper, upgrading solution as the best compromise between costs and outage length, some US utilities have found that for them, replacement represents the best option. Simulators may be less than ten years old, but they have limited instructor systems, older low fidelity models that cannot reproduce important training scenarios, and out of date, difficult to maintain computers that do not permit much expansion of the models anyway. Perhaps worse than this is the possibility that the simulator may no longer be a faithful reproduction of the referenced plant, or have poor (or non-existent) documentation. (author)

3. OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET

Directory of Open Access Journals (Sweden)

P. Kuppusamy

2014-09-01

Full Text Available A cooperative caching approach improves data accessibility and reduces query latency in Mobile Ad hoc Networks (MANETs). Maintaining the cache is a challenging issue in large MANETs due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offered low query latency and greater data accessibility in sparse MANETs. This paper proposes a Memetic Algorithm (MA) to locate the better replaceable data, based on neighbours' interest and the fitness value of cached data, to store newly arrived data. This work also elects an ideal cluster head (CH) using the metaheuristic Ant Colony Optimization search algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and their speed increase.
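As context for the replacement policies the paper compares against, a minimal LRU cache can be sketched as follows; this is the textbook baseline, not the proposed memetic algorithm:

```python
from collections import OrderedDict

# Minimal LRU cache sketch: the least recently used entry is evicted
# when capacity is exceeded.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # capacity exceeded: evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

The memetic approach differs precisely in the eviction decision: instead of recency alone, a fitness value computed from neighbours' interest selects the replaceable entry.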

4. Elements of mathematics topological vector spaces

CERN Document Server

Bourbaki, Nicolas

2003-01-01

This is a softcover reprint of the English translation of 1987 of the second edition of Bourbaki's Espaces Vectoriels Topologiques (1981). This second edition is a brand new book and completely supersedes the original version of nearly 30 years ago. But a lot of the material has been rearranged, rewritten, or replaced by a more up-to-date exposition, and a good deal of new material has been incorporated in this book, all reflecting the progress made in the field during the last three decades. Table of Contents. Chapter I: Topological vector spaces over a valued field. Chapter II: Convex sets and locally convex spaces. Chapter III: Spaces of continuous linear mappings. Chapter IV: Duality in topological vector spaces. Chapter V: Hilbert spaces (elementary theory). Finally, there are the usual "historical note", bibliography, index of notation, index of terminology, and a list of some important properties of Banach spaces. (Based on Math Reviews, 1983).

5. Ultrasound Vector Flow Imaging: Part I: Sequential Systems

DEFF Research Database (Denmark)

Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

2016-01-01

The paper gives a review of the most important methods for blood velocity vector flow imaging (VFI) for conventional, sequential data acquisition. This includes multibeam methods, speckle tracking, transverse oscillation, color flow mapping derived vector flow imaging, directional beamforming, and variants of these. The review covers both 2-D and 3-D velocity estimation and gives a historical perspective on the development along with a summary of various vector flow visualization algorithms. The current state-of-the-art is explained along with an overview of clinical studies conducted and methods...

6. Community detection in complex networks using proximate support vector clustering

Science.gov (United States)

Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing

2018-03-01

Community structure, one of the most attention-attracting properties of complex networks, has been a cornerstone in advances of various scientific branches. A number of tools have been involved in recent studies concentrating on community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, owing to which the introduced algorithm surpasses the traditional support vector approach in both accuracy and complexity. Results of extensive experiments undertaken on computer-generated networks and real-world data sets illustrate competent performance in comparison with the other counterparts.

7. Power Plant Replacement Study

Energy Technology Data Exchange (ETDEWEB)

Reed, Gary

2010-09-30

This report represents the final report for the Eastern Illinois University power plant replacement study. It contains all related documentation from consideration of possible solutions to the final recommended option. Included are the economic justifications associated with the chosen solution along with the application for environmental permitting for the selected project for construction. This final report summarizes the results of execution of an EPC (energy performance contract) investment grade audit (IGA) which led to an energy services agreement (ESA). The project includes scope of work to design and install energy conservation measures which are guaranteed by the contractor to be self-funding over its twenty-year contract duration. The cost recovery is derived from systems performance improvements leading to energy savings. The prime focus of this EPC effort is to provide a replacement solution for Eastern Illinois University's aging and failing circa 1925 central steam production plant. Twenty-three ECMs were considered viable whose net impact will provide sufficient savings to successfully support the overall project objectives.

Energy Technology Data Exchange (ETDEWEB)

Nelson, M.J.; Groshart, E.C.

1995-03-01

The Boeing Company has been searching for replacements for cadmium plate. Two alloy plating systems seem close to meeting the needs of a cadmium replacement. The two alloys, zinc-nickel and tin-zinc, come from alloy plating baths; both baths are neutral pH. The alloys meet the requirements for salt fog corrosion resistance, and both excel as a paint base. Currently, tests are being performed on standard fasteners to compare zinc-nickel and tin-zinc on threaded hardware, where cadmium is heavily used. The hydrogen embrittlement propensity of the zinc-nickel bath has been tested, and testing is just beginning for the tin-zinc bath. Another area of interest is the electrical properties of tin-zinc on aluminum, which will also be discussed. The zinc-nickel alloy plating bath is in production in the Boeing Commercial Airplane Group for non-critical low-strength steels. The outlook is promising that these two coatings will help The Boeing Company significantly reduce its dependence on cadmium plating.

9. REPLACEMENT OF FRENCH CARDS

CERN Multimedia

Human Resources Division

2001-01-01

The French Ministry of Foreign Affairs has informed the Organization that it is shortly to replace all diplomatic cards, special cards and employment permits ('attestations de fonctions') now held by members of the personnel and their families. Between 2 July and 31 December 2001, these cards are to be replaced by secure, computerized equivalents. A 'personnel office' stamped photocopy of the old cards may continue to be used until 31 December 2001. For the purposes of the handover, members of the personnel must go personally to the cards office (33/1-015), between 8:30 and 12:30, in order to fill in a 'fiche individuelle' form (in black ink only), which has to be personally signed by themselves and another separately signed by members of their family, taking the following documents for themselves and members of their families already in possession of a French card: a recent identity photograph in 4.5 cm x 3.5 cm format (signed on the back); the French card in their possession; an A4 photocopy of the same Fre...

11. Graphics and visualization principles & algorithms

CERN Document Server

Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

2008-01-01

Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study. -California Bookw

12. Vector Fields on Product Manifolds

OpenAIRE

Kurz, Stefan

2011-01-01

This short report establishes some basic properties of smooth vector fields on product manifolds. The main results are: (i) On a product manifold there always exists a direct sum decomposition into horizontal and vertical vector fields. (ii) Horizontal and vertical vector fields are naturally isomorphic to smooth families of vector fields defined on the factors. Vector fields are regarded as derivations of the algebra of smooth functions.

13. Bunyavirus-Vector Interactions

Directory of Open Access Journals (Sweden)

Kate McElroy Horne

2014-11-01

Full Text Available The Bunyaviridae family is comprised of more than 350 viruses, of which many within the Hantavirus, Orthobunyavirus, Nairovirus, Tospovirus, and Phlebovirus genera are significant human or agricultural pathogens. The viruses within the Orthobunyavirus, Nairovirus, and Phlebovirus genera are transmitted by hematophagous arthropods, such as mosquitoes, midges, flies, and ticks, and their associated arthropods not only serve as vectors but also as virus reservoirs in many cases. This review presents an overview of several important emerging or re-emerging bunyaviruses and describes what is known about bunyavirus-vector interactions based on epidemiological, ultrastructural, and genetic studies of members of this virus family.

14. Sums and Gaussian vectors

CERN Document Server

1995-01-01

Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

15. Duality in vector optimization

CERN Document Server

2009-01-01

This book presents fundamentals and comprehensive results regarding duality for scalar, vector and set-valued optimization problems in a general setting. After a preliminary chapter dedicated to convex analysis and minimality notions of sets with respect to partial orderings induced by convex cones a chapter on scalar conjugate duality follows. Then investigations on vector duality based on scalar conjugacy are made. Weak, strong and converse duality statements are delivered and connections to classical results from the literature are emphasized. One chapter is exclusively consecrated to the s

Science.gov (United States)

Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

2018-01-16

In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
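The round-robin activation of per-thread program counters can be sketched as a toy scheduling simulation (an illustration of the cycle order only; names and counts are invented):

```python
# Toy sketch of a vectorized program counter register: one counter per
# thread, with one thread activated per cycle in round-robin order.
def run_round_robin(num_threads, cycles):
    pcs = [0] * num_threads            # one program counter per thread
    order = []
    for cycle_idx in range(cycles):
        tid = cycle_idx % num_threads  # round-robin thread selection
        pcs[tid] += 1                  # "execute" one instruction for that thread
        order.append(tid)
    return pcs, order

pcs, order = run_round_robin(3, 7)
print(order)  # [0, 1, 2, 0, 1, 2, 0]
print(pcs)    # [3, 2, 2]
```

Each counter advances only on its own turn, which is the one-thread-per-cycle multithreading mode the abstract describes.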

17. Matrix vector analysis

CERN Document Server

Eisenman, Richard L

2005-01-01

This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics.Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structur

18. Free topological vector spaces

OpenAIRE

Gabriyelyan, Saak S.; Morris, Sidney A.

2016-01-01

We define and study the free topological vector space $\mathbb{V}(X)$ over a Tychonoff space $X$. We prove that $\mathbb{V}(X)$ is a $k_\omega$-space if and only if $X$ is a $k_\omega$-space. If $X$ is infinite, then $\mathbb{V}(X)$ contains a closed vector subspace which is topologically isomorphic to $\mathbb{V}(\mathbb{N})$. It is proved that if $X$ is a $k$-space, then $\mathbb{V}(X)$ is locally convex if and only if $X$ is discrete and countable. If $X$ is a metrizable space it is shown ...

19. Scalar-vector bootstrap

Energy Technology Data Exchange (ETDEWEB)

Rejon-Barrera, Fernando [Institute for Theoretical Physics, University of Amsterdam, Science Park 904, Postbus 94485, 1090 GL, Amsterdam (Netherlands); Robbins, Daniel [Department of Physics, Texas A&M University, TAMU 4242, College Station, TX 77843 (United States)

2016-01-22

We work out all of the details required for implementation of the conformal bootstrap program applied to the four-point function of two scalars and two vectors in an abstract conformal field theory in arbitrary dimension. This includes a review of which tensor structures make appearances, a construction of the projectors onto the required mixed symmetry representations, and a computation of the conformal blocks for all possible operators which can be exchanged. These blocks are presented as differential operators acting upon the previously known scalar conformal blocks. Finally, we set up the bootstrap equations which implement crossing symmetry. Special attention is given to the case of conserved vectors, where several simplifications occur.

20. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

Directory of Open Access Journals (Sweden)

Fang Su

2013-01-01

Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator to measure economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller-sized quadratic programming problems instead of one large one, as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the algorithm. The paper compares the mean prediction error of wavelet kernel-based primal twin support vector machine and traditional support vector machine models trained on samples with 3- to 5-dimensional input vectors. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.

1. Replace with abstract title

Science.gov (United States)

Coho, Aleksander; Kioussis, Nicholas

2003-03-01

We use the semidiscrete variational generalized Peierls-Nabarro model to study the effect of Cu alloying on the dislocation properties of Al. First-principles density functional theory (DFT) is used to calculate the generalized-stacking-fault (GSF) energy surface when a plane, on which one in four Al atoms has been replaced with a Cu atom, slips over a pure Al plane. Various dislocation core properties (core width, energy, Peierls stress, dissociation tendency) are investigated and compared with the pure Al case. Cu alloying lowers the intrinsic stacking fault (ISF) energy, which makes dislocations more likely to dissociate into partials. We also try to understand the lowering of the ISF energy in terms of Al-Cu and Al-Al bond formation and breaking during shearing along the direction. From the above we draw conclusions about the effects of Cu alloying on the mechanical properties of Al.

2. Iron replacement therapy

DEFF Research Database (Denmark)

Nielsen, Ole Haagen; Coskun, Mehmet; Weiss, Günter

2016-01-01

PURPOSE OF REVIEW: Approximately, one-third of the world's population suffers from anemia, and at least half of these cases are because of iron deficiency. With the introduction of new intravenous iron preparations over the last decade, uncertainty has arisen when these compounds should...... be administered and under which circumstances oral therapy is still an appropriate and effective treatment. RECENT FINDINGS: Numerous guidelines are available, but none go into detail about therapeutic start and end points or how iron-deficiency anemia should be best treated depending on the underlying cause...... of iron deficiency or in regard to concomitant underlying or additional diseases. SUMMARY: The study points to major issues to be considered in revisions of future guidelines for the true optimal iron replacement therapy, including how to assess the need for treatment, when to start and when to stop...

3. Total ankle joint replacement.

Science.gov (United States)

2016-02-01

Ankle arthritis results in a stiff and painful ankle and can be a major cause of disability. For people with end-stage ankle arthritis, arthrodesis (ankle fusion) is effective at reducing pain in the shorter term, but results in a fixed joint, and over time the loss of mobility places stress on other joints in the foot that may lead to arthritis, pain and dysfunction. Another option is to perform a total ankle joint replacement, with the aim of giving the patient a mobile and pain-free ankle. In this article we review the efficacy of this procedure, including how it compares to ankle arthrodesis, and consider the indications and complications. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

4. Support vector machines in analysis of top quark production

International Nuclear Information System (INIS)

Vaiciulis, A.

2003-01-01

The Support Vector Machine (SVM) learning algorithm is a new alternative to multivariate methods such as neural networks. Potential applications of SVMs in high energy physics include the common classification problem of signal/background discrimination as well as particle identification. A comparison of a conventional method and an SVM algorithm is presented here for the case of identifying top quark events in Run II physics at the CDF experiment
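The signal/background discrimination task can be sketched with a linear SVM trained by sub-gradient descent on the hinge loss; this stands in for the SVM idea only (the CDF analysis used its own implementation, and the data, names and hyperparameters here are invented):

```python
import numpy as np

# Linear SVM sketch for two-class discrimination: minimize the regularized
# hinge loss  lam*|w|^2/2 + mean(max(0, 1 - y*(w.x + b)))  by sub-gradient descent.
rng = np.random.default_rng(0)
X_sig = rng.normal(loc=+2.0, scale=0.5, size=(100, 2))   # synthetic "signal" events
X_bkg = rng.normal(loc=-2.0, scale=0.5, size=(100, 2))   # synthetic "background"
X = np.vstack([X_sig, X_bkg])
y = np.array([1.0] * 100 + [-1.0] * 100)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                      # margin violators drive the update
    grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
    grad_b = -y[mask].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = float((np.sign(X @ w + b) == y).mean())
print(accuracy)
```

On these well-separated synthetic clusters the learned hyperplane classifies essentially all events correctly; in the physics setting the inputs would be kinematic event features rather than Gaussian blobs.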

5. A new hybrid imperialist competitive algorithm on data clustering

Modified imperialist competitive algorithm; simulated annealing; ... Clustering is one of the unsupervised learning branches where a set of patterns, usually vectors, ... machine classification is based on design, operation, and/or purpose.

6. Experimental Evaluation of Integral Transformations for Engineering Drawings Vectorization

Directory of Open Access Journals (Sweden)

2014-12-01

Full Text Available The concept of digital manufacturing supposes the application of digital technologies in the whole product life cycle. Direct digital manufacturing includes information technology processes where products are manufactured directly from a 3D CAD model. In digital manufacturing, the engineering drawing is replaced by the CAD product model. In contemporary practice, many paper-based engineering drawings are still archived. They can be digitized by scanner, stored in a raster graphics format, and then vectorized for interactive editing in a specific software system for technical drawing, or for archiving in a standard vector graphics file format. The vector format is also suitable for generating 3D models. The article deals with the use of selected integral transformations (Fourier, Hough) in the vectorization phase of digitized raster engineering drawings.
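The Hough transformation used in line vectorization can be sketched as a small accumulator over (theta, rho) space (illustrative code, not the evaluated implementation): each point votes for every line rho = x*cos(theta) + y*sin(theta) passing through it, and the accumulator peak identifies the dominant line.

```python
import numpy as np

# Minimal Hough-transform sketch: points vote in (theta, rho) space; the
# accumulator peak recovers the dominant line rho = x*cos(theta) + y*sin(theta).
def hough_peak(points, n_theta=180, rho_res=1.0):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    max_rho = float(np.ceil(max(np.hypot(x, y) for x, y in points)))
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1   # one vote per (theta, rho) cell
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], r * rho_res - max_rho

# Ten collinear points on the vertical line x = 5 (theta = 0, rho = 5).
theta, rho = hough_peak([(5, yy) for yy in range(10)])
print(theta, rho)  # 0.0 5.0
```

A production vectorizer would first binarize and thin the scanned drawing, then run this voting over the remaining foreground pixels.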

7. Estimation of vector velocity

DEFF Research Database (Denmark)

2000-01-01

Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

8. Production of lentiviral vectors

Directory of Open Access Journals (Sweden)

Otto-Wilhelm Merten

2016-01-01

Full Text Available Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments tend to use hollow fiber reactors, suspension culture processes, and stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles for clinical material will be presented.

9. Orthogonalisation of Vectors

The Gram-Schmidt process is one of the first things one learns in a course ... We might want to stay as close to the experimental data as possible when converting these vectors to orthonormal ones demanded by the model. The process of finding the closest orthonormal ... is obtained by writing the matrix A = [a_1, ..., a_n], then.
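The Gram-Schmidt process mentioned above can be sketched directly: each column of A = [a_1, ..., a_n] has its components along the previously computed orthonormal vectors removed, then is normalized. (This is classical Gram-Schmidt; for ill-conditioned data the modified variant is numerically safer.)

```python
import numpy as np

# Classical Gram-Schmidt: orthonormalise the columns of A = [a_1, ..., a_n].
def gram_schmidt(A):
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]  # remove component along q_i
        Q[:, j] = v / np.linalg.norm(v)         # normalize the remainder
    return Q

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```

The resulting columns of Q span the same subspace as those of A while being mutually orthogonal and of unit length.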

10. Calculus with vectors

CERN Document Server

Treiman, Jay S

2014-01-01

Calculus with Vectors grew out of a strong need for a beginning calculus textbook for undergraduates who intend to pursue careers in STEM. fields. The approach introduces vector-valued functions from the start, emphasizing the connections between one-variable and multi-variable calculus. The text includes early vectors and early transcendentals and includes a rigorous but informal approach to vectors. Examples and focused applications are well presented along with an abundance of motivating exercises. All three-dimensional graphs have rotatable versions included as extra source materials and may be freely downloaded and manipulated with Maple Player; a free Maple Player App is available for the iPad on iTunes. The approaches taken to topics such as the derivation of the derivatives of sine and cosine, the approach to limits, and the use of "tables" of integration have been modified from the standards seen in other textbooks in order to maximize the ease with which students may comprehend the material. Additio...

11. On vector equilibrium problem

[G] Giannessi F, Theorems of alternative, quadratic programs and complementarity problems, in: Variational Inequalities and Complementarity Problems (eds) R W Cottle, F Giannessi and J L Lions (New York: Wiley) (1980) pp. 151–186. [K1] Kazmi K R, Existence of solutions for vector optimization, Appl. Math. Lett. 9 (1996).

12. Vector-borne Infections

Centers for Disease Control (CDC) Podcasts

2011-04-18

This podcast discusses emerging vector-borne pathogens, their role as prominent contributors to emerging infectious diseases, how they're spread, and the ineffectiveness of mosquito control methods.  Created: 4/18/2011 by National Center for Emerging Zoonotic and Infectious Diseases (NCEZID).   Date Released: 4/27/2011.

13. Improved Interpolation Kernels for Super-resolution Algorithms

DEFF Research Database (Denmark)

Rasti, Pejman; Orlova, Olga; Tamberg, Gert

2016-01-01

Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

14. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

Science.gov (United States)

Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

2017-12-01

Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at
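Step (i) can be sketched by flattening the gene x time x condition array so each gene becomes one time-condition concatenated vector, then clustering those vectors. The shapes, synthetic data and the tiny two-cluster k-means below are invented for illustration and differ from TimesVector's actual pipeline.

```python
import numpy as np

# Synthetic expression array: 6 genes x 4 time points x 2 conditions,
# with two obvious groups (low vs high expression).
rng = np.random.default_rng(1)
low = 0.1 * rng.random((3, 4, 2))           # three low-expression genes
high = 0.9 + 0.1 * rng.random((3, 4, 2))    # three high-expression genes
expr = np.concatenate([low, high])          # shape (6, 4, 2)

vectors = expr.reshape(len(expr), -1)       # one concatenated vector per gene

def two_means(X, iters=20):
    # Tiny 2-cluster k-means with deterministic init (first and last row).
    centers = X[[0, len(X) - 1]].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = two_means(vectors)
print(labels)  # genes 0-2 land in one cluster, genes 3-5 in the other
```

TimesVector's remaining steps then inspect each cluster to decide whether its expression pattern is similar or distinct across the sample conditions.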

15. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

Directory of Open Access Journals (Sweden)

Hailun Wang

2017-01-01

Full Text Available Support vector regression is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
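
As a concrete illustration, a mixed kernel is a weighted combination of base kernels. The sketch below combines a local RBF kernel and a global polynomial kernel with a fixed fusion coefficient `lam`; in the paper this coefficient (together with the kernel and regression parameters) is estimated adaptively by a cubature Kalman filter, so the particular kernels, names, and constant weight here are our own assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    """Polynomial kernel matrix."""
    return (X @ Y.T + c) ** degree

def mixed_kernel(X, Y, lam=0.7, gamma=0.5, degree=2, c=1.0):
    """Convex combination of a local (RBF) and a global (polynomial) kernel.

    lam is the fusion coefficient; here it is a fixed constant for
    illustration rather than an adaptively estimated state variable.
    """
    return lam * rbf_kernel(X, Y, gamma) + (1 - lam) * poly_kernel(X, Y, degree, c)
```

A convex combination of positive-definite kernels is itself positive definite, so the mixed kernel can be dropped into any kernel regression machinery.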

16. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

Science.gov (United States)

Bektešević, Dino; Vinković, Dejan

2017-11-01

Computer vision algorithms are powerful tools in astronomical image analysis, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the remaining linear features to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
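
The Hough step at the core of such a pipeline can be sketched in a few lines: every bright pixel votes for all (ρ, θ) line parameterizations passing through it, and peaks in the accumulator correspond to lines. This is a generic textbook Hough transform, not the authors' optimized implementation.

```python
import numpy as np

def hough_lines(binary_img, n_theta=180, peak_count=1):
    """Accumulate votes in (rho, theta) space for bright pixels.

    Returns the (rho, theta) parameters of the strongest line(s),
    where a line is x*cos(theta) + y*sin(theta) = rho.
    """
    h, w = binary_img.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int64)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        # each pixel votes once per theta for the rho it implies
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    peaks = np.argsort(acc.ravel())[::-1][:peak_count]
    ri, ti = np.unravel_index(peaks, acc.shape)
    return [(rhos[i], thetas[j]) for i, j in zip(ri, ti)]
```

On an image containing only a vertical line at x = 3, the strongest accumulator peak is (ρ, θ) = (3, 0).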

17. Algorithmic alternatives

International Nuclear Information System (INIS)

Creutz, M.

1987-11-01

A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

18. Combinatorial algorithms

CERN Document Server

Hu, T C

2002-01-01

Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discusses binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

19. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

Directory of Open Access Journals (Sweden)

Ibrahim Baz

2008-04-01

Full Text Available This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements line thinning and simple neighborhood methods to perform vectorization. The model allows users to define criteria which are crucial for the vectorization process. With this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented in a computer program and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can vectorize the specified raster data quickly and accurately.

20. Crisp Clustering Algorithm for 3D Geospatial Vector Data Quantization

DEFF Research Database (Denmark)

Azri, Suhaibah; Anton, François; Ujang, Uznir

2015-01-01

In the next few years, 3D data is expected to be an intrinsic part of geospatial data. However, issues on 3D spatial data management are still in the research stage. One of the issues is performance deterioration during 3D data retrieval. Thus, a practical 3D index structure is required for effic...

1. Development of an algorithm for 2-Dimensional vector geometry in ...

African Journals Online (AJOL)

2. A bibliography on parallel and vector numerical algorithms

Science.gov (United States)

Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

1988-01-01

This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

3. Matrix Multiplication Algorithm Selection with Support Vector Machines

Science.gov (United States)

2015-05-01

STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA), and ASPIRE Lab industrial sponsors and affiliates Intel, Google...Nokia, NVIDIA, and Oracle. Any opinions, findings, conclusions, or recommendations in this paper are solely those of the authors and does not neces

4. Numerical solution of integral equations, describing mass spectrum of vector mesons

International Nuclear Information System (INIS)

Zhidkov, E.P.; Nikonov, E.G.; Sidorov, A.V.; Skachkov, N.B.; Khoromskij, B.N.

1988-01-01

A numerical algorithm for solving the quasipotential integral equation in momentum space is described. The results of numerical computations of the vector meson mass spectrum and the leptonic decay width are given in comparison with the experimental data

5. Vector control of three-phase AC/DC front-end converter

directional power flow capability. A design procedure for selection of control parameters is discussed. A simple algorithm for unit-vector generation is presented. Starting current transients are studied with particular emphasis on high-power ...

6. Application of Hybrid Quantum Tabu Search with Support Vector Regression (SVR) for Load Forecasting

Directory of Open Access Journals (Sweden)

Cheng-Wen Lee

2016-10-01

Full Text Available Hybridizing chaotic evolutionary algorithms with support vector regression (SVR) to improve forecasting accuracy is a hot topic in electricity load forecasting. Trapping at local optima and premature convergence are critical shortcomings of the tabu search (TS) algorithm. This paper investigates potential improvements of the TS algorithm by applying quantum computing mechanics to enhance the search information sharing mechanism (tabu memory) and improve the forecasting accuracy. This article presents an SVR-based load forecasting model that integrates quantum behaviors and the TS algorithm with the support vector regression model (namely, SVRQTS) to obtain a more satisfactory forecasting accuracy. Numerical examples demonstrate that the proposed model outperforms the alternatives.

7. Evaluating automatically parallelized versions of the support vector machine

NARCIS (Netherlands)

Codreanu, V.; Dröge, B.; Williams, D.; Yasar, B.; Yang, P.; Liu, B.; Dong, F.; Surinta, O.; Schomaker, L.R.B.; Roerdink, J.B.T.M.; Wiering, M.A.

2016-01-01

The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the

8. Development of precursors recognition methods in vector signals

Science.gov (United States)

Kapralov, V. G.; Elagin, V. V.; Kaveeva, E. G.; Stankevich, L. A.; Dremin, M. M.; Krylov, S. V.; Borovov, A. E.; Harfush, H. A.; Sedov, K. S.

2017-10-01

Precursor recognition methods in vector signals of plasma diagnostics are presented. Their requirements and possible options for their development are considered. In particular, the variants of using symbolic regression for building a plasma disruption prediction system are discussed. The initial data preparation using correlation analysis and symbolic regression is discussed. Special attention is paid to the possibility of using algorithms in real time.

9. Support vector machine: a tool for mapping mineral prospectivity

NARCIS (Netherlands)

Zuo, R.; Carranza, E.J.M

2011-01-01

In this contribution, we describe an application of support vector machine (SVM), a supervised learning algorithm, to mineral prospectivity mapping. The free R package e1071 is used to construct a SVM with sigmoid kernel function to map prospectivity for Au deposits in western Meguma Terrain of Nova

10. Evaluating automatically parallelized versions of the support vector machine

NARCIS (Netherlands)

Codreanu, Valeriu; Droge, Bob; Williams, David; Yasar, Burhan; Yang, Fo; Liu, Baoquan; Dong, Feng; Surinta, Olarik; Schomaker, Lambertus; Roerdink, Jos; Wiering, Marco

2014-01-01

The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the

11. Reconfigurable support vector machine classifier with approximate computing

NARCIS (Netherlands)

van Leussen, M.J.; Huisken, J.; Wang, L.; Jiao, H.; De Gyvez, J.P.

2017-01-01

Support Vector Machine (SVM) is one of the most popular machine learning algorithms. An energy-efficient SVM classifier is proposed in this paper, where approximate computing is utilized to reduce energy consumption and silicon area. A hardware architecture with reconfigurable kernels and

12. Infinite ensemble of support vector machines for prediction of ...

African Journals Online (AJOL)

Many researchers have demonstrated the use of artificial neural networks (ANNs) to predict musculoskeletal disorders risk associated with occupational exposures. In order to improve the accuracy of LBDs risk classification, this paper proposes to use the support vector machines (SVMs), a machine learning algorithm used ...

13. Support Vector Machines: Relevance Feedback and Information Retrieval.

Science.gov (United States)

Drucker, Harris; Shahrary, Behzad; Gibbon, David C.

2002-01-01

Compares support vector machines (SVMs) to Rocchio, Ide regular and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevancy feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…

14. Autodriver algorithm

Directory of Open Access Journals (Sweden)

Anna Bourmistrova

2011-02-01

15. Eigenvalue Decomposition-Based Modified Newton Algorithm

Directory of Open Access Journals (Sweden)

Wen-jun Wang

2013-01-01

Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
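
The modification can be sketched directly from the description: take an eigendecomposition of the Hessian, replace negative eigenvalues by their absolute values, rebuild the matrix, and solve for the Newton direction. A minimal sketch (the small floor on the eigenvalues is our own safeguard against near-singularity, not part of the paper):

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Eigenvalue decomposition-based modified Newton direction.

    Replaces negative eigenvalues of the (symmetric) Hessian with their
    absolute values, reconstructs the matrix, and solves for the step,
    so the returned direction is always a descent direction.
    """
    w, V = np.linalg.eigh(hess)      # symmetric eigendecomposition
    w = np.abs(w)                    # flip negative curvature
    w[w < 1e-10] = 1e-10             # floor to avoid a singular solve
    hess_pd = (V * w) @ V.T          # reconstructed positive-definite Hessian
    return -np.linalg.solve(hess_pd, grad)
```

For an indefinite Hessian diag(2, -2) and gradient (1, 1), the reconstructed matrix is diag(2, 2) and the direction is (-0.5, -0.5), which satisfies grad·d < 0.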

16. Vector grammars and PN machines

Institute of Scientific and Technical Information of China (English)

蒋昌俊

1996-01-01

The concept of vector grammars under string semantics is introduced. The class of vector grammars is given, which is similar to the class of Chomsky grammars. Regular vector grammars are divided further. The strong and weak relations between vector grammars and scalar grammars are discussed, from which the spectrum system graph of scalar and vector grammars is constructed. The equivalence between regular vector grammars and Petri nets (also called PN machines) is pointed out. The hybrid PN machine is introduced, and its language is proved equivalent to the language of context-free vector grammars. Thus a complete relation structure between vector grammars and PN machines is formed.

17. On the existence of polynomial Lyapunov functions for rationally stable vector fields

DEFF Research Database (Denmark)

Leth, Tobias; Wisniewski, Rafal; Sloth, Christoffer

2018-01-01

This paper proves the existence of polynomial Lyapunov functions for rationally stable vector fields. For practical purposes the existence of polynomial Lyapunov functions plays a significant role, since polynomial Lyapunov functions can be found algorithmically. The paper extends an existing result on exponentially stable vector fields to the case of rational stability. For asymptotically stable vector fields, a known counterexample is investigated to exhibit the mechanisms responsible for the inability to extend the result further.

18. Vehicle Based Vector Sensor

Science.gov (United States)

2015-09-28

A buoyant underwater vehicle with an interior space, in which the length of the underwater vehicle is equal to one tenth of the acoustic wavelength; an unmanned underwater vehicle that can function as an acoustic vector sensor. Description of the Prior Art: It is known that a propagating

19. Reciprocity in Vector Acoustics

Science.gov (United States)

2017-03-01

Green’s Theorem applied to the left hand side of Equation (3.2) converts it to a surface integral that vanishes for the impedance boundary conditions one... There are situations where this assumption does not hold, such as at boundaries between layers or in an inhomogeneous layer, because the density gradient... instead of requiring one model run for each source location. Application of the vector-scalar reciprocity principle is demonstrated with analytic

20. Advances in the replacement and enhanced replacement method in QSAR and QSPR theories.

Science.gov (United States)

Mercader, Andrew G; Duchowicz, Pablo R; Fernández, Francisco M; Castro, Eduardo A

2011-07-25

The selection of an optimal set of molecular descriptors from a much greater pool of such regression variables is a crucial step in the development of QSAR and QSPR models. The aim of this work is to further improve this important selection process. For this reason three different alternatives for the initial steps of our recently developed enhanced replacement method (ERM) and replacement method (RM) are proposed. These approaches had previously proven to yield near-optimal results with a much smaller number of linear regressions than the full search. The algorithms were tested on four different experimental data sets, formed by collections of 116, 200, 78, and 100 experimental records from different compounds and 1268, 1338, 1187, and 1306 molecular descriptors, respectively. The comparisons showed that one of the new alternatives further improves the ERM, which has been shown to be superior to genetic algorithms for the selection of an optimal set of molecular descriptors from a much greater pool. The new proposed alternative also improves the simpler, lower computational demand algorithm RM.
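
The core replacement step can be sketched as follows: starting from a subset of d descriptors, repeatedly swap one chosen descriptor for any pool descriptor that lowers the residual sum of squares of the linear fit, until no swap improves the model. This is our minimal reading of the RM idea, not the authors' code (the ERM adds further stages on top of this).

```python
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of a least-squares fit on the chosen columns."""
    A = np.column_stack([np.ones(len(y)), X[:, subset]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

def replacement_method(X, y, d, seed=0):
    """Greedy replacement search over descriptor subsets of size d.

    Repeatedly tries to swap each chosen descriptor for any pool
    descriptor that lowers the RSS, until no swap improves the model.
    """
    rng = np.random.default_rng(seed)
    subset = list(rng.choice(X.shape[1], d, replace=False))
    best = rss(X, y, subset)
    improved = True
    while improved:
        improved = False
        for i in range(d):
            for j in range(X.shape[1]):
                if j in subset:
                    continue
                trial = subset.copy()
                trial[i] = j
                score = rss(X, y, trial)
                if score < best - 1e-12:
                    subset, best, improved = trial, score, True
    return sorted(subset), best
```

On synthetic data where the response depends on exactly two of five candidate descriptors, the search recovers that pair, while examining far fewer regressions than the full subset enumeration.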

1. Contact replacement for NMR resonance assignment.

Science.gov (United States)

Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris

2008-07-01

Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 AA), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy: above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.

2. Tensor Calculus: Unlearning Vector Calculus

Science.gov (United States)

Lee, Wha-Suck; Engelbrecht, Johann; Moller, Rita

2018-01-01

Tensor calculus is critical in the study of the vector calculus of the surface of a body. Indeed, tensor calculus is a natural step-up for vector calculus. This paper presents some pitfalls of a traditional course in vector calculus in transitioning to tensor calculus. We show how a deeper emphasis on traditional topics such as the Jacobian can…

3. Hip Replacement: MedlinePlus Health Topic

Science.gov (United States)


4. Phase matching in quantum searching and the improved Grover algorithm

International Nuclear Information System (INIS)

Long Guilu; Li Yansong; Xiao Li; Tu Changcun; Sun Yang

2004-01-01

The authors briefly introduce some of their recent work related to the phase matching condition in quantum searching algorithms and the improved Grover algorithm. When one replaces the two phase inversions in the Grover algorithm with arbitrary phase rotations, the modified algorithm usually fails to find the marked state unless a phase matching condition is satisfied between the two phases. Since the Grover algorithm does not have a 100% success rate, an improved Grover algorithm with zero failure rate is given by replacing the phase inversions with angles that depend on the size of the database. Other aspects of the Grover algorithm, such as the SO(3) picture of quantum searching and the dominant gate imperfections in the Grover algorithm, are also mentioned. (author)
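
The phase matching condition is easy to see numerically. The toy simulation below (our own sketch, not from the paper) replaces both phase inversions with rotations θ and φ: the oracle multiplies the marked amplitude by e^(iθ), and the "inversion about the mean" step becomes I − (1 − e^(iφ))|s⟩⟨s|, where |s⟩ is the uniform superposition (the overall −1 of the standard iteration is a global phase and is dropped). With θ = φ = π, one iteration on a 4-item database succeeds with probability 1; with mismatched phases the success probability drops.

```python
import numpy as np

def grover_success(n_items, marked, theta, phi, iters):
    """Success probability of a generalized Grover search.

    Oracle: multiply the marked amplitude by exp(i*theta).
    Diffusion: apply I - (1 - exp(i*phi)) |s><s|, |s> = uniform state.
    theta = phi = pi recovers the standard iteration up to global phase.
    """
    psi = np.full(n_items, 1 / np.sqrt(n_items), dtype=complex)
    s = psi.copy()
    for _ in range(iters):
        psi[marked] *= np.exp(1j * theta)                      # oracle rotation
        psi = psi - (1 - np.exp(1j * phi)) * (s.conj() @ psi) * s
    return abs(psi[marked]) ** 2

# Matched phases succeed; mismatched phases degrade the search.
p_match = grover_success(4, 0, np.pi, np.pi, 1)        # -> 1.0
p_mismatch = grover_success(4, 0, np.pi, np.pi / 2, 1)  # -> 0.625
```

The mismatched case (θ = π, φ = π/2) leaves the marked amplitude at (−3 + i)/4, i.e. success probability 10/16 = 0.625 instead of 1.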

5. Robust point matching via vector field consensus.

Science.gov (United States)

Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

2014-04-01

In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.

6. Optimized support vector regression for drilling rate of penetration estimation

Science.gov (United States)

Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa

2015-12-01

In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personal safety, environment protection, adequate information of penetrated formations and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called 'optimized support vector regression' is employed for building a formulation between input variables and ROP. The algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization improved the support vector regression performance by virtue of selecting proper values for its parameters. In order to evaluate the ability of the optimization algorithms to enhance SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG), which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved further improvement in the prediction accuracy of SVR compared to both the GA and HPG. Moreover, the predictive model derived from a back propagation neural network (BPNN), which is the traditional approach for estimating ROP, was selected for comparison with CSSVR. The comparative results revealed the superiority of CSSVR. This study inferred that CSSVR is a viable option for precise estimation of ROP.

7. Educating My Replacement

Science.gov (United States)

Tarter, Jill

, in partnership with the dedicated teachers out there, I think I can help promote the critical thinking skills and scientific literacy of the next generation of voters. Hopefully, I can also help train my replacement to be a better scientist, capable of seizing all the opportunities generated by advances in technology and our improved understanding of the universe to craft search strategies with greater probability of success than those I have initiated.

8. Algorithmic Self

DEFF Research Database (Denmark)

Markham, Annette

This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

9. Hierarchal scalar and vector tetrahedra

International Nuclear Information System (INIS)

Webb, J.P.; Forghani, B.

1993-01-01

A new set of scalar and vector tetrahedral finite elements are presented. The elements are hierarchal, allowing mixing of polynomial orders; scalar orders up to 3 and vector orders up to 2 are defined. The vector elements impose tangential continuity on the field but not normal continuity, making them suitable for representing the vector electric or magnetic field. Further, the scalar and vector elements are such that they can easily be used in the same mesh, a requirement of many quasi-static formulations. Results are presented for two 50 Hz problems: the Bath Cube, and TEAM Problem 7

10. Leishmaniasis vector behaviour in Kenya

International Nuclear Information System (INIS)

Mutinga, M.J.

1980-01-01

Leishmaniasis in Kenya exists in two forms: cutaneous and visceral. The vectors of visceral leishmaniasis have been the subject of investigation by various researchers since World War II, when the outbreak of the disease was first noticed. The vectors of cutaneous leishmaniasis were first worked on only a decade ago after the discovery of the disease focus in Mt. Elgon. The vector behaviour of these diseases, namely Phlebotomus pedifer, the vector of cutaneous leishmaniasis, and Phlebotomus martini, the vector of visceral leishmaniasis, are discussed in detail. P. pedifer has been found to breed and bite inside caves, whereas P. martini mainly bites inside houses. (author)

11. Transforming Normal Programs by Replacement

NARCIS (Netherlands)

Bossi, Annalisa; Pettorossi, A.; Cocco, Nicoletta; Etalle, Sandro

1992-01-01

The replacement transformation operation, already defined in [28], is studied with respect to normal programs. We give applicability conditions able to ensure the correctness of the operation with respect to Fitting's and Kunen's semantics. We show how replacement can mimic other transformation operations such as thinning,

12. A multistage motion vector processing method for motion-compensated frame interpolation.

Science.gov (United States)

Huang, Ai- Mei; Nguyen, Truong Q

2008-05-01

In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame-rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid choosing an identical unreliable one. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
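
The constrained vector median filter mentioned above can be sketched as follows. This is our own minimal version: the actual algorithm operates on spatial neighborhoods of motion vectors and derives the reliability flags from the residual-energy analysis described in the abstract.

```python
import numpy as np

def vector_median(candidates):
    """Vector median: the candidate minimizing the summed Euclidean
    distance to all candidates (a robust pick among motion vectors)."""
    C = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1).sum(axis=1)
    return C[int(d.argmin())]

def constrained_vector_median(candidates, reliable_mask):
    """Restrict the median choice to motion vectors flagged reliable,
    falling back to all candidates when none is reliable."""
    C = np.asarray(candidates, dtype=float)
    mask = np.asarray(reliable_mask, dtype=bool)
    pool = C[mask] if mask.any() else C
    d = np.linalg.norm(pool[:, None, :] - C[None, :, :], axis=-1).sum(axis=1)
    return pool[int(d.argmin())]
```

The constraint is what prevents the filter from returning an unreliable vector that merely happens to sit near the middle of the candidate set.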

13. Minimally invasive aortic valve replacement

DEFF Research Database (Denmark)

Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K

2009-01-01

In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy... operations were completed as mini-sternotomies, 4 died later of noncardiac causes. The aortic cross-clamp and perfusion times were significantly different across all groups (P... replacement... is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of a randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should...

14. Parallel algorithms for numerical linear algebra

CERN Document Server

van der Vorst, H

1990-01-01

This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

15. Universal algorithm of time sharing

International Nuclear Information System (INIS)

Silin, I.N.; Fedyun'kin, E.D.

1979-01-01

A timesharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrent formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data collected during the timesharing process. The algorithm includes an optimal swapping-out procedure for replacing jobs in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed

16. A Turn-Projected State-Based Conflict Resolution Algorithm

Science.gov (United States)

Butler, Ricky W.; Lewis, Timothy A.

2013-01-01

State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of an aircraft's trajectory is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.

17. Vector-borne diseases

DEFF Research Database (Denmark)

More, Simon J.; Bicout, Dominique; Bøtner, Anette

2017-01-01

After a request from the European Commission, EFSA's Panel on Animal Health and Welfare summarised the main characteristics of 36 vector-borne diseases (VBDs) in 36 web-based story maps. The risk of introduction into the EU through movement of livestock or pets was assessed for each of the 36 VBDs......-agents for which the rate of introduction was estimated to be very low, no further assessments were made. Due to the uncertainty related to some parameters used for the risk assessment, or the unstable or unpredictable disease situation in some of the source regions, it is recommended to update the assessment when...

18. Scalar and vector Galileons

International Nuclear Information System (INIS)

Rodríguez, Yeinzon; Navarro, Andrés A.

2017-01-01

An alternative for the construction of fundamental theories is the introduction of Galileons. These are fields whose action leads to equations of motion no higher than second order. As this is a necessary but not sufficient condition for the Hamiltonian to be bounded from below, as long as the action is not degenerate, the Galileon construction is a way to avoid pathologies at both the classical and quantum levels. Galileon actions are, therefore, of great interest in many branches of physics, especially in high energy physics and cosmology. This proceedings contribution presents the generalities of the construction of both scalar and vector Galileons following two different but complementary routes. (paper)

19. Vectors to success

International Nuclear Information System (INIS)

Otsason, J.

1998-01-01

The Vector Pipeline project linking the Chicago supply hub to markets in eastern Canada, the northeastern U.S. and the Mid-Atlantic states, is described. Subsidiary objectives of the promoters are to match market timing to upstream pipelines and market requirements, and to provide low cost expandability to complement upstream expandability. The presentation includes description of the project, costs, leased facilities, rates and tariffs, right of way considerations, storage facilities and a project schedule. Construction is to begin in March 1999 and the line should be in service in November 1999

20. Application of Bred Vectors To Data Assimilation

Science.gov (United States)

Corazza, M.; Kalnay, E.; Patil, Dj

subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low-dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost. References: Corazza, M., E. Kalnay, DJ Patil, R. Morss, M. Cai, I. Szunyogh, BR Hunt, E. Ott and JA Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T.M., Snyder, C., and Morss, R.E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles, Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction.
PhD thesis, Massachusetts Institute

1. Cache-Oblivious Algorithms and Data Structures

DEFF Research Database (Denmark)

Brodal, Gerth Stølting

2004-01-01

Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described...... as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

2. Conjugate gradient algorithms using multiple recursions

Energy Technology Data Exchange (ETDEWEB)

Barth, T.; Manteuffel, T.

1996-12-31

Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.

3. Patients Unicondylar Knee Replacement vs. Total Knee Replacement

OpenAIRE

Hedra Eskander

2017-01-01

The aim of this review article is to analyse the clinical effectiveness of total knee replacement (TKR) compared to unicondylar knee replacement (UKR), in terms of survival rates, revision rates and postoperative complications. The keyword used was: knee arthroplasty. Nearly three thousand articles were found on 25 August 2016. Of those, only twenty-five were selected and reviewed because they were strictly focused on the topic of this article. Compared with those who have TKR, ...

4. Parallel algorithms

CERN Document Server

Casanova, Henri; Robert, Yves

2008-01-01

""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

5. Algorithm 865

DEFF Research Database (Denmark)

Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

2007-01-01

We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...

6. Vector-vector production in photon-photon interactions

International Nuclear Information System (INIS)

Ronan, M.T.

1988-01-01

Measurements of exclusive untagged ρ⁰ρ⁰, ρφ, K*K̄*, and ρω production and tagged ρ⁰ρ⁰ production in photon-photon interactions by the TPC/Two-Gamma experiment are reviewed. Comparisons to the results of other experiments and to models of vector-vector production are made. Fits to the data following a four-quark model prescription for vector meson pair production are also presented. 10 refs., 9 figs

7. Empirical study of parallel LRU simulation algorithms

Science.gov (United States)

Carr, Eric; Nicol, David M.

1994-01-01

This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
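
The stack-distance computation these parallel algorithms build on can be sketched in a few lines. This is a minimal serial illustration of the standard LRU stack-distance idea, not the MasPar or Paragon code from the paper; all names are ours:

```python
def stack_distances(trace):
    """Compute the LRU stack distance of each reference in a trace.

    The distance of a reference is the number of distinct addresses
    touched since its previous access (1 = re-reference of the most
    recently used address); first-time references get distance None
    ("infinite"). A reference hits in a fully associative LRU cache
    of capacity C exactly when its distance is <= C.
    """
    stack = []          # most recently used address at the front
    distances = []
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr) + 1   # 1-based depth = stack distance
            stack.pop(depth - 1)            # remove addr from its old slot
            distances.append(depth)
        else:
            distances.append(None)          # cold miss: infinite distance
        stack.insert(0, addr)               # addr becomes most recently used
    return distances

def hits(distances, capacity):
    """Hit count for a fully associative LRU cache of the given capacity."""
    return sum(1 for d in distances if d is not None and d <= capacity)
```

For the trace a b c a b d a this yields distances [None, None, None, 3, 3, None, 3], so a cache of capacity 3 scores three hits and a cache of capacity 2 scores none; one pass over the trace answers the hit-rate question for every cache size at once, which is why all five parallel algorithms target this quantity.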

8. ILUCG algorithm which minimizes in the Euclidean norm

International Nuclear Information System (INIS)

Petravic, M.; Kuo-Petravic, G.

1978-07-01

An algorithm is presented which solves sparse systems of linear equations of the form Ax = y, where A is non-symmetric, by the Incomplete LU Decomposition-Conjugate Gradient (ILUCG) method. The algorithm minimizes the error in the Euclidean norm ||x_i − x||_2, where x_i is the solution vector after the i-th iteration and x the exact solution vector. The results of a test on one real problem indicate that the algorithm is likely to be competitive with the best existing algorithms of its type
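
The "minimizes the error in the Euclidean norm" property can be made concrete with the classical unpreconditioned variant: conjugate gradients applied to the normal equations AAᵀz = y with x = Aᵀz (CGNE, Craig's method) minimizes ||x_i − x||_2 over the growing Krylov subspace at each step. The sketch below shows only that principle in NumPy; it omits the incomplete LU preconditioning that defines the actual ILUCG algorithm:

```python
import numpy as np

def cgne(A, y, iters=100, tol=1e-10):
    """Craig's method: CG on A A^T z = y, recovering x = A^T z implicitly.

    For nonsymmetric A, each iterate minimizes the Euclidean error norm
    ||x_i - x||_2 over the Krylov subspace built so far -- the same
    optimality property the ILUCG method above is built around, here
    without any incomplete-LU preconditioning.
    """
    x = np.zeros(A.shape[1])
    r = y - A @ x                 # residual of A x = y
    p = A.T @ r                   # search direction
    rr = r @ r
    for _ in range(iters):
        if np.sqrt(rr) < tol:     # converged: residual is tiny
            break
        alpha = rr / (p @ p)
        x = x + alpha * p
        r = r - alpha * (A @ p)
        rr_new = r @ r
        p = A.T @ r + (rr_new / rr) * p
        rr = rr_new
    return x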

9. Vertical vector face lift.

Science.gov (United States)

Somoano, Brian; Chan, Joanna; Morganroth, Greg

2011-01-01

Facial rejuvenation using local anesthesia has evolved in the past decade as a safer option for patients seeking fewer complications and minimal downtime. Mini- and short-scar face lifts using more conservative incision lengths and extent of undermining can be effective in the younger patient with lower face laxity and minimal loose, elastotic neck skin. By incorporating both an anterior and posterior approach and using an incision length between the mini and more traditional face lift, the Vertical Vector Face Lift can achieve longer-lasting and natural results with lesser cost and risk. Submentoplasty and liposuction of the neck and jawline, fundamental components of the vertical vector face lift, act synergistically with superficial musculoaponeurotic system plication to reestablish a more youthful, sculpted cervicomental angle, even in patients with prominent jowls. Dramatic results can be achieved in the right patient by combining with other procedures such as injectable fillers, chin implants, laser resurfacing, or upper and lower blepharoplasties. © 2011 Wiley Periodicals, Inc.

10. Vector control in leishmaniasis.

Science.gov (United States)

Kishore, K; Kumar, V; Kesari, S; Dinesh, D S; Kumar, A J; Das, P; Bhattacharya, S K

2006-03-01

Indoor residual spraying is a simple and cost-effective method of controlling endophilic vectors, and DDT remains the insecticide of choice for the control of leishmaniasis. However, resistance to insecticide is likely to become more widespread in the population, especially in those areas in which insecticide has been used for years. In this context, use of slow-release emulsified suspension (SRES) may be the best substitute. In this review, spraying frequencies of DDT and a new spray schedule are discussed. The role of biological control and environmental management in the control of leishmaniasis is emphasized. Allethrin (coil) 0.1 and 1.6 per cent prallethrin (liquid) have been found to be effective repellents against Phlebotomus argentipes, the vector of Indian kala-azar. Insecticide-impregnated bednets are another area which requires further research on a priority basis for the control of leishmaniasis. The role of satellite remote sensing for early prediction of disease by identifying sandfly-genic conditions cannot be underestimated. In future, synthetic pheromones may be exploited in the control of leishmaniasis.

11. Bridge health monitoring metrics : updating the bridge deficiency algorithm.

Science.gov (United States)

2009-10-01

As part of its bridge management system, the Alabama Department of Transportation (ALDOT) must decide how best to spend its bridge replacement funds. In making these decisions, ALDOT managers currently use a deficiency algorithm to rank bridges that ...

12. Extended SVM algorithms for multilevel trans-Z-source inverter

Directory of Open Access Journals (Sweden)

Aida Baghbany Oskouei

2016-03-01

Full Text Available This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works with a high switching frequency and generates the mean value of the desired load voltage in every switching interval. In this topology the output voltage is not limited to the dc source voltage, as in the traditional cascaded multilevel inverter, and can be increased through trans-Z-network shoot-through state control. Besides, it is more reliable against short circuits, and owing to the several dc sources in each phase of this topology, it is possible to use it with hybrid renewable energy sources. The proposed SVM algorithms comprise a combined modulation algorithm (SVPWM) and a shoot-through implementation in the dwell times of the voltage vectors algorithm. These algorithms are compared from the viewpoints of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected performance.

13. A Motion Estimation Algorithm Using DTCWT and ARPS

Directory of Open Access Journals (Sweden)

Unan Y. Oktiawati

2013-09-01

Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and Adaptive Rood Pattern Search (ARPS) block matching is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as the reference input, and frame n+2 is used to find the motion vector. Next, the ARPS block-search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that PSNR can be improved for mobile devices without degrading quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in Section 6.
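
The block-matching step at the heart of such an algorithm can be illustrated with the exhaustive SAD (sum of absolute differences) search that rood-pattern methods like ARPS approximate with far fewer SAD evaluations. This is a hedged baseline sketch under our own naming, not the paper's DTCWT/ARPS pipeline:

```python
import numpy as np

def best_motion_vector(ref, cur, top, left, block=8, radius=4):
    """Exhaustive SAD block matching within a +/-radius search window.

    Returns the (dy, dx) displacement such that the block at (top, left)
    in `cur` best matches the block at (top+dy, left+dx) in `ref`.
    ARPS visits only a rood-shaped subset of these candidates, guided by
    the motion vectors of neighbouring blocks, to cut the SAD count.
    """
    target = cur[top:top + block, left:left + block].astype(np.int64)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                      # candidate falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int64)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

Motion compensation then copies each reference block displaced by its motion vector; the residual between the compensated frame and the true frame is what gets coded.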

14. Counting Subspaces of a Finite Vector Space – 1

ply refer the reader to [2], where an exposition of Gauss's proof (among .... obtained. The above process can be easily reversed: let e1, ..., ek denote the k coordinate vectors in Fn, written as columns. Starting with a Ferrers diagram λ in a k×(n−k) grid, replace ... consists of n segments of unit length, of which k are vertical and ...

15. Experimental demonstration of E × B plasma divertor

International Nuclear Information System (INIS)

Strait, E.J.; Kerst, D.W.; Sprott, J.C.

1977-01-01

The E × B drift due to an applied radial electric field in a tokamak with a poloidal divertor can speed the flow of plasma out of the scrape-off region, and provide a means of externally controlling the flow rate and thus the width of the density fall-off. An experiment in the Wisconsin levitated toroidal octupole, using E × B drifts alone, demonstrates divertor-like behavior, including 70% reduction of plasma density near the wall and 40% reduction of plasma flux to the wall, with no adverse effects on confinement of the main plasma

16. Vectorization and multitasking with a Monte-Carlo code for neutron transport problems

International Nuclear Information System (INIS)

Chauvet, Y.

1985-04-01

This paper summarizes two improvements of a Monte Carlo code by resorting to vectorization and multitasking techniques. After a short presentation of the physical problem to solve and a description of the main difficulties to produce an efficient coding, this paper introduces the vectorization principles employed and briefly describes how the vectorized algorithm works. Next, measured performances on CRAY 1S, CYBER 205 and CRAY X-MP are compared. The second part of this paper is devoted to multitasking technique. Starting from the standard multitasking tools available with FORTRAN on CRAY X-MP/4, a multitasked algorithm and its measured speed-ups are presented. In conclusion we prove that vector and parallel computers are a great opportunity for such Monte Carlo algorithms

17. Resolving the 180-degree ambiguity in vector magnetic field measurements: The 'minimum' energy solution

Science.gov (United States)

Metcalf, Thomas R.

1994-01-01

I present a robust algorithm that resolves the 180-deg ambiguity in measurements of the solar vector magnetic field. The technique simultaneously minimizes both the divergence of the magnetic field and the electric current density using a simulated annealing algorithm. This results in the field orientation with approximately minimum free energy. The technique is well-founded physically and is simple to implement.

18. Parton-shower matching systematics in vector-boson-fusion WW production

Energy Technology Data Exchange (ETDEWEB)

Rauch, Michael [Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Plaetzer, Simon [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom)

2017-05-15

We perform a detailed analysis of next-to-leading order plus parton-shower matching in vector-boson-fusion WW production including leptonic decays. The study is performed in the Herwig 7 framework interfaced to VBFNLO 3, using the angular-ordered and dipole-based parton-shower algorithms combined with the subtractive and multiplicative-matching algorithms. (orig.)

19. Nuclear reactor fuel replacement system

International Nuclear Information System (INIS)

Kayano, Hiroyuki; Joge, Toshio.

1976-01-01

Object: To permit the direction in which a fuel replacement unit is moving to be monitored by the operator. Structure: When a fuel replacement unit approaches an intermediate goal position preset in the path of movement, renewal of data display on a goal position indicator is made every time the goal position is changed. With this renewal, the prevailing direction of movement of the fuel replacement unit can be monitored by the operator. When the control of movement is initiated, the co-ordinates of the intermediate goal point A are displayed on a goal position indicator. When the replacement unit reaches point A, the co-ordinates of the next intermediate point B are displayed, and upon reaching point B the co-ordinates of the (last) goal point C are displayed. (Nakamura, S.)

20. Slab replacement maturity guidelines : [summary].

Science.gov (United States)

2014-04-01

Concrete sets in hours at moderate temperatures, : but the bonds that make concrete strong continue : to mature over days to years. However, for : replacement concrete slabs on highways, it is : crucial that concrete develop enough strength : within ...

1. Prolonged Intermittent Renal Replacement Therapy.

Science.gov (United States)

Edrees, Fahad; Li, Tingting; Vijayan, Anitha

2016-05-01

Prolonged intermittent renal replacement therapy (PIRRT) is becoming an increasingly popular alternative to continuous renal replacement therapy in critically ill patients with acute kidney injury. There are significant practice variations in the provision of PIRRT across institutions, with respect to prescription, technology, and delivery of therapy. Clinical trials have generally demonstrated that PIRRT is non-inferior to continuous renal replacement therapy regarding patient outcomes. PIRRT offers cost-effective renal replacement therapy along with other advantages such as early patient mobilization and decreased nursing time. However, due to lack of standardization of the procedure, PIRRT still poses significant challenges, especially pertaining to appropriate drug dosing. Future guidelines and clinical trials should work toward developing consensus definitions for PIRRT and ensure optimal delivery of therapy. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

2. Video Vectorization via Tetrahedral Remeshing.

Science.gov (United States)

Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

2017-02-09

We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

3. Hyperbolic-symmetry vector fields.

Science.gov (United States)

Gao, Xu-Zhen; Pan, Yue; Cai, Meng-Qiang; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

2015-12-14

We present and construct a new kind of orthogonal coordinate system, hyperbolic coordinate system. We present and design a new kind of local linearly polarized vector fields, which is defined as the hyperbolic-symmetry vector fields because the points with the same polarization form a series of hyperbolae. We experimentally demonstrate the generation of such a kind of hyperbolic-symmetry vector optical fields. In particular, we also study the modified hyperbolic-symmetry vector optical fields with the twofold and fourfold symmetric states of polarization when introducing the mirror symmetry. The tight focusing behaviors of these vector fields are also investigated. In addition, we also fabricate micro-structures on the K9 glass surfaces by several tightly focused (modified) hyperbolic-symmetry vector fields patterns, which demonstrate that the simulated tightly focused fields are in good agreement with the fabricated micro-structures.

4. Extended vector-tensor theories

Energy Technology Data Exchange (ETDEWEB)

Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp [Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan)

2017-01-01

Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to metric and vector field. By imposing a degeneracy condition of the Lagrangian in the context of ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of thus obtained theories under such transformations.

5. Replacement of sub-systems

International Nuclear Information System (INIS)

Rosen, S.E.

1992-01-01

This paper describes a number of quality aspects related to the replacement of important systems or components in a nuclear power station. Reference is made to the steam generator replacement and power uprating performed at Ringhals 2 in Sweden in 1989. Since quality is a wide concept, special emphasis has been placed in this paper on important aspects that are not traditionally connected to quality. (author) 1 fig

6. Optimality Conditions in Vector Optimization

CERN Document Server

Jiménez, Manuel Arana; Lizana, Antonio Rufián

2011-01-01

Vector optimization is continuously needed in several science fields, particularly in economy, business, engineering, physics and mathematics. The evolution of these fields depends, in part, on the improvements in vector optimization in mathematical programming. The aim of this Ebook is to present the latest developments in vector optimization. The contributions have been written by some of the most eminent researchers in this field of mathematical programming. The Ebook is considered essential for researchers and students in this field.

7. Symmetric vectors and algebraic classification

International Nuclear Information System (INIS)

Leibowitz, E.

1980-01-01

The concept of symmetric vector field in Riemannian manifolds, which arises in the study of relativistic cosmological models, is analyzed. Symmetric vectors are tied up with the algebraic properties of the manifold curvature. A procedure for generating a congruence of symmetric fields out of a given pair is outlined. The case of a three-dimensional manifold of constant curvature (''isotropic universe'') is studied in detail, with all its symmetric vector fields being explicitly constructed

8. Vector continued fractions using a generalized inverse

International Nuclear Information System (INIS)

Haydock, Roger; Nex, C M M; Wexler, Geoffrey

2004-01-01

A real vector space combined with an inverse (involution) for vectors is sufficient to define a vector continued fraction whose parameters consist of vector shifts and changes of scale. The choice of sign for different components of the vector inverse permits construction of vector analogues of the Jacobi continued fraction. These vector Jacobi fractions are related to vector and scalar-valued polynomial functions of the vectors, which satisfy recurrence relations similar to those of orthogonal polynomials. The vector Jacobi fraction has strong convergence properties which are demonstrated analytically, and illustrated numerically
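
The two ingredients the abstract names, a vector space and an involutive vector inverse, are enough to evaluate a finite vector continued fraction numerically. The sketch below assumes the simplest sign choice, v⁻¹ = v/⟨v, v⟩ (all components positive), and evaluates b₀ + (b₁ + (b₂ + ...)⁻¹)⁻¹ from the bottom up; it is an illustration of the construction only, not the paper's Jacobi-fraction algorithm or its convergence analysis:

```python
import numpy as np

def vinv(v):
    """One choice of vector inverse: v / <v, v>.

    The paper allows independent sign changes on components of the
    inverse; taking all signs positive makes it an involution,
    i.e. vinv(vinv(v)) == v.
    """
    return v / np.dot(v, v)

def vector_continued_fraction(terms):
    """Evaluate b0 + inv(b1 + inv(b2 + ...)) from the bottom up."""
    terms = [np.asarray(t, dtype=float) for t in terms]
    acc = terms[-1]
    for b in reversed(terms[:-1]):
        acc = b + vinv(acc)
    return acc
```

With one-dimensional vectors this reduces to the ordinary scalar continued fraction (e.g. terms [1] and [2] give 1 + 1/2 = 1.5), which is a quick sanity check that the vector inverse generalizes the scalar reciprocal.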

9. Feeder replacement tooling and processes

International Nuclear Information System (INIS)

Mallozzi, R.; Goslin, R.; Pink, D.; Askari, A.

2008-01-01

Primary heat transport system feeder integrity has become a concern at some CANDU nuclear plants as a result of thinning caused by flow accelerated corrosion (FAC). Feeder inspections are indicating that life-limiting wall thinning can occur in the region between the Grayloc hub weld and second elbow of some outlet feeders. In some cases it has become necessary to replace thinned sections of affected feeders to restore feeder integrity to planned end of life. Atomic Energy of Canada Limited (AECL) and Babcock and Wilcox Canada Ltd. (B and W) have developed a new capability for replacement of single feeders at any location on the reactor face without impacting or interrupting operation of neighbouring feeders. This new capability consists of deploying trained crews with specialized tools and procedures for feeder replacements during planned outages. As may be expected, performing single feeder replacement in the congested working environment of an operational CANDU reactor face involves overcoming many challenges with respect to access to feeders, available clearances for tooling, and tooling operation and performance. This paper describes some of the challenges encountered during single feeder replacements and actions being taken by AECL and B and W to promote continuous improvement of feeder replacement tooling and processes and ensure well-executed outages. (author)

10. Chameleon vector bosons

International Nuclear Information System (INIS)

Nelson, Ann E.; Walsh, Jonathan

2008-01-01

We show that for a force mediated by a vector particle coupled to a conserved U(1) charge, the apparent range and strength can depend on the size and density of the source, and the proximity to other sources. This chameleon effect is due to screening from a light charged scalar. Such screening can weaken astrophysical constraints on new gauge bosons. As an example we consider the constraints on chameleonic gauged B-L. We show that although Casimir measurements greatly constrain any B-L force much stronger than gravity with range longer than 0.1 μm, there remains an experimental window for a long-range chameleonic B-L force. Such a force could be much stronger than gravity, and long or infinite range in vacuum, but have an effective range near the surface of the earth which is less than a micron.

11. Architecture and Vector Control

DEFF Research Database (Denmark)

von Seidlein, Lorenz; Knols, Bart GJ; Kirby, Matthew

2012-01-01

, closing of eaves and insecticide treated bednets. All of these interventions have an effect on the indoor climate. Temperature, humidity and airflow are critical for a comfortable climate. Air-conditioning and fans allow us to control indoor climate, but many people in Africa and Asia who carry the brunt...... of vector-borne diseases have no access to electricity. Many houses in the hot, humid regions of Asia have adapted to the environment, they are built of porous materials and are elevated on stilts features which allow a comfortable climate even in the presence of bednets and screens. In contrast, many...... buildings in Africa and Asia in respect to their indoor climate characteristics and finally, show how state-of-the-art 3D modelling can predict climate characteristics and help to optimize buildings....

12. Vector supersymmetric multiplets in two dimensions

International Nuclear Information System (INIS)

1990-01-01

The invariance of both the N=1 supersymmetric Yang-Mills theory and the N=1 supersymmetric off-shell Wess-Zumino model in four dimensions is proved. Dimensional reduction is then applied to obtain super Yang-Mills theory with extended supersymmetry, N=2, in two dimensions. The resulting theory is then truncated to N=1 super Yang-Mills, and with further truncation N=1/2 supersymmetry is shown to be possible. Then, using duality transformations, we find that the off-shell supersymmetry algebra closes and that the auxiliary fields are replaced by fourth-rank antisymmetric tensors with gauge symmetry. Finally, dimensional reduction is applied once more to obtain an N=2 extended off-shell supersymmetric model with two gauge vector fields.

13. DNA Minicircle Technology Improves Purity of Adeno-associated Viral Vector Preparations

Directory of Open Access Journals (Sweden)

Maria Schnödt

2016-01-01

Adeno-associated viral (AAV) vectors are considered one of the most promising delivery systems in human gene therapy. In addition, AAV vectors are frequently applied tools in preclinical and basic research. Despite this success, manufacturing pure AAV vector preparations remains a difficult task. While empty capsids can be removed from vector preparations owing to their lower density, state-of-the-art purification strategies have as yet failed to remove antibiotic resistance genes or other plasmid backbone sequences. Here, we report the development of minicircle (MC) constructs to replace AAV vector and helper plasmids for production of both single-stranded (ss) and self-complementary (sc) AAV vectors. As bacterial backbone sequences are removed during MC production, encapsidation of prokaryotic plasmid backbone sequences is avoided. This is of particular importance for scAAV vector preparations, which contained a disproportionately high amount of plasmid backbone sequences (up to 26.1%, versus up to 2.9% for ssAAV). Replacing standard packaging plasmids by MC constructs not only reduced these contaminations below the quantification limit, but also improved transduction efficiencies of scAAV preparations up to 30-fold. Thus, MC technology offers an easy-to-implement modification of standard AAV packaging protocols that significantly improves the quality of AAV vector preparations.

14. Vectorization of three-dimensional neutron diffusion code CITATION

International Nuclear Information System (INIS)

1985-01-01

Three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code is expected to be run at a high speed by using recent vector supercomputers, when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. Especially, calculation algorithms suited for vectorization of the inner-outer iterative calculations which spend most of the computing time are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner-iterations given as input data are also investigated since the computing time depends on these values. (author)
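The odd-even (red-black) mesh ordering described above is what makes the SOR sweep vectorizable: points of one "color" depend only on points of the other color, so each half-sweep can be issued as whole-array operations. A minimal NumPy sketch of this idea for a 2D Poisson problem (function name and parameters are mine for illustration, not the CITATION code itself):

```python
import numpy as np

def redblack_sor(f, h, omega=1.8, iters=500):
    """SOR for -laplacian(u) = f on a uniform grid with zero Dirichlet
    boundaries, using odd-even (red-black) ordering: each half-sweep
    updates an independent checkerboard of points, so the update is a
    vector operation instead of a sequential point-by-point sweep."""
    u = np.zeros_like(f, dtype=float)
    n, m = f.shape
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    interior = (ii > 0) & (ii < n - 1) & (jj > 0) & (jj < m - 1)
    for _ in range(iters):
        for color in (0, 1):                      # red sweep, then black
            mask = interior & ((ii + jj) % 2 == color)
            # Gauss-Seidel value at every point, computed in bulk.
            gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                         + h * h * f)
            u[mask] += omega * (gs[mask] - u[mask])   # over-relaxation
    return u
```

As in the record, the relaxation factor `omega` controls the iteration count; values near the optimum for the mesh size converge fastest.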

15. Vectorization of the KENO V.a criticality safety code

International Nuclear Information System (INIS)

Hollenbach, D.F.; Dodds, H.L.; Petrie, L.M.

1991-01-01

The development of the vector processor, which is used in the current generation of supercomputers and is beginning to appear in workstations, provides the potential for dramatic speed-ups for codes that are able to process data as vectors. Unfortunately, the stochastic nature of Monte Carlo codes prevents the old scalar versions of these codes from taking advantage of vector processors. New Monte Carlo algorithms that process all the histories undergoing the same event as a batch are required. Recently, new vectorized Monte Carlo codes have been developed that show significant speed-ups when compared with their scalar counterparts or equivalent codes. This paper discusses the vectorization of an already existing and widely used criticality safety code, KENO V.a. All the changes made to KENO V.a are transparent to the user, making it possible to upgrade from the standard scalar version of KENO V.a to the vectorized version without learning a new code.

16. [An improved algorithm for electrohysterogram envelope extraction].

Science.gov (United States)

Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia; Chen, Zhaoxia

2017-02-01

Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating the impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
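The traditional RMS envelope step that the record improves on is simple to state: square the signal, average over a sliding window, take the square root. A short NumPy sketch of just that baseline (the record's additions, zero-crossing burst separation and dual-width windows, are not shown; the function name is mine):

```python
import numpy as np

def rms_envelope(x, win):
    """Moving root-mean-square envelope of a 1D signal: the classical
    baseline for EMG envelope extraction. Squares the samples, applies
    a length-`win` moving average, and takes the square root."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))
```

A single impulsive outlier inflates the window average for `win` samples, which is exactly the weakness the improved algorithm targets.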

17. Can smartwatches replace smartphones for posture tracking?

Science.gov (United States)

Mortazavi, Bobak; Nemati, Ebrahim; VanderWall, Kristina; Flores-Rodriguez, Hector G; Cai, Jun Yu Jacinta; Lucier, Jessica; Naeim, Arash; Sarrafzadeh, Majid

2015-10-22

This paper introduces a human posture tracking platform to identify the human postures of sitting, standing or lying down, based on a smartwatch. This work develops such a system as a proof-of-concept study to investigate a smartwatch's ability to be used in future remote health monitoring systems and applications. This work validates the smartwatch's ability to track the posture of users accurately in a laboratory setting while reducing the sampling rate to potentially improve battery life, the first steps in verifying that such a system would work in future clinical settings. The algorithm developed classifies the transitions between the three posture states of sitting, standing and lying down by identifying these transition movements, as well as other movements that might be mistaken for these transitions. The system is trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through a leave-one-subject-out cross-validation of 20 subjects. The system can identify the appropriate transitions at only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smartphones, if needed.

18. Partial Transmit Sequence Optimization Using Improved Harmony Search Algorithm for PAPR Reduction in OFDM

Directory of Open Access Journals (Sweden)

Mangal Singh

2017-12-01

This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the Peak-to-Average Power Ratio (PAPR) of an Orthogonal Frequency Division Multiplexing signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that an improved harmony search-based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure compared with conventional algorithms.
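The exponential search the record avoids can be made concrete. The sketch below implements the exhaustive PTS baseline: partition the frequency-domain symbol into sub-blocks, IFFT each once, then try every phase combination and keep the lowest-PAPR sum. It is a NumPy illustration with names of my choosing, not the paper's IHS algorithm:

```python
import numpy as np
from itertools import product

def papr(x):
    """Peak-to-average power ratio of a time-domain signal."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def pts_reduce(X, V=4):
    """Exhaustive PTS search over phase factors {+1, -1, +j, -j}:
    split the N-carrier symbol X into V interleaved sub-blocks, IFFT
    each once, then weight and sum the partial time-domain signals.
    Cost grows as 4**(V-1), the exponential blow-up that metaheuristic
    searches such as IHS are designed to sidestep."""
    N = len(X)
    parts = []
    for v in range(V):
        Xv = np.zeros(N, dtype=complex)
        Xv[v::V] = X[v::V]                      # interleaved partition
        parts.append(np.fft.ifft(Xv))
    best, best_papr = None, np.inf
    # The first sub-block's phase is fixed; search the remaining V-1.
    for phases in product([1, -1, 1j, -1j], repeat=V - 1):
        x = parts[0] + sum(b * p for b, p in zip(phases, parts[1:]))
        if papr(x) < best_papr:
            best, best_papr = x, papr(x)
    return best, best_papr
```

Because the identity phase combination is in the search set, the result can never be worse than the unmodified symbol.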

19. Ranking Support Vector Machine with Kernel Approximation

Directory of Open Access Journals (Sweden)

Kai Chen

2017-01-01

Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

20. Ranking Support Vector Machine with Kernel Approximation.

Science.gov (United States)

Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

2017-01-01

Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
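One of the two approximation methods named, random Fourier features, can be sketched in a few lines: draw frequencies from the RBF kernel's spectral density and map inputs through cosines, so inner products of the explicit features approximate kernel values without forming the n-by-n kernel matrix. A NumPy sketch with an illustrative function name of my own:

```python
import numpy as np

def rff_features(X, D=1000, gamma=1.0, seed=0):
    """Random Fourier feature map z(x) such that z(x).z(y) approximates
    the RBF kernel exp(-gamma * ||x - y||^2). Frequencies are drawn
    from the kernel's spectral density N(0, 2*gamma*I); a linear model
    (e.g. a linear RankSVM) trained on z(X) then approximates the
    kernel machine."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

The approximation error shrinks roughly as 1/sqrt(D), so the feature dimension D trades accuracy against training speed.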

1. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

International Nuclear Information System (INIS)

Baker, R.S.

1992-01-01

We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.
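The batching idea behind vectorized Monte Carlo, advancing every history that undergoes the same event as one array operation, can be illustrated with a deliberately tiny NumPy example. This is not the paper's transport algorithm; it is a toy attenuation problem with a hypothetical function name, chosen only to show the pattern:

```python
import numpy as np

def transmission_vectorized(sigma_t, slab, n_hist, seed=0):
    """Toy history-batched Monte Carlo: all n_hist neutron histories
    are advanced at once (the pattern that lets Monte Carlo exploit
    vector/SIMD hardware), here for straight-ahead flight through a
    purely absorbing slab of thickness `slab` and total cross section
    `sigma_t`. Returns the transmitted fraction, which should approach
    exp(-sigma_t * slab)."""
    rng = np.random.default_rng(seed)
    flight = -np.log(rng.random(n_hist)) / sigma_t   # sampled path lengths
    return np.mean(flight > slab)                    # transmitted fraction
```

A scalar code would loop over histories one at a time; here the sampling and the tally are each a single vector operation over the whole batch.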

2. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

International Nuclear Information System (INIS)

Baker, R.S.

1993-01-01

We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)

3. Development of a NEW Vector Magnetograph at Marshall Space Flight Center

Science.gov (United States)

West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)

2001-01-01

This paper will describe the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper will describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and the original MSFC vector magnetograph.

4. Prioritization methodology for chemical replacement

Science.gov (United States)

Cruit, Wendy; Goldberg, Ben; Schutzenhofer, Scott

1995-01-01

Since United States federal legislation has required ozone-depleting chemicals (Class 1 and 2) to be banned from production, the National Aeronautics and Space Administration (NASA) and industry have been required to find other chemicals and methods to replace these target chemicals. This project was initiated to develop a prioritization methodology suitable for assessing and ranking existing processes for replacement 'urgency.' The methodology was produced in the form of a workbook (NASA Technical Paper 3421). The final workbook contains two tools, one for evaluation and one for prioritization. The two tools are interconnected in that they were developed from one central theme - chemical replacement due to imposed laws and regulations. This workbook provides matrices, detailed explanations of how to use them, and a detailed methodology for prioritization of replacement technology. The main objective is to provide a GUIDELINE to help direct the research for replacement technology. The approach for prioritization called for a system which would result in a numerical rating for the chemicals and processes being assessed. A Quality Function Deployment (QFD) technique was used in order to determine numerical values which would correspond to the concerns raised and their respective importance to the process. This workbook defines the approach and the application of the QFD matrix. This technique: (1) provides a standard database for technology that can be easily reviewed, and (2) provides a standard format for information when requesting resources for further research for chemical replacement technology. Originally, this workbook was to be used for Class 1 and Class 2 chemicals, but it was specifically designed to be flexible enough to be used for any chemical used in a process (if the chemical and/or process needs to be replaced). The methodology consists of comparison matrices (and the smaller comparison components) which allow replacement technology

5. Optimization of station battery replacement

International Nuclear Information System (INIS)

Jancauskas, J.R.; Shook, D.A.

1994-01-01

During a loss of ac power at a nuclear generating station (including diesel generators), batteries provide the source of power which is required to operate safety-related components. Because traditional lead-acid batteries have a qualified life of 20 years, the batteries must be replaced a minimum of once during a station's lifetime, twice if license extension is pursued, and more often depending on actual in-service dates and the results of surveillance tests. Replacement of batteries often occurs prior to 20 years as a result of systems changes caused by factors such as Station Blackout Regulations, control system upgrades, incremental load growth, and changes in the operating times of existing equipment. Many of these replacement decisions are based on the predictive capabilities of manual design basis calculations. The inherent conservatism of manual calculations may result in battery replacements occurring before actually required. Computerized analysis of batteries can aid in optimizing the timing of replacements as well as in interpreting service test data. Computerized analysis also provides large benefits in maintaining the as-configured load profile and corresponding design margins, while also providing the capability to quickly analyze proposed modifications and responses to internal and external audits.

6. Simplified Representation of Vector Fields

NARCIS (Netherlands)

Telea, Alexandru; Wijk, Jarke J. van

1999-01-01

Vector field visualization remains a difficult task. Although many local and global visualization methods for vector fields such as flow data exist, they usually require extensive user experience on setting the visualization parameters in order to produce images communicating the desired insight. We

7. Archimedeanization of ordered vector spaces

OpenAIRE

Emelyanov, Eduard Yu.

2014-01-01

In the case of an ordered vector space with an order unit, the Archimedeanization method has been developed recently by V. I. Paulsen and M. Tomforde. We present a general version of the Archimedeanization which covers arbitrary ordered vector spaces.

8. Vector Radix 2 × 2 Sliding Fast Fourier Transform

Directory of Open Access Journals (Sweden)

Keun-Yung Byun

2016-01-01

The two-dimensional (2D) discrete Fourier transform (DFT) in the sliding window scenario has been successfully used for numerous applications requiring consecutive spectrum analysis of input signals. However, the results of conventional sliding DFT algorithms are potentially unstable because of the accumulated numerical errors caused by the recursive strategy. In this letter, a stable 2D sliding fast Fourier transform (FFT) algorithm based on the vector radix (VR) 2 × 2 FFT is presented. In the VR-2 × 2 FFT algorithm, each 2D DFT bin is hierarchically decomposed into four sub-DFT bins until the size of the sub-DFT bins is reduced to 2 × 2; the output DFT bins are calculated using a linear combination of the sub-DFT bins. Because the sub-DFT bins for the overlapped input signals between the previous and current windows are the same, the proposed algorithm reduces the computational complexity of the VR-2 × 2 FFT algorithm by reusing previously calculated sub-DFT bins in the sliding window scenario. Moreover, because the resultant DFT bins are identical to those of the VR-2 × 2 FFT algorithm, numerical errors do not arise; therefore, unconditional stability is guaranteed. Theoretical analysis shows that the proposed algorithm has the lowest computational requirements among the existing stable sliding DFT algorithms.
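The "conventional sliding DFT" whose error accumulation motivates this record is easiest to see in 1D: each new window's spectrum is obtained from the previous one with O(N) work via a complex rotation, and round-off compounds with every shift. A NumPy sketch of that baseline recursion (a 1D analogue for illustration, not the paper's 2D vector-radix method):

```python
import numpy as np

def sliding_dft(x, N):
    """Conventional recursive sliding DFT: the spectrum of window
    x[n+1 : n+1+N] is obtained from that of x[n : n+N] via
        X_k <- (X_k - x[n] + x[n+N]) * exp(+2j*pi*k/N),
    i.e. O(N) work per shift instead of a fresh O(N log N) FFT.
    Round-off in this recursion accumulates over many shifts, which is
    the instability the stable VR-2x2 approach is built to avoid."""
    W = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.fft.fft(x[:N])            # exact spectrum of the first window
    spectra = [X.copy()]
    for n in range(len(x) - N):
        X = (X - x[n] + x[n + N]) * W
        spectra.append(X.copy())
    return spectra
```

Over short runs the recursion matches a direct FFT of each window to machine precision; over millions of shifts the accumulated error becomes visible.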

9. Algorithmic chemistry

Energy Technology Data Exchange (ETDEWEB)

Fontana, W.

1990-12-13

In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

10. Vector superconductivity in cosmic strings

International Nuclear Information System (INIS)

Dvali, G.R.; Mahajan, S.M.

1992-03-01

We argue that in most realistic cases, the usual Witten-type bosonic superconductivity of the cosmic string is automatically (independent of the existence of superconducting currents) accompanied by the condensation of charged gauge vector bosons in the core, giving rise to a new vector type of superconductivity. The value of the charged vector condensate is related to the charged scalar expectation value, and vanishes only if the latter goes to zero. The mechanism for the proposed vector superconductivity, differing fundamentally from those in the literature, is delineated using the simplest realistic example of the two Higgs doublet standard model interacting with the extra cosmic string. It is shown that for a wide range of parameters, for which the string becomes scalarly superconducting, W boson condensates (the sources of vector superconductivity) are necessarily excited. (author). 14 refs

11. Replacement research reactor for Australia

International Nuclear Information System (INIS)

Miller, Ross

1998-01-01

In 1992, the Australian Government commissioned a review into the need for a replacement research reactor. That review concluded that in about years, if certain conditions were met, the Government could make a decision in favour of a replacement reactor. A major milestone was achieved when, on 3 September 1997, the Australian Government announced the construction of a replacement research reactor at the site of Australia's existing research reactor HIFAR, subject to the satisfactory outcome of an environmental assessment process. The reactor will have the dual purpose of providing a first-class facility for neutron beam research as well as providing irradiation facilities for both medical isotope production and commercial irradiations. The project is scheduled for completion before the end of 2005. (author)

12. 3D Model Retrieval Based on Vector Quantisation Index Histograms

International Nuclear Information System (INIS)

Lu, Z M; Luo, H; Pan, J S

2006-01-01

This paper proposes a novel technique for retrieving 3D mesh models using vector quantisation index histograms. Firstly, points are sampled uniformly on the mesh surface. Secondly, for each point five features representing global and local properties are extracted, giving the feature vectors of the points. Thirdly, we select several models from each class and employ their feature vectors as a training set. After training with the LBG algorithm, a public codebook is constructed. Next, codeword index histograms of the query model and of the models in the database are computed. The last step is to compute the distance between the histogram of the query and those of the models in the database. Experimental results show the effectiveness of our method.
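The pipeline above (train a codebook, quantize each model's per-point features to codeword indices, compare index histograms) can be sketched compactly. The code below substitutes plain k-means for the LBG training step and uses squared-Euclidean nearest-codeword assignment; all names and parameters are illustrative, not from the paper:

```python
import numpy as np

def lbg_codebook(train, K, iters=20, seed=0):
    """K-means stand-in for LBG codebook training: cluster the pooled
    per-point feature vectors of the training models into K codewords."""
    rng = np.random.default_rng(seed)
    code = train[rng.choice(len(train), K, replace=False)]
    for _ in range(iters):
        d = ((train[:, None, :] - code[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in range(K):
            if np.any(lab == k):
                code[k] = train[lab == k].mean(0)
    return code

def index_histogram(feats, code):
    """Normalized histogram of nearest-codeword indices for one model;
    models are compared by a distance between such histograms."""
    d = ((feats[:, None, :] - code[None]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(code)).astype(float)
    return h / h.sum()
```

Retrieval then reduces to ranking database models by, e.g., the L1 distance between their histograms and the query's.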

13. Support vector machine for diagnosis cancer disease: A comparative study

Directory of Open Access Journals (Sweden)

Nasser H. Sweilam

2010-12-01

Support vector machines have become an increasingly popular tool for machine learning tasks involving classification, regression or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem. Traditional optimization methods cannot be directly applied due to memory restrictions. Up to now, several approaches for circumventing the above shortcomings exist and work well. A further learning algorithm for training SVMs, particle swarm optimization in its quantum-behaved variant, is introduced. Another approach, named least squares support vector machine (LSSVM), together with an active set strategy, is also introduced. The results obtained by these methods are tested on a breast cancer dataset and compared with the exact solution of the model problem.
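The LSSVM approach mentioned in the record replaces the SVM's quadratic program with a single linear system, which is why it sidesteps the memory and optimization difficulties described. A self-contained NumPy sketch of a basic RBF-kernel LSSVM (names and default parameters are mine; the record's active set strategy is not shown):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least squares SVM classifier: instead of a QP, solve the linear
    system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with an RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2*sigma^2)).
    Returns a predictor mapping new points to labels in {-1, +1}."""
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        sqz = ((Z[:, None] - X[None]) ** 2).sum(-1)
        return np.sign(np.exp(-sqz / (2.0 * sigma ** 2)) @ alpha + b)

    return predict
```

The trade-off versus a standard SVM is the loss of sparsity: every training point receives a nonzero coefficient alpha.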

14. Killing tensors and conformal Killing tensors from conformal Killing vectors

International Nuclear Information System (INIS)

Rani, Raffaele; Edgar, S Brian; Barnes, Alan

2003-01-01

Koutras has proposed some methods to construct reducible proper conformal Killing tensors and Killing tensors (which are, in general, irreducible) when a pair of orthogonal conformal Killing vectors exist in a given space. We give the completely general result demonstrating that this severe restriction of orthogonality is unnecessary. In addition, we correct and extend some results concerning Killing tensors constructed from a single conformal Killing vector. A number of examples demonstrate that it is possible to construct a much larger class of reducible proper conformal Killing tensors and Killing tensors than permitted by the Koutras algorithms. In particular, by showing that all conformal Killing tensors are reducible in conformally flat spaces, we have a method of constructing all conformal Killing tensors, and hence all the Killing tensors (which will in general be irreducible) of conformally flat spaces using their conformal Killing vectors
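The generalization described above can be stated compactly. A sketch of the standard construction in index notation (no orthogonality assumed; symmetrization conventions may differ from the paper's):

```latex
% Two conformal Killing vectors:
\nabla_{(a}\xi_{b)} = \phi\, g_{ab}, \qquad
\nabla_{(a}\eta_{b)} = \psi\, g_{ab}.
% Their symmetrized product is a (reducible) conformal Killing tensor:
K_{ab} = \xi_{(a}\eta_{b)}, \qquad
\nabla_{(a}K_{bc)} = k_{(a}\, g_{bc)}, \qquad
k_a = \phi\,\eta_a + \psi\,\xi_a .
```

When the associated vector $k_a$ vanishes, $K_{ab}$ is a Killing tensor proper.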

15. Joint replacement in Zambia: A review of Hip & Knee Replacement ...

African Journals Online (AJOL)

Methods: Data captured by the different variables entered into the Joint Register covering the pre-op, intra-op and post-op period of all total hip and knee replacement surgery done at the ZIOH from 1998 to 2010 was entered into a spreadsheet after verification with individual patient medical records. This was then imported ...

16. Emerging vector borne diseases – incidence through vectors

Directory of Open Access Journals (Sweden)

Sara eSavic

2014-12-01

Vector borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries also. Nowadays, in intercontinental countries, there is a struggle with emerging diseases which have found their way to appear through vectors. Vector borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens and a susceptible human population exist at the same time, at the same place. Global climate change is predicted to lead to an increase in vector borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for diseases that are most likely to emerge is required. Canine vector borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, erlichiosis and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have a zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role first in the prevention and then in the treatment of vector borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a four-year period, from 2009 to 2013, a total number of 551 dog samples were analysed for vector borne diseases (borreliosis, babesiosis, erlichiosis, anaplasmosis, dirofilariosis and leishmaniasis) in routine laboratory work. The analyses were done by serological tests - ELISA for borreliosis, dirofilariosis and leishmaniasis, the modified Knott test for dirofilariosis, and blood smears for babesiosis, erlichiosis and anaplasmosis. This number of samples represented 75% of the total number of samples that were sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

17. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

Science.gov (United States)

In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

18. Digital video steganalysis using motion vector recovery-based features.

Science.gov (United States)

Deng, Yu; Wu, Yunjie; Zhou, Linna

2012-07-10

As a novel digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as the information carriers to hide the secret messages. The existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of the video frames cannot attack the MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose the calibration distance histogram-based statistical features for steganalysis. The support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperform others by the significant improvements in detection accuracy even with low embedding rates.

19. Optimization of Support Vector Machine (SVM) for Object Classification

Science.gov (United States)

Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin

2012-01-01

The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into categories. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single-kernel SVM known as SVMlight, and a modified version known as an SVM with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions. Image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.

20. HST Replacement Battery Initial Performance

Science.gov (United States)

Krol, Stan; Waldo, Greg; Hollandsworth, Roger

2009-01-01

The Hubble Space Telescope (HST) original nickel-hydrogen (NiH2) batteries were replaced during Servicing Mission 4 (SM4) after 19 years and one month on orbit. The purpose of this presentation is to highlight the findings from the assessment of the initial SM4 replacement battery performance. The batteries are described and the 0 C capacity is reviewed, with descriptions, charts and tables covering the state-of-charge (SOC) performance, the battery voltage performance, the battery impedance, the minimum voltage performance, the thermal performance, the battery current, and the battery system recharge ratio.