WorldWideScience

Sample records for vector-valued random processes

  1. How random is a random vector?

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2015-01-01

    Over 80 years ago Samuel Wilks proposed that the “generalized variance” of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance are confined to very specific niches in statistics. In this paper we establish that the “Wilks standard deviation”, the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the “uncorrelation index”, a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. The Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: “randomness measures” and “independence indices” of random vectors. In turn, these general notions give rise to “randomness diagrams”: tangible planar visualizations that answer the question: how random is a random vector? The notion of “independence indices” yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
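The central quantities here are directly computable; a minimal numerical sketch (NumPy, with an assumed two-dimensional Gaussian example) of the generalized variance and the Wilks standard deviation:

```python
import numpy as np

# Hedged sketch: the "generalized variance" of a random vector is the
# determinant of its covariance matrix (Wilks); its square root is the
# "Wilks standard deviation" discussed in the abstract. The Gaussian
# example data below are an assumption for illustration.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[2.0, 0.8], [0.8, 1.0]],
                            size=10_000)

cov = np.cov(X, rowvar=False)        # sample covariance matrix
gen_var = np.linalg.det(cov)         # generalized variance (det ~ 1.36 here)
wilks_sd = np.sqrt(gen_var)          # Wilks standard deviation
```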

  3. Projection correlation between two random vectors.

    Science.gov (United States)

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is [Formula: see text]-consistent if the two random vectors are independent and root-[Formula: see text]-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  5. Application of Vector Triggering Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Ibrahim, S. R.; Brincker, Rune

    1997-01-01

    This paper deals with applications of the vector triggering Random Decrement technique. This technique is new and was developed with the aim of minimizing estimation time and identification errors. The theory behind the technique is discussed in an accompanying paper; the results presented in this paper should be regarded as further documentation of the technique. The key point in Random Decrement estimation is the formulation of a triggering condition. If the triggering condition is fulfilled, a time segment from each measurement is picked out and averaged with previous time segments. The final result is a Random Decrement function from each measurement. In traditional Random Decrement estimation the triggering condition is a scalar condition, which need only be fulfilled in a single measurement. In vector triggering Random Decrement the triggering condition is a vector condition ...
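The scalar-trigger estimation loop described above can be sketched as follows (a toy single-measurement example with an assumed level-crossing trigger, not the paper's vector-trigger implementation):

```python
import numpy as np

# Minimal sketch of scalar-trigger Random Decrement (the classic form, not
# the vector-trigger variant of the paper): each time the response crosses
# a trigger level, a fixed-length segment is extracted and averaged with
# earlier segments. The toy "response" below is an assumption.
rng = np.random.default_rng(1)
n, seg_len, level = 20_000, 200, 1.0
x = rng.standard_normal(n)
x = np.convolve(x, np.ones(20) / 20, mode="same")  # toy correlated response

# indices where the scalar triggering condition x[i] >= level * std is met
triggers = np.flatnonzero(x[: n - seg_len] >= level * x.std())
segments = np.stack([x[i : i + seg_len] for i in triggers])
rd = segments.mean(axis=0)           # Random Decrement signature
```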

  6. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic-process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample-path properties of extremes and records. It emphasizes the primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  7. Statistical Theory of the Vector Random Decrement Technique

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune; Ibrahim, S. R.

    1999-01-01

    ... decays. Due to the speed and/or accuracy of the Vector Random Decrement technique, it was introduced as an attractive alternative to the Random Decrement technique. In this paper, the theory of the Vector Random Decrement technique is extended by applying a statistical description of the stochastic ...

  9. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  10. A study of biorthogonal multiple vector-valued wavelets

    International Nuclear Information System (INIS)

    Han Jincang; Cheng Zhengxing; Chen Qingjiang

    2009-01-01

    The notion of vector-valued multiresolution analysis is introduced, along with the concept of biorthogonal multiple vector-valued wavelets, which are wavelets for vector fields. It is proved that, as in the scalar and multiwavelet cases, the existence of a pair of biorthogonal multiple vector-valued scaling functions guarantees the existence of a pair of biorthogonal multiple vector-valued wavelet functions. An algorithm for constructing a class of compactly supported biorthogonal multiple vector-valued wavelets is presented. Their properties are investigated by means of operator theory, algebra theory and time-frequency analysis. Several biorthogonality formulas regarding these wavelet packets are obtained.

  11. Many-body delocalization with random vector potentials

    Science.gov (United States)

    Cheng, Chen; Mondaini, Rubem

    In this talk we present the ergodic properties of excited states in a model of interacting fermions in quasi-one-dimensional chains subjected to a random vector potential. In the non-interacting limit, we show that arbitrarily small values of this complex off-diagonal disorder trigger localization for the whole spectrum; the divergence of the localization length in the single-particle basis is characterized by a critical exponent ν which depends on the energy density being investigated. However, when short-ranged interactions are included, the localization is lost and the system is ergodic regardless of the magnitude of disorder in finite chains. Our numerical results suggest delocalization for arbitrarily small values of the interactions. This finding indicates that the standard scenario of many-body localization cannot be obtained in a model with random gauge fields. This research is financially supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. U1530401 and 11674021). RM also acknowledges support from NSFC (Grant No. 11650110441).

  12. Characterizations of the random order values by Harsanyi payoff vectors

    NARCIS (Netherlands)

    Derks, J.; van der Laan, G.; Vasil'ev, V.

    2006-01-01

    A Harsanyi payoff vector (see Vasil'ev in Optimizacija Vyp 21:30-35, 1978) of a cooperative game with transferable utilities is obtained by some distribution of the Harsanyi dividends of all coalitions among its members. Examples of Harsanyi payoff vectors are the marginal contribution vectors. ...

  13. Pseudo-Random Number Generators for Vector Processors and Multicore Processors

    DEFF Research Database (Denmark)

    Fog, Agner

    2015-01-01

    Large scale Monte Carlo applications need a good pseudo-random number generator capable of utilizing both the vector processing capabilities and multiprocessing capabilities of modern computers in order to get the maximum performance. The requirements for such a generator are discussed. New ways...
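As one illustration of the requirements discussed, counter-based generators allow provably non-overlapping parallel streams; a sketch using NumPy's Philox generator (an assumed stand-in for illustration, not the generator proposed in the paper):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch (not the paper's generator): NumPy's counter-based Philox
# generator can be split into independent streams via .jumped(), one per
# worker, so parallel Monte Carlo draws do not overlap.
streams = [np.random.Generator(np.random.Philox(seed=42).jumped(i))
           for i in range(4)]

def draw(stream, n=100_000):
    # each worker estimates the mean of n uniform draws from its own stream
    return stream.random(n).mean()

with ThreadPoolExecutor(max_workers=4) as pool:
    means = list(pool.map(draw, streams))
```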

  14. Vector optimization set-valued and variational analysis

    CERN Document Server

    Chen, Guang-ya; Yang, Xiaogi

    2005-01-01

    This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minimax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is used extensively throughout the book to deal with various vector-related problems. The results presented are original and should be of interest to researchers and graduate students in applied mathematics and operations research.

  15. Music Signal Processing Using Vector Product Neural Networks

    Science.gov (United States)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
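A minimal sketch of the core idea, under the assumption that a "vector product neuron" combines a 3-D input with a 3-D weight via the cross product (the paper's exact architecture, dimensionality transformations, and training procedure are not reproduced):

```python
import numpy as np

# Toy forward pass of a "vector product" layer. Assumption: each neuron
# combines a 3-D input with a 3-D weight via the cross product, as the
# model's name suggests; this is an illustrative sketch only.
rng = np.random.default_rng(2)

def vp_layer(X, W, b):
    # X: (batch, 3) inputs, W: (units, 3) weights, b: (units, 3) biases
    out = np.cross(W[None, :, :], X[:, None, :]) + b[None, :, :]
    return np.tanh(out)              # (batch, units, 3) activations

X = rng.standard_normal((5, 3))
W = rng.standard_normal((4, 3))
b = np.zeros((4, 3))
Y = vp_layer(X, W, b)
```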

  16. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula are effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions, demonstrating the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure are conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
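For context, the original spectral representation (OSR) that the paper starts from can be sketched as follows (a toy target spectrum is assumed; the paper's random-function dimension reduction is not reproduced):

```python
import numpy as np

# Sketch of the original spectral representation (OSR): a stationary
# process is synthesized as a sum of cosines with independent random
# phases, with amplitudes set by an assumed target power spectrum S(w).
rng = np.random.default_rng(3)
N, dw = 256, 0.05
w = dw * (np.arange(N) + 0.5)        # frequency grid
S = 1.0 / (1.0 + w**2)               # assumed toy target spectrum
phi = rng.uniform(0, 2 * np.pi, N)   # independent random phases

t = np.linspace(0, 100, 2000)
x = np.sum(np.sqrt(2 * S * dw)[:, None] *
           np.cos(np.outer(w, t) + phi[:, None]), axis=0)
```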

  17. Construction and decomposition of biorthogonal vector-valued wavelets with compact support

    International Nuclear Information System (INIS)

    Chen Qingjiang; Cao Huaixin; Shi Zhi

    2009-01-01

    In this article, we introduce vector-valued multiresolution analysis and the biorthogonal vector-valued wavelets with four-scale. The existence of a class of biorthogonal vector-valued wavelets with compact support associated with a pair of biorthogonal vector-valued scaling functions with compact support is discussed. A method for designing a class of biorthogonal compactly supported vector-valued wavelets with four-scale is proposed by virtue of multiresolution analysis and matrix theory. The biorthogonality properties concerning vector-valued wavelet packets are characterized with the aid of time-frequency analysis method and operator theory. Three biorthogonality formulas regarding them are presented.

  18. Mean value theorem in topological vector spaces

    International Nuclear Information System (INIS)

    Khan, L.A.

    1994-08-01

    The aim of this note is to give shorter proofs of the mean value theorem, the mean value inequality, and the mean value inclusion for the class of Gateaux differentiable functions having values in a topological vector space. (author). 6 refs
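For reference, in the normed-space special case the statements in question take the following familiar forms (a sketch only; the note's exact topological-vector-space formulation is not reproduced here):

```latex
% Mean value inequality for a Gateaux differentiable f on an open convex
% set U in a normed space, with a, b in U:
\| f(b) - f(a) \| \;\le\; \sup_{t \in [0,1]}
    \bigl\| Df\bigl(a + t(b-a)\bigr) \bigr\| \, \| b - a \| .
% The mean value inclusion, which still makes sense when f takes values
% in a topological vector space, replaces the bound by
f(b) - f(a) \;\in\; \overline{\mathrm{conv}}
    \bigl\{ Df\bigl(a + t(b-a)\bigr)(b-a) : t \in [0,1] \bigr\} .
```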

  19. On the density of the sum of two independent Student t-random vectors

    DEFF Research Database (Denmark)

    Berg, Christian; Vignat, Christophe

    2010-01-01

    In this paper, we find an expression for the density of the sum of two independent d-dimensional Student t-random vectors X and Y with arbitrary degrees of freedom. As a byproduct we also obtain an expression for the density of the sum N+X, where N is normal and X is an independent Student t-vector. In both cases the density is given as an infinite series $\sum_{n=0}^\infty c_n f_n$, where f_n is a sequence of probability densities on R^d and c_n is a sequence of positive numbers of sum 1, i.e. the distribution of a non-negative integer-valued random variable C, which turns out to be infinitely divisible for d=1 and d=2. When d=1 and the degrees of freedom of the Student variables are equal, we recover an old result of Ruben.

  20. Ax-Kochen-Ershov principles for valued and ordered vector spaces

    OpenAIRE

    Kuhlmann, Franz-Viktor; Kuhlmann, Salma

    1997-01-01

    We study extensions of valued vector spaces with variable base field, introducing the notion of disjointness and valuation disjointness in this setting. We apply the results to determine the model theoretic properties of valued vector spaces (with variable base field) relative to that of their skeletons. We study the model theory of the skeletons in special cases. We apply the results to ordered vector spaces with compatible valuation.

  1. Experimental comparison of support vector machines with random ...

    Indian Academy of Sciences (India)

    ... dient method, support vector machines, and random forests to improve producer accuracy and overall classification accuracy. The performance comparison of these classifiers is valuable for a decision maker ... ping, surveillance systems, resource management, tracking ... rocks, water bodies, and anthropogenic elements ...

  2. The intermittency of vector fields and random-number generators

    Science.gov (United States)

    Kalinin, A. O.; Sokoloff, D. D.; Tutubalin, V. N.

    2017-09-01

    We examine how well natural random-number generators can reproduce the intermittency phenomena that arise in the transfer of vector fields in random media. A generator based on the analysis of financial indices is suggested as the most promising random-number generator. It is shown, however, that even this generator fails to reproduce the phenomenon long enough to confidently detect intermittency, while the C++ generator successfully solves this problem. We discuss the prospects of using shell models of turbulence as the desired generator.

  3. On efficient randomized algorithms for finding the PageRank vector

    Science.gov (United States)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in R^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the proposed method yields noticeably better results.
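The first (random-walk) idea can be illustrated on a small dense example (a toy 3-state chain, not the large sparse setting of the paper):

```python
import numpy as np

# Hedged sketch of the Markov chain Monte Carlo idea: estimate the
# PageRank-type vector p solving p^T = p^T P by running a long random
# walk on a toy stochastic matrix P and histogramming visited states.
rng = np.random.default_rng(4)
P = np.array([[0.1, 0.9, 0.0],
              [0.5, 0.0, 0.5],
              [0.3, 0.3, 0.4]])

counts = np.zeros(3)
state = 0
burn, steps = 1000, 100_000
for k in range(burn + steps):
    state = rng.choice(3, p=P[state])  # one step of the random walk
    if k >= burn:
        counts[state] += 1
p_mc = counts / counts.sum()           # Monte Carlo estimate of p

# reference: stationary distribution via plain power iteration
p = np.full(3, 1 / 3)
for _ in range(200):
    p = p @ P
```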

  4. Fortran code for generating random probability vectors, unitaries, and quantum states

    Directory of Open Access Journals (Sweden)

    Jonas Maziero

    2016-03-01

    The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area ever since, and several ongoing projects targeting its improvement indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.
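A Python sketch of the same kinds of objects (this is not the article's Fortran code; the constructions below are the standard ones):

```python
import numpy as np

# Hedged sketch: a random probability vector, a Haar-distributed random
# unitary via QR of a complex Ginibre matrix (with the usual phase fix),
# and a random density matrix. Not the article's code.
rng = np.random.default_rng(5)
d = 4

prob = rng.exponential(size=d)
prob /= prob.sum()                         # random probability vector

G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q, R = np.linalg.qr(G)
U = Q * (np.diag(R) / np.abs(np.diag(R)))  # phase fix -> random unitary

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T                       # positive semidefinite
rho /= np.trace(rho)                       # random density matrix, trace 1
```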

  5. Positive solutions for a nonlocal boundary-value problem with vector-valued response

    Directory of Open Access Journals (Sweden)

    Andrzej Nowakowski

    2002-05-01

    Using variational methods, we study the existence of positive solutions for a nonlocal boundary-value problem with vector-valued response. We develop duality and variational principles for this problem and present a numerical version which enables the approximation of solutions and gives a measure of a duality gap between primal and dual functional for approximate solutions for this problem.

  6. On the uncertainty relations for vector-valued operators

    International Nuclear Information System (INIS)

    Chistyakov, A.L.

    1976-01-01

    In analogy with the expression of the Heisenberg uncertainty principle in terms of dispersions by means of the Weyl inequality in the case of one-dimensional quantum mechanical quantities, the principle for many-dimensional quantities can be expressed in terms of generalized dispersions and covariance matrices by means of inequalities similar to the Weyl inequality. The proofs of these inequalities are given in an abstract form, not only for physical vector quantities, but also for arbitrary vector-valued operators with commuting self-adjoint components

  7. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for a pipelined vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) the present feature of computational load in JAERI is analyzed by compiling computer utilization statistics; 2) vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior when run on a scalar machine; 3) vector processing efficiency is measured for five other nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1; 4) the effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking into account the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  8. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

    Directory of Open Access Journals (Sweden)

    Wang Lily

    2008-07-01

    Background: Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology, with several molecular signatures on their way toward clinical deployment. Use of the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in developing the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain. Results: In the present paper we identify methodological biases of prior work comparing random forests and support vector machines and conduct a new rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underline the importance of sound research design in benchmarking and comparison of bioinformatics algorithms. Conclusion: We found that both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines, both when no gene selection is performed and when several popular gene selection methods are used.
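The shape of such a comparison can be sketched with scikit-learn on a synthetic many-features/few-samples dataset (an illustration only, not the paper's protocol or its 22 microarray datasets):

```python
# Hedged sketch of a random forest vs. SVM comparison on synthetic
# "gene expression"-like data (many features, few samples). The dataset
# and settings below are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=20, random_state=0)

# 5-fold cross-validated accuracy for each classifier
svm_acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
rf_acc = cross_val_score(RandomForestClassifier(n_estimators=200,
                                                random_state=0),
                         X, y, cv=5).mean()
```

On any single synthetic dataset either classifier may win; the paper's point is about careful, unbiased evaluation across many real datasets.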

  9. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, curvilinear coordinate systems like spherical polar and parabolic systems, and the analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar valued and vector valued), thus covering both scalar and vector fields and vector integration.

  10. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    The article substantiates the initial regression statistical characteristics of intervals between zeros of realizations of random processes and studies the properties that allow the use of these characteristics in autonomous information systems (AIS) of near location (NL). Coefficients of initial regression (CIR) that minimize the residual sum of squares of multiple initial regression views are justified on the basis of vector representations associated with treating the analyzed signal parameters as a random vector. It is shown that even with no covariance-based private CIR it is possible to predict one random variable through another with respect to the deterministic components. The paper studies the dependence of the CIR of interval sizes between zeros of a narrowband wide-sense stationary random process on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIR do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend weakly on the type of spectrum. These properties enable the use of the CIR as an informative parameter when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that the relative variance in estimating the average energy spectrum frequency of a stationary random process with increasing relative bandwidth ceases to depend on the last process realization when more than ten intervals between zeros are processed.
    The obtained results can be used in the AIS NL to solve the tasks of detection and signal recognition, when a decision is made under conditions of unknown mathematical expectations on a limited observation
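One building block above, estimating the average frequency of a narrowband process from the intervals between its zeros, can be sketched numerically (a toy narrowband signal is assumed):

```python
import numpy as np

# Hedged sketch: recover the average frequency of a narrowband signal from
# the mean length of intervals between its zero crossings (for a tone,
# each interval is half a period). The jittered tone below is a toy stand-in.
rng = np.random.default_rng(6)
fs, f0, T = 1000.0, 25.0, 20.0
t = np.arange(0, T, 1 / fs)
# tone with a weak random phase drift
x = np.cos(2 * np.pi * f0 * t
           + 0.1 * rng.standard_normal(t.size).cumsum() / fs)

zero_idx = np.flatnonzero(np.diff(np.signbit(x)))  # sign changes
intervals = np.diff(t[zero_idx])                   # intervals between zeros
f_est = 1.0 / (2.0 * intervals.mean())             # estimated mean frequency
```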

  11. A unified development of several techniques for the representation of random vectors and data sets

    Science.gov (United States)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
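The minimum mean squared error property of the eigenvector basis can be checked numerically (a small PCA/Karhunen-Loeve sketch with assumed toy data):

```python
import numpy as np

# Hedged sketch of the common construction: the orthonormal basis that
# minimizes the mean squared representation error of a set of data vectors
# is given by eigenvectors of the sample covariance operator (PCA /
# Karhunen-Loeve). The toy 2-D data below are an assumption.
rng = np.random.default_rng(7)
X = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)                # center the data

C = Xc.T @ Xc / len(Xc)                # sample covariance operator
eigvals, eigvecs = np.linalg.eigh(C)   # orthonormal eigenvectors
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 1                                  # keep the leading component
Xk = Xc @ eigvecs[:, :k] @ eigvecs[:, :k].T   # rank-k reconstruction
mse = np.mean(np.sum((Xc - Xk) ** 2, axis=1))
# mse equals the sum of the discarded eigenvalues
```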

  12. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging but popular datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.

  13. Limit theorems for functionals of Gaussian vectors

    Institute of Scientific and Technical Information of China (English)

    Hongshuai DAI; Guangjun SHEN; Lingtao KONG

    2017-01-01

    Operator self-similar processes, as an extension of self-similar processes, have been studied extensively. In this work, we study limit theorems for functionals of Gaussian vectors. Under some conditions, we determine that the limit of partial sums of functionals of a stationary Gaussian sequence of random vectors is an operator self-similar process.

  14. Boundary value problems of holomorphic vector functions in 1D QCs

    International Nuclear Information System (INIS)

    Gao Yang; Zhao Yingtao; Zhao Baosheng

    2007-01-01

    By means of the generalized Stroh formalism, two-dimensional (2D) problems of one-dimensional (1D) quasicrystal (QC) elasticity are turned into boundary value problems of holomorphic vector functions in a given region. If the conformal mapping from an ellipse to a circle is known, a general method for solving the boundary value problems of holomorphic vector functions can be presented. To illustrate its utility, by using the necessary and sufficient condition of boundary value problems of holomorphic vector functions, we consider two basic 2D problems in 1D QCs, that is, an elliptic hole and a rigid line inclusion subjected to uniform loading at infinity. For the crack problem, the intensity factors of the phonon and phason fields are determined, and the physical meaning of the results relative to the phason field, as well as the difference between the mechanical behaviors of the crack problem in crystals and in QCs, is discussed. Moreover, the same procedure can be used to deal with the elastic problems of 2D and three-dimensional (3D) QCs.

  15. On the joint statistics of stable random processes

    International Nuclear Information System (INIS)

    Hopcraft, K I; Jakeman, E

    2011-01-01

    A utilitarian continuous bi-variate random process whose first-order probability density function is a stable random variable is constructed. Results paralleling some of those familiar from the theory of Gaussian noise are derived. In addition to the joint-probability density for the process, these include fractional moments and structure functions. Although the correlation functions for stable processes other than Gaussian do not exist, we show that there is coherence between values adopted by the process at different times, which identifies a characteristic evolution with time. The distribution of the derivative of the process, and the joint-density function of the value of the process and its derivative measured at the same time are evaluated. These enable properties to be calculated analytically such as level crossing statistics and those related to the random telegraph wave. When the stable process is fractal, the proportion of time it spends at zero is finite and some properties of this quantity are evaluated, an optical interpretation for which is provided. (paper)

  16. On the approximative normal values of multivalued operators in topological vector space

    International Nuclear Information System (INIS)

    Nguyen Minh Chuong; Khuat van Ninh

    1989-09-01

    In this paper, the problem of approximating normal values of multivalued linear closed operators from a topological vector (Mackey) space into an E-space is considered. The existence of a normal value and the convergence of approximative values to the normal value are proved. (author). 4 refs

  17. Switching non-local vector median filter

    Science.gov (United States)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2016-04-01

    This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of the original image with good quality. In color image filtering, it is generally preferable to treat the red (R), green (G), and blue (B) components of each pixel as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. Taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing on the isolation tendencies of pixels of interest, not in the input image but in difference images between the RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we previously proposed for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal through proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.

  18. Pipeline leakage recognition based on the projection singular value features and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Wei; Zhang, Laibin; Mingda, Wang; Jinqiu, Hu [College of Mechanical and Transportation Engineering, China University of Petroleum, Beijing, (China)

    2010-07-01

    The negative wave pressure method is one of the processes used to detect leaks in oil pipelines. The development of new leakage recognition processes is difficult because it is practically impossible to collect leakage pressure samples. The method of leakage feature extraction and the selection of the recognition model are also important in pipeline leakage detection. This study investigated a new feature extraction approach, Singular Value Projection (SVP), which projects the singular values onto a standard basis. A new pipeline recognition model based on multi-class Support Vector Machines was also developed. It was found that SVP provides a clear and concise recognition feature of the negative pressure wave. Field experiments proved that the model achieves a high recognition accuracy rate. This approach to pipeline leakage detection based on SVP and SVM has high application value.

  19. Some New Lacunary Strong Convergent Vector-Valued Sequence Spaces

    OpenAIRE

    Mursaleen, M.; Alotaibi, A.; Sharma, Sunil K.

    2014-01-01

    We introduce some vector-valued sequence spaces defined by a Musielak-Orlicz function and the concepts of lacunary convergence and strong ( $A$ )-convergence, where $A=({a}_{ik})$ is an infinite matrix of complex numbers. We also make an effort to study some topological properties and some inclusion relations between these spaces.

  20. Vector-valued measure and the necessary conditions for the optimal control problems of linear systems

    International Nuclear Information System (INIS)

    Xunjing, L.

    1981-12-01

    The vector-valued measure defined by well-posed linear boundary value problems is discussed. The maximum principle of the optimal control problem with a non-convex constraint is proved by using the vector-valued measure. In particular, the necessary conditions for the optimal control of elliptic systems are derived without convexity of the control domain and the cost function. (author)

  1. On the Stone-Weierstrass theorem for scalar and vector valued functions

    International Nuclear Information System (INIS)

    Khan, L.A.

    1991-09-01

    In this paper we discuss the formulation of the Stone-Weierstrass approximation theorem for vector-valued functions and then determine whether the classical Stone-Weierstrass theorem for scalar-valued functions can be deduced from the above one. We also state some open problems in this area. (author). 15 refs

  2. Multi-fidelity Gaussian process regression for prediction of random fields

    International Nuclear Information System (INIS)

    Parussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G.E.

    2017-01-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
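
    The recursive co-kriging idea underlying the approach can be sketched in a minimal two-fidelity form. Everything below (the RBF kernel, the synthetic low- and high-fidelity models, and the simple least-squares scale factor) is an illustrative assumption, not the authors' implementation:

    ```python
    import numpy as np

    def rbf(a, b, ell=0.3):
        # Squared-exponential kernel matrix between 1-D sample locations.
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    def gp_posterior_mean(x, y, xs, noise=1e-6):
        # Posterior mean of a zero-mean GP with RBF kernel and small jitter.
        K = rbf(x, x) + noise * np.eye(len(x))
        return rbf(xs, x) @ np.linalg.solve(K, y)

    # Hypothetical cheap (low-fidelity) and expensive (high-fidelity) models.
    f_lo = lambda x: 0.8 * np.sin(2 * np.pi * x)
    f_hi = lambda x: np.sin(2 * np.pi * x) + 0.1 * x

    x_lo = np.linspace(0, 1, 25)     # many cheap samples
    x_hi = np.linspace(0, 1, 6)      # few expensive samples
    xs = np.linspace(0, 1, 101)      # prediction grid

    # Step 1: GP on the low-fidelity observations.
    mu_lo_at_hi = gp_posterior_mean(x_lo, f_lo(x_lo), x_hi)
    mu_lo_at_xs = gp_posterior_mean(x_lo, f_lo(x_lo), xs)

    # Step 2 (recursive): least-squares scale factor rho, then a GP on the
    # discrepancy between the high-fidelity data and rho times the
    # low-fidelity predictor.
    rho = np.dot(mu_lo_at_hi, f_hi(x_hi)) / np.dot(mu_lo_at_hi, mu_lo_at_hi)
    delta = f_hi(x_hi) - rho * mu_lo_at_hi
    mu_hi = rho * mu_lo_at_xs + gp_posterior_mean(x_hi, delta, xs)

    # The two-fidelity predictor should beat the low-fidelity one alone.
    err = np.max(np.abs(mu_hi - f_hi(xs)))
    err_lo = np.max(np.abs(mu_lo_at_xs - f_hi(xs)))
    ```

    The paper's vector-valued, multi-level extension generalizes this scalar two-level recursion.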

  3. Multi-fidelity Gaussian process regression for prediction of random fields

    Energy Technology Data Exchange (ETDEWEB)

    Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)

    2017-05-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.

  4. Some BMO estimates for vector-valued multilinear singular integral ...

    Indian Academy of Sciences (India)


    the multilinear operator related to some singular integral operators is obtained. The main purpose of this paper is to establish the BMO end-point estimates for some vector-valued multilinear operators related to certain singular integral operators. First, let us introduce some notations [10,16]. Throughout this paper, Q = Q(x,r).

  5. Some New Lacunary Strong Convergent Vector-Valued Sequence Spaces

    Directory of Open Access Journals (Sweden)

    M. Mursaleen

    2014-01-01

    Full Text Available We introduce some vector-valued sequence spaces defined by a Musielak-Orlicz function and the concepts of lacunary convergence and strong (A)-convergence, where A=(aik) is an infinite matrix of complex numbers. We also make an effort to study some topological properties and some inclusion relations between these spaces.

  6. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Full Text Available Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and to better describe the vector distribution. Mosquito collections using Human Landing Catches, as well as collection of environmental and climatic data, were carried out from January to December 2009 in 28 villages in Southern Benin. An NPMP regression model with "village" as a random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: (i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; (ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment for all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
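
    As a hedged sketch of the modeling idea, the snippet below fits a finite two-component Poisson mixture by EM to synthetic overdispersed counts. This is a simplified stand-in for the non-parametric mixture used in the study, with illustrative data, not the authors' model or dataset:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical overdispersed counts: most sites near zero, a few hot spots.
    counts = np.concatenate([rng.poisson(0.3, 800), rng.poisson(12.0, 200)])
    rng.shuffle(counts)

    # Overdispersion check: for a pure Poisson, variance is close to the mean.
    overdispersed = counts.var() > 2 * counts.mean()

    # EM for a two-component Poisson mixture.
    w = np.array([0.5, 0.5])            # mixing weights
    lam = np.array([1.0, 5.0])          # component rates (initial guesses)
    for _ in range(200):
        # E-step: responsibilities; the log(k!) term is constant across
        # components, so it cancels and can be omitted.
        logp = counts[:, None] * np.log(lam) - lam + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and rates.
        w = r.mean(axis=0)
        lam = (r * counts[:, None]).sum(axis=0) / r.sum(axis=0)
    ```

    The fitted latent classes play the role the NPMP latent classes play in ranking the villages.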

  7. Isometric multipliers of a vector valued Beurling algebra on a ...

    Indian Academy of Sciences (India)

    Isometric multipliers of a vector valued Beurling algebra on a discrete semigroup. Research Article, Proceedings – Mathematical Sciences, Volume 127, Issue 1, February 2017, pp. 109- ... Keywords: weighted semigroup; multipliers of a semigroup; Beurling algebra; isometric multipliers.

  8. Investigating Efficiency of Vector-Valued Intensity Measures in Seismic Demand Assessment of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Mohammad Alembagheri

    2018-01-01

    Full Text Available The efficiency of vector-valued intensity measures (IMs) for predicting the seismic demand in gravity dams is investigated. The Folsom gravity dam-reservoir coupled system is selected and numerically analyzed under a set of two hundred actual ground motions. First, the well-defined scalar IMs are investigated separately, and then they are coupled to form two-parameter vector IMs. After that, IMs consisting of the spectral acceleration at the first-mode natural period of the dam-reservoir system, along with a measure of the spectral shape (the ratio of the spectral acceleration at a second period to the first-mode spectral acceleration value), are considered. An attempt is made to determine the optimal second period by categorizing the spectral acceleration at the first-mode period of vibration. The efficiency of the proposed vector IMs is compared with that of scalar ones, considering various structural responses as engineering demand parameters (EDPs). Finally, the probabilistic seismic behavior of the dam is investigated by calculating its fragility curves employing scalar and vector IMs, considering the effect of zero response values.

  9. Extensions of vector-valued functions with preservation of derivatives

    Czech Academy of Sciences Publication Activity Database

    Koc, M.; Kolář, Jan

    2017-01-01

    Roč. 449, č. 1 (2017), s. 343-367 ISSN 0022-247X R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords : vector-valued differentiable functions * extensions * strict differentiability * partitions of unity Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X16307703

  10. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  11. Topological Vector Space-Valued Cone Metric Spaces and Fixed Point Theorems

    Directory of Open Access Journals (Sweden)

    Radenović Stojan

    2010-01-01

    Full Text Available We develop the theory of topological vector space valued cone metric spaces with nonnormal cones. We prove three general fixed point results in these spaces and deduce as corollaries several extensions of theorems about fixed points and common fixed points, known from the theory of (normed-valued) cone metric spaces. Examples are given to distinguish our results from the known ones.

  12. Generalized Inferences about the Mean Vector of Several Multivariate Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Pilar Ibarrola

    2015-01-01

    Full Text Available We consider in this paper the problem of comparing the means of several multivariate Gaussian processes. It is assumed that the means depend linearly on an unknown vector parameter θ and that nuisance parameters appear in the covariance matrices. More precisely, we deal with the problem of testing hypotheses, as well as obtaining confidence regions, for θ. Both methods will be based on the concepts of generalized p-value and generalized confidence region adapted to our context.

  13. Probability, random variables, and random processes theory and signal processing applications

    CERN Document Server

    Shynk, John J

    2012-01-01

    Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily with random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app

  14. Simulating WTP Values from Random-Coefficient Models

    OpenAIRE

    Maurus Rischatsch

    2009-01-01

    Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained an increasing importance in applied work as they allow for taste heterogeneity. This paper discusses the parametrical derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...

  15. Clifford Fourier transform on vector fields.

    Science.gov (United States)

    Ebling, Julia; Scheuermann, Gerik

    2005-01-01

    Image processing and computer vision have robust methods for feature extraction and the computation of derivatives of scalar fields. Furthermore, interpolation and the effects of applying a filter can be analyzed in detail, and it is advantageous to apply these methods to vector fields in order to obtain a solid theoretical basis for feature extraction. We recently introduced the Clifford convolution, which is an extension of the classical convolution on scalar fields and provides a unified notation for the convolution of scalar and vector fields. It has attractive geometric properties that allow pattern matching on vector fields. In image processing, the convolution and Fourier transform operators are closely related by the convolution theorem and, in this paper, we extend the Fourier transform to general elements of the Clifford algebra, called multivectors, including scalars and vectors. The resulting convolution and derivative theorems are extensions of those for the convolution and Fourier transform on scalar fields. The Clifford Fourier transform allows a frequency analysis of vector fields and of the behavior of vector-valued filters. In frequency space, vectors are transformed into general multivectors of the Clifford algebra. Many basic vector-valued patterns, such as sources, sinks, saddle points, and potential vortices, can be described by a few multivectors in frequency space.
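
    A much-simplified view of the 2-D case: a planar vector field (vx, vy) can be encoded as the complex signal vx + i*vy and analyzed with the ordinary 2-D FFT, a common stand-in for the full multivector-valued transform. The field below (a potential vortex plus noise) is hypothetical illustration data, not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # A 2-D vector field on a 64x64 grid: a potential vortex plus noise.
    n = 64
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r2 = x**2 + y**2 + 1e-3
    vx = -y / r2 + 0.01 * rng.normal(size=(n, n))
    vy = x / r2 + 0.01 * rng.normal(size=(n, n))

    # Encode the two components in one complex signal and transform; in the
    # general Clifford setting the spectrum is multivector-valued.
    F = np.fft.fft2(vx + 1j * vy)

    # Energy is preserved (Parseval), so frequency-domain analysis and
    # filtering of the field are consistent with the spatial domain.
    energy_space = np.sum(vx**2 + vy**2)
    energy_freq = np.sum(np.abs(F) ** 2) / F.size
    ```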

  16. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  17. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-03-01

    We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞, where the environment is refreshed between each step of the walker. We extend, in three ways, part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but is also close to it.

  18. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-05-01

    We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞, where the environment is refreshed between each step of the walker. We extend, in three ways, part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but is also close to it.

  19. Process optimization of large-scale production of recombinant adeno-associated vectors using dielectric spectroscopy.

    Science.gov (United States)

    Negrete, Alejandro; Esteban, Geoffrey; Kotin, Robert M

    2007-09-01

    A well-characterized manufacturing process for the large-scale production of recombinant adeno-associated vectors (rAAV) for gene therapy applications is required to meet current and future demands for pre-clinical and clinical studies and potential commercialization. Economic considerations argue in favor of suspension culture-based production. Currently, the only feasible method for large-scale rAAV production utilizes baculovirus expression vectors and insect cells in suspension cultures. To maximize yields and achieve reproducibility between batches, online monitoring of various metabolic and physical parameters is useful for characterizing early stages of baculovirus-infected insect cells. In this study, rAAVs were produced at 40-l scale, yielding ~1 x 10^15 particles. During the process, dielectric spectroscopy was performed by real-time scanning at radio frequencies between 300 kHz and 10 MHz. The corresponding permittivity values were correlated with rAAV production. Both infected and uninfected cell cultures reached a maximum permittivity value; however, only the permittivity profile of infected cultures reached a second maximum. This effect was correlated with the optimal harvest time for rAAV production. Analysis of rAAV indicated that harvesting around 48 h post-infection (hpi) and at 72 hpi produced similar quantities of biologically active rAAV. Thus, if operated continuously, the 24-h reduction in the rAAV production process gives sufficient time for an additional 18 runs a year, corresponding to an extra production of ~2 x 10^16 particles. As part of large-scale optimization studies, this new finding will facilitate the bioprocessing scale-up of rAAV and other bioproducts.
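
    The harvest criterion (a second maximum in the permittivity profile) amounts to simple peak detection on the monitored signal. The sketch below uses a hypothetical synthetic profile, not the study's data:

    ```python
    import numpy as np

    # Hypothetical permittivity-vs-time profile of an infected culture: a
    # first maximum from cell growth and a second, post-infection maximum
    # that marks the optimal harvest window (synthetic illustration).
    t = np.arange(0.0, 96.0, 1.0)                      # hours post-infection
    profile = (np.exp(-0.5 * ((t - 24) / 8.0) ** 2)
               + 0.8 * np.exp(-0.5 * ((t - 60) / 10.0) ** 2))

    def local_maxima(y):
        # Indices where the signal is strictly larger than both neighbors.
        return [i for i in range(1, len(y) - 1)
                if y[i] > y[i - 1] and y[i] > y[i + 1]]

    peaks = local_maxima(profile)
    # Harvest when the second maximum appears.
    harvest_time = t[peaks[1]] if len(peaks) >= 2 else None
    ```

    A production system would smooth the measured signal before peak detection; the clean synthetic curve lets the bare criterion stand out.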

  20. Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

    KAUST Repository

    Boukaram, W.

    2015-03-25

    Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics, very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe the distribution of dust particles in the atmosphere, the concentration of mineral resources in the earth's crust, or an uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data-sparse matrices called hierarchical matrices (H-matrices), where various sub-blocks of the matrix are approximated by low-rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally, the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work, and we will present work done on the matrix-vector operation on the GPU using the KSPARSE library.
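
    The core saving on a single low-rank block can be sketched in a few lines: a rank-k block B = U V^T multiplies a vector with two thin products, O(nk) work and storage instead of O(n^2). A minimal NumPy illustration with synthetic data (the full H-matrix machinery tiles many such blocks hierarchically):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # A numerically low-rank off-diagonal block, as arises in H-matrices:
    # B = U @ V.T with rank k << n (synthetic example).
    n, k = 512, 8
    U = rng.normal(size=(n, k))
    V = rng.normal(size=(n, k))
    B = U @ V.T                 # dense block, never formed in practice

    x = rng.normal(size=n)

    # Low-rank matvec: two thin products, O(n*k) work and O(n*k) storage
    # instead of O(n^2) for the dense block.
    y_lowrank = U @ (V.T @ x)
    y_dense = B @ x
    ```

    The GPU implementation described above batches many such thin products across the blocks of the hierarchy.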

  1. Effects of Random Values for Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hou-Ping Dai

    2018-02-01

    Full Text Available The particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting the inertia weight or by combining it with other evolutionary algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution in the range [0, 1]. In this study, random values generated by uniform distributions in the ranges [0, 1] and [−1, 1], and by a Gaussian distribution with mean 0 and variance 1 (U[0,1], U[−1,1], and G(0,1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set to 0.5, is also investigated. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1,1] or G(0,1) are more likely to avoid falling into local optima and to quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocities, making a particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[−1,1] or G(0,1) are beneficial for improving the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite number of particles.
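
    The comparison can be reproduced in miniature. Below is a minimal PSO on the sphere benchmark where the source of the two random factors in the velocity update is swappable; the swarm size, iteration count, and coefficient values are illustrative assumptions, not the study's settings:

    ```python
    import numpy as np

    def pso(random_values, dim=10, n_particles=30, iters=300, seed=4):
        # Minimal PSO on the sphere function f(x) = sum(x**2). The argument
        # random_values(rng, shape) supplies the two stochastic factors of
        # the velocity update (U[0,1], U[-1,1], or G(0,1), as in the study).
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.sum(x**2, axis=1)
        g = pbest[np.argmin(pval)].copy()
        w, c1, c2 = 0.7, 1.5, 1.5
        for _ in range(iters):
            r1 = random_values(rng, x.shape)
            r2 = random_values(rng, x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            v = np.clip(v, -10, 10)     # velocity clamp keeps runs bounded
            x = x + v
            f = np.sum(x**2, axis=1)
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[np.argmin(pval)].copy()
        return pval.min()

    best_u01 = pso(lambda rng, s: rng.uniform(0, 1, s))
    best_u11 = pso(lambda rng, s: rng.uniform(-1, 1, s))
    best_g01 = pso(lambda rng, s: rng.normal(0, 1, s))
    ```

    Swapping the lambda is all it takes to rerun the experiment with a different random-value distribution.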

  2. Cross-coherent vector sensor processing for spatially distributed glider networks.

    Science.gov (United States)

    Nichols, Brendan; Sabra, Karim G

    2015-09-01

    Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.

  3. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals

    Directory of Open Access Journals (Sweden)

    Pablo Soto-Quiros

    2015-01-01

    Full Text Available This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms on multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, speedup increases as the number of logical processors and the length of the signal increase.
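    The paper's block-matrix framework is not reproduced here, but the object it parallelizes can be sketched: the DFT of a vector-valued signal reduces to independent scalar DFTs per component (a block-diagonal structure), which is what makes the computation embarrassingly parallel. A naive O(N^2) pure-Python sketch, illustrative only:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT of one scalar component sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def vector_valued_dft(signal):
    """DFT of a vector-valued signal: signal[n] is an m-component sample.

    Each component sequence is transformed independently (block-diagonal
    structure), so the m transforms could run on m cores in parallel."""
    N, m = len(signal), len(signal[0])
    spectra = [dft([sample[c] for sample in signal]) for c in range(m)]
    # Re-pack so that output[k] is the m-vector of k-th coefficients.
    return [[spectra[c][k] for c in range(m)] for k in range(N)]

# Two-component signal: component 0 is [1,0,1,0], component 1 is [0,1,0,1].
sig = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
spec = vector_valued_dft(sig)
```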

  4. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors by distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which, to the best of our knowledge, has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
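    The per-window computation described above can be sketched as follows: the vector median of a window is the pixel vector minimizing the summed distance to all the others, which costs O(n^2) pairwise distances per window and is what motivates the GPU offload. A pure-Python sketch of a single window, illustrative only (the CUDA kernel itself is beyond a sketch):

```python
def vector_median(window):
    """Vector median of a window of RGB pixel vectors: the element whose
    summed Euclidean distance to all others is minimal. This is the
    O(n^2)-distance computation the GPU version parallelizes."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(window, key=lambda v: sum(dist(v, u) for u in window))

# A 3x3 window with one impulse-noise pixel: the outlier is rejected
# because its summed distance to the rest of the window is largest.
window = [(10, 10, 10)] * 8 + [(255, 0, 0)]
filtered = vector_median(window)
```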

  5. Vectorization of KENO IV code and an estimate of vector-parallel processing

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

    1986-10-01

    The multi-group criticality safety code KENO IV has been vectorized and tested on the FACOM VP-100 vector processor. At first the vectorized KENO IV ran slower on a scalar processor than the original code by a factor of 1.4 because of the overhead introduced by vectorization. After modifications of the algorithms and vectorization techniques, the vectorized version became faster than the original by factors of 1.4 and 3.0 on the vector processor for sample problems of complex and simple geometries, respectively. For further speedup of the code, some improvements to the compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed. Finally a pipelined parallel processor system is proposed and its performance is estimated. (author)

  6. The Integration Order of Vector Autoregressive Processes

    DEFF Research Database (Denmark)

    Franchi, Massimo

    We show that the order of integration of a vector autoregressive process is equal to the difference between the multiplicity of the unit root in the characteristic equation and the multiplicity of the unit root in the adjoint matrix polynomial. The equivalence with the standard I(1) and I(2...

  7. The Effect of Macroeconomic Variables on Value-Added Agriculture: Approach of Vector Autoregressive Bayesian Model (BVAR)

    Directory of Open Access Journals (Sweden)

    E. Pishbahar

    2015-05-01

    Full Text Available There are different ideas and opinions about the effects of macroeconomic variables on real and nominal variables. To answer the question of whether changes in macroeconomic variables are useful as a policy tool over a business cycle, understanding the effect of macroeconomic variables on economic growth is important. In the present study, a Bayesian vector autoregressive model and seasonal data for the years between 1991 and 2013 were used to determine the impact of monetary policy on value-added agriculture. Predictions of vector autoregressive models are usually distorted due to the large number of parameters in the model. The Bayesian vector autoregressive model yields more reliable predictions by reducing the number of included parameters and incorporating prior information; compared to the vector autoregressive model, the coefficients are estimated more accurately. Based on the RMSE results in this study, the Normal-Wishart prior was identified as a suitable prior distribution. According to the results of the impulse response function, the sudden effects of shocks in macroeconomic variables on value added in agriculture and domestic venture capital are stable. The effects on exchange rates, tax revenues and the monetary variable are moderated after 7, 5 and 4 periods, respectively. Monetary policy shocks in the first half of the year increased the value added of agriculture, while in the second half of the year they had a depressing effect on the value added.
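    As background for the impulse-response results above: in any stable VAR, the response to a one-off shock decays over the horizon. A minimal VAR(1) impulse-response sketch with made-up coefficients (not the paper's estimated BVAR):

```python
def mat_vec(B, v):
    """Matrix-vector product for plain nested lists."""
    return [sum(B[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def var1_irf(B, shock, horizon):
    """Impulse responses of a VAR(1) y_t = B y_{t-1} + e_t: the response
    at horizon h to a one-off shock e_0 is B^h applied to the shock."""
    responses = [list(shock)]
    for _ in range(horizon):
        responses.append(mat_vec(B, responses[-1]))
    return responses

# Illustrative 2-variable system; both eigenvalues are inside the unit
# circle, so the shock is moderated after a few periods.
irf = var1_irf([[0.5, 0.1], [0.0, 0.9]], [1.0, 0.0], 4)
```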

  8. A multistage motion vector processing method for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai- Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter to avoid choosing identical unreliable ones. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.

  9. Distance covariance for stochastic processes

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady

    2017-01-01

    The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...

  10. A representation result for hysteresis operators with vector valued inputs and its application to models for magnetic materials

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Olaf, E-mail: Olaf.Klein@wias-berlin.de

    2014-02-15

    In this work, hysteresis operators mapping continuous vector-valued input functions that are piecewise monotaffine, i.e. piecewise the composition of a monotone with an affine function, to vector-valued output functions are considered. It is shown that the operator can be generated by a uniquely defined function on the convexity triple free strings. A formulation of a congruence property for periodic inputs is presented and reformulated as a condition on the generating string function.

  11. A Riesz Representation Theorem for the Space of Henstock Integrable Vector-Valued Functions

    Directory of Open Access Journals (Sweden)

    Tomás Pérez Becerra

    2018-01-01

    Full Text Available Using a bounded bilinear operator, we define the Henstock-Stieltjes integral for vector-valued functions; we prove some integration by parts theorems for the Henstock integral and a Riesz-type theorem, which provides an alternative proof of the representation theorem for real functions proved by Alexiewicz.

  12. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multi-group Monte Carlo code for particle transport, MORSE, is modified for high performance computing on the Monte Carlo Machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance with vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  13. Multifractal vector fields and stochastic Clifford algebra.

    Science.gov (United States)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2015-12-01

    In the mid 1980s, the development of multifractal concepts and techniques was an important breakthrough for complex system analysis and simulation, in particular in turbulence and hydrology. Multifractals indeed aimed to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations or on simplified conceptual models. However, this development has been rather limited to scalar fields, whereas most of the fields of interest are vector-valued or even manifold-valued. We show in this paper that the combination of stable Lévy processes with Clifford algebra is a good candidate to bridge the present gap between theory and applications. We show that it indeed defines a convenient framework to generate multifractal vector fields, possibly multifractal manifold-valued fields, based on a few fundamental and complementary properties of Lévy processes and Clifford algebra. In particular, the vector structure of these algebras is much more tractable than the manifold structure of symmetry groups, while the Lévy stability grants a given statistical universality.

  14. Multifractal vector fields and stochastic Clifford algebra

    Energy Technology Data Exchange (ETDEWEB)

    Schertzer, Daniel, E-mail: Daniel.Schertzer@enpc.fr; Tchiguirinskaia, Ioulia, E-mail: Ioulia.Tchiguirinskaia@enpc.fr [University Paris-Est, Ecole des Ponts ParisTech, Hydrology Meteorology and Complexity HM& Co, Marne-la-Vallée (France)

    2015-12-15

    In the mid 1980s, the development of multifractal concepts and techniques was an important breakthrough for complex system analysis and simulation, in particular in turbulence and hydrology. Multifractals indeed aimed to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations or on simplified conceptual models. However, this development has been rather limited to scalar fields, whereas most of the fields of interest are vector-valued or even manifold-valued. We show in this paper that the combination of stable Lévy processes with Clifford algebra is a good candidate to bridge the present gap between theory and applications. We show that it indeed defines a convenient framework to generate multifractal vector fields, possibly multifractal manifold-valued fields, based on a few fundamental and complementary properties of Lévy processes and Clifford algebra. In particular, the vector structure of these algebras is much more tractable than the manifold structure of symmetry groups, while the Lévy stability grants a given statistical universality.

  15. An inverse boundary value problem for the Schroedinger operator with vector potentials in two dimensions

    International Nuclear Information System (INIS)

    Ziqi Sun

    1993-01-01

    During the past few years, considerable interest has been focused on the inverse boundary value problem for the Schroedinger operator with a scalar (electric) potential. The popularity gained by this subject seems to be due to its connection with the inverse scattering problem at fixed energy, the inverse conductivity problem and other important inverse problems. This paper deals with an inverse boundary value problem for the Schroedinger operator with vector (electric and magnetic) potentials. As in the case of the scalar potential, the results of this study would have immediate consequences for the inverse scattering problem for magnetic fields at fixed energy. On the other hand, inverse boundary value problems for elliptic operators are of independent interest. The study is partly devoted to the understanding of the inverse boundary value problem for a class of general elliptic operators of second order. Note that a self-adjoint elliptic operator of second order with Δ as its principal symbol can always be written as a Schroedinger operator with vector potentials.

  16. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation or a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create
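    The sweep-then-invert procedure can be sketched as follows. `measure` is a hypothetical stand-in for the VNA measurement of the modulator under test, and the gain-imbalance/phase-offset model inside it is made up for illustration; only the table-building and nearest-response lookup mirror the process described above.

```python
import cmath

def build_calibration(measure, grid):
    """Sweep (i, q) control pairs over the grid and record the measured
    complex gain for each -- the tabular representation described above."""
    return {(i, q): measure(i, q) for i in grid for q in grid}

def correct(table, target):
    """Invert the table: the control pair whose measured response is
    closest to the desired complex gain."""
    return min(table, key=lambda iq: abs(table[iq] - target))

# Hypothetical non-ideal modulator: gain imbalance plus a phase offset.
measure = lambda i, q: (0.9 * i + 1.1j * q) * cmath.exp(1j * 0.1)
grid = [k / 10 for k in range(-10, 11)]
table = build_calibration(measure, grid)
best = correct(table, 0.5 + 0j)  # control pair realizing gain ~0.5
```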

  17. 2013 CIME Course Vector-valued Partial Differential Equations and Applications

    CERN Document Server

    Marcellini, Paolo

    2017-01-01

    Collating different aspects of Vector-valued Partial Differential Equations and Applications, this volume is based on the 2013 CIME Course with the same name which took place at Cetraro, Italy, under the scientific direction of John Ball and Paolo Marcellini. It contains the following contributions: The pullback equation (Bernard Dacorogna), The stability of the isoperimetric inequality (Nicola Fusco), Mathematical problems in thin elastic sheets: scaling limits, packing, crumpling and singularities (Stefan Müller), and Aspects of PDEs related to fluid flows (Vladimir Sverák). These lectures are addressed to graduate students and researchers in the field.

  18. Nonstationary random acoustic and electromagnetic fields as wave diffusion processes

    International Nuclear Information System (INIS)

    Arnaut, L R

    2007-01-01

    We investigate the effects of relatively rapid variations of the boundaries of an overmoded cavity on the stochastic properties of its interior acoustic or electromagnetic field. For quasi-static variations, this field can be represented as an ideal incoherent and statistically homogeneous isotropic random scalar or vector field, respectively. A physical model is constructed showing that the field dynamics can be characterized as a generalized diffusion process. The Langevin-Itô and Fokker-Planck equations are derived and their associated statistics and distributions for the complex analytic field, its magnitude and energy density are computed. The energy diffusion parameter is found to be proportional to the square of the ratio of the standard deviation of the source field to the characteristic time constant of the dynamic process, but is independent of the initial energy density, to first order. The energy drift vanishes in the asymptotic limit. The time-energy probability distribution is in general not separable, as a result of nonstationarity. A general solution of the Fokker-Planck equation is obtained in integral form, together with explicit closed-form solutions for several asymptotic cases. The findings extend known results on statistics and distributions of quasi-stationary ideal random fields (pure diffusions), which are retrieved as special cases.

  19. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
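    A minimal instance of the AR-fitting step described above, assuming an AR(1) process for simplicity (the paper treats general AR models): the coefficient is estimated by lagged least squares, and the fitted AR is then rewritten as an MA for interpretation.

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in the AR(1) model
    x_t = phi * x_{t-1} + e_t."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    return num / den

def ma_coefficients(phi, k):
    """MA(infinity) representation x_t = sum_j phi^j e_{t-j}, truncated
    to the first k pulse-response coefficients."""
    return [phi ** j for j in range(k)]

# Simulate an AR(1) with phi = 0.6 and recover the coefficient.
rng = random.Random(0)
x, prev = [], 0.0
for _ in range(5000):
    prev = 0.6 * prev + rng.gauss(0, 1)
    x.append(prev)
phi_hat = fit_ar1(x)
```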

  20. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen’s Reading Images (2006), plays a crucial role in distinguishing between ‘narrative’, action-oriented processes and ‘conceptual’, state-oriented processes. The use of this concept in image analysis has usually focused... should be taken into account in discussing ‘reactions’, which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim’s account of vectors, these issues are outlined......

  1. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow convergence and the liability of dropping into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Square Support Vector Machine (LS-SVM) is presented. First of all, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector input to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
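    The SVD feature-extraction step can be illustrated in miniature: the singular values of the IMF-based feature matrix are what feed the classifier. Below is a power-iteration sketch that recovers only the dominant singular value (the paper computes the full decomposition); pure Python, illustrative only.

```python
def largest_singular_value(A, iters=200):
    """Dominant singular value of A via power iteration on A^T A.
    Singular values like this one serve as robust features of the
    feature matrix built from the decomposed signal components."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]  # A^T (A v)
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return sum(c * c for c in Av) ** 0.5

s = largest_singular_value([[3.0, 0.0], [0.0, 1.0]])  # diag matrix: sigma_1 = 3
```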

  2. Extensions of vector-valued Baire one functions with preservation of points of continuity

    Czech Academy of Sciences Publication Activity Database

    Koc, M.; Kolář, Jan

    2016-01-01

    Roč. 442, č. 1 (2016), s. 138-148 ISSN 0022-247X R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords : vector-valued Baire one functions * extensions * non-tangential limit * continuity points Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X1630097X

  3. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, and gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on Intrinsic time-scale decomposition (ITD)-Singular value decomposition (SVD) and Support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several Proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the Support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods with EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and Back propagation (BP). Moreover, the proposed approach can accurately diagnose and identify different fault types of gear under variable conditions.

  4. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  5. Quantitative Diagnosis of Rotor Vibration Fault Using Process Power Spectrum Entropy and Support Vector Machine Method

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2014-01-01

    Full Text Available To improve the diagnosis capacity of rotor vibration faults in stochastic processes, an effective fault diagnosis method, named the Process Power Spectrum Entropy (PPSE) and Support Vector Machine (SVM) (PPSE-SVM, for short) method, was proposed. The fault diagnosis model of PPSE-SVM was established by fusing the PPSE method and SVM theory. Based on a simulation experiment of rotor vibration faults, process data for four typical vibration faults (rotor imbalance, shaft misalignment, rotor-stator rubbing, and pedestal looseness) were collected under multipoint (multiple channels) and multispeed conditions. By using the PPSE method, the PPSE values of these data were extracted as fault feature vectors to establish the SVM model of rotor vibration fault diagnosis. The rotor vibration fault diagnosis results demonstrate that the proposed method possesses high precision, good learning ability, good generalization ability, and strong fault-tolerant ability (robustness) in four aspects: distinguishing fault types, fault severity, fault location, and noise immunity of rotor stochastic vibration. This paper presents a novel method (PPSE-SVM) for rotor vibration fault diagnosis and real-time vibration monitoring. The presented effort is promising to improve the fault diagnosis precision of rotating machinery like gas turbines.
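    A hedged sketch of the spectral-entropy feature: the Shannon entropy of the normalized periodogram is low for a nearly periodic signal and high for a noisy one. The paper's exact PPSE definition may differ in normalization and windowing; this pure-Python version is illustrative only.

```python
import cmath, math, random

def power_spectrum_entropy(x):
    """Shannon entropy of the normalized periodogram of x -- a sketch of
    a power-spectrum-entropy feature (naive O(N^2) DFT, one-sided)."""
    N = len(x)
    spec = [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) ** 2 for k in range(N // 2 + 1)]
    total = sum(spec)
    probs = [s / total for s in spec if s > 0]
    return -sum(p * math.log(p) for p in probs)

# A pure tone concentrates spectral mass in one bin (entropy near 0);
# noise spreads it over many bins (higher entropy).
sine = [math.sin(2 * math.pi * 2 * n / 16) for n in range(16)]
rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(16)]
e_sine = power_spectrum_entropy(sine)
e_noise = power_spectrum_entropy(noise)
```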

  6. 3D vector distribution of the electro-magnetic fields on a random gold film

    Science.gov (United States)

    Canneson, Damien; Berini, Bruno; Buil, Stéphanie; Hermier, Jean-Pierre; Quélin, Xavier

    2018-05-01

    The 3D vector distribution of the electro-magnetic fields at the very close vicinity of the surface of a random gold film is studied. Such films are well known for their properties of light confinement and large fluctuations of local density of optical states. Using Finite-Difference Time-Domain simulations, we show that it is possible to determine the local orientation of the electro-magnetic fields. This allows us to obtain a complete characterization of the fields. Large fluctuations of their amplitude are observed as previously shown. Here, we demonstrate large variations of their direction depending both on the position on the random gold film, and on the distance to it. Such characterization could be useful for a better understanding of applications like the coupling of point-like dipoles to such films.

  7. A General Representation Theorem for Integrated Vector Autoregressive Processes

    DEFF Research Database (Denmark)

    Franchi, Massimo

    We study the algebraic structure of an I(d) vector autoregressive process, where d is restricted to be an integer. This is useful to characterize its polynomial cointegrating relations and its moving average representation, that is to prove a version of the Granger representation theorem valid...

  8. A Hartman–Nagumo inequality for the vector ordinary p-Laplacian and applications to nonlinear boundary value problems

    Directory of Open Access Journals (Sweden)

    Ureña Antonio J

    2002-01-01

    Full Text Available A generalization of the well-known Hartman–Nagumo inequality to the case of the vector ordinary p-Laplacian and classical degree theory provide existence results for some associated nonlinear boundary value problems.

  9. Effective Perron-Frobenius eigenvalue for a correlated random map

    Science.gov (United States)

    Pool, Roman R.; Cáceres, Manuel O.

    2010-09-01

    We investigate the evolution of random positive linear maps with various type of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described for long time by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate with cross correlations. The problem was reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbations in terms of Green’s function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
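    The effective growth rate described above can be estimated numerically as the long-run exponential rate of a product of random matrices. A sketch with a 2-age-class Leslie matrix; the matrix entries and the deterministic sanity check are illustrative assumptions, not the paper's model.

```python
import math, random

def effective_growth_rate(sample_matrix, steps=2000, seed=0):
    """Monte Carlo estimate of the effective Perron-Frobenius exponent
    (1/t) log ||M_t ... M_1 v|| for a random positive linear map.
    sample_matrix(rng) returns one 2x2 realization per generation."""
    rng = random.Random(seed)
    v = [1.0, 1.0]
    log_growth = 0.0
    for _ in range(steps):
        M = sample_matrix(rng)
        v = [M[0][0] * v[0] + M[0][1] * v[1],
             M[1][0] * v[0] + M[1][1] * v[1]]
        norm = v[0] + v[1]          # L1 norm; entries stay positive
        log_growth += math.log(norm)
        v = [v[0] / norm, v[1] / norm]
    return log_growth / steps

# Sanity check: with no disorder this recovers the log of the dominant
# eigenvalue (1 + sqrt(3))/2 of the fixed Leslie matrix [[1, 1], [0.5, 0]].
rate = effective_growth_rate(lambda rng: [[1.0, 1.0], [0.5, 0.0]])
```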

  10. Deep inelastic lepton-hadron processes in gauge models with massive vector gluons

    International Nuclear Information System (INIS)

    Morozov, P.T.; Stamenov, D.B.

    1978-01-01

    Considered is a class of strong interaction models in which the interactions between coloured quarks are mediated by massive neutral vector gluons. All the vector gluons acquire masses by the Higgs mechanism. These models are not asymptotically free. The effective gauge coupling constant ᾱ vanishes asymptotically, and the effective quartic coupling constant h̄ tends to a finite asymptotic value. The behaviour of the moments of the deep inelastic lepton-hadron structure functions is analyzed. It is shown that the Bjorken scaling is violated by powers of logarithms

  11. Finding a Hadamard matrix by simulated annealing of spin vectors

    Science.gov (United States)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulation of a combinatorial problem into optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as by the simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on the SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is unit vector and the rest ones are vectors with equal number of -1 and +1 called SH-vectors. We define SH spin vectors as representation of the SH vectors, which play a similar role as the spins on Ising model. The topology of the lattice is generalized into a graph, whose edges represent orthogonality relationship among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+, -} spin-pair within the SH-spin vectors that follow the Metropolis update rule. Upon transition toward zeroth energy, the Q-matrix is evolved following a Markov chain toward an orthogonal matrix, at which the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including the ones that cannot trivially be constructed by the Sylvester method.

  12. Random processes in nuclear reactors

    CERN Document Server

    Williams, M M R

    1974-01-01

    Random Processes in Nuclear Reactors describes the problems that a nuclear engineer may meet which involve random fluctuations and sets out in detail how they may be interpreted in terms of various models of the reactor system. Chapters set out to discuss topics on the origins of random processes and sources; the general technique to zero-power problems and bring out the basic effect of fission, and fluctuations in the lifetime of neutrons, on the measured response; the interpretation of power reactor noise; and associated problems connected with mechanical, hydraulic and thermal noise sources

  13. Link-Based Similarity Measures Using Reachability Vectors

    Directory of Open Access Journals (Sweden)

    Seok-Ho Yoon

    2014-01-01

    Full Text Available We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. For this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures.
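    A small sketch of the reachability-vector construction: Random Walk with Restart from each object yields its vector of visit probabilities, and similarity is the cosine of two such vectors. The toy graph, restart probability, and iteration count are illustrative assumptions.

```python
def rwr_vector(adj, source, restart=0.15, iters=100):
    """Random Walk with Restart from `source` over an adjacency list;
    returns the stationary visit probabilities (the reachability vector)."""
    n = len(adj)
    p = [0.0] * n
    p[source] = 1.0
    for _ in range(iters):
        q = [0.0] * n
        for u, nbrs in enumerate(adj):
            share = (1 - restart) * p[u] / len(nbrs)
            for v in nbrs:
                q[v] += share
        q[source] += restart          # restart mass returns to the source
        p = q
    return p

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5)

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to 2.
# Nodes 0 and 1 are structurally similar; node 3 is not similar to 0.
adj = [[1, 2], [0, 2], [0, 1, 3], [2]]
sim_01 = cosine(rwr_vector(adj, 0), rwr_vector(adj, 1))
sim_03 = cosine(rwr_vector(adj, 0), rwr_vector(adj, 3))
```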

  14. Value of the future: Discounting in random environments

    Science.gov (United States)

    Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep

    2015-05-01

    We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
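
    The paper's central effect can be illustrated with a quick Monte Carlo estimate of the discount function D(T) = E[exp(-∫₀ᵀ r(s) ds)] for an Ornstein-Uhlenbeck rate: because the averaging happens inside the exponential, the effective long-run rate -ln D(T)/T falls below the average interest rate. The parameter values below are illustrative, not a calibration from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck rate dr = -a (r - m) dt + k dW (illustrative parameters).
a, m, k, r0 = 0.5, 0.04, 0.02, 0.04
T, n_steps, n_paths = 20.0, 400, 20000
dt = T / n_steps

r = np.full(n_paths, r0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += r * dt                                  # accumulate ∫ r ds
    r += -a * (r - m) * dt + k * np.sqrt(dt) * rng.standard_normal(n_paths)

D = np.exp(-integral).mean()       # discount function D(T) = E[exp(-∫ r ds)]
long_run = -np.log(D) / T          # effective long-run discount rate
```

    Because exp is convex, D(T) is dominated by low-rate paths, so `long_run` comes out below the average rate m (the bias the paper quantifies).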

  15. Monte Carlo simulation of the three-state vector Potts model on a three-dimensional random lattice

    International Nuclear Information System (INIS)

    Jianbo Zhang; Heping Ying

    1991-09-01

    We have performed a numerical simulation of the three-state vector Potts model on a three-dimensional random lattice. The averages of energy density, magnetization, specific heat and susceptibility of the system on N³ (N = 8, 10, 12) lattices were calculated. The results show that the Z(3) symmetry-breaking transition is first order, as characterized by thermal hysteresis in the energy density as well as a drop in magnetization that becomes sharper and discontinuous with increasing volume in the cross-over region. The results obtained on the random lattice are consistent with those obtained on the three-dimensional cubic lattice. (author). 12 refs, 4 figs

  16. Brian hears: online auditory processing using vectorization over channels.

    Science.gov (United States)

    Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain

    2011-01-01

    The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
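
    The channel-vectorization idea can be sketched with a toy filterbank: a bank of first-order low-pass filters, one per frequency channel, whose states are all updated by a single vectorized operation per sample. This is an illustrative stand-in for Brian Hears' gammatone banks (the cutoffs and channel count below are arbitrary); time remains sequential because the recurrence is IIR, but the per-sample cost is shared across all channels at once.

```python
import numpy as np

rng = np.random.default_rng(6)

# One first-order low-pass filter per channel (toy stand-in for a gammatone
# filterbank; cutoffs and channel count are arbitrary illustrative choices).
fs = 8000.0
cutoffs = np.logspace(np.log10(100.0), np.log10(2000.0), 32)   # 32 channels
alpha = np.exp(-2.0 * np.pi * cutoffs / fs)                    # per-channel pole

sound = rng.normal(size=1000)            # white-noise test input
state = np.zeros_like(cutoffs)
out = np.empty((sound.size, cutoffs.size))
for n, x in enumerate(sound):            # time stays sequential (IIR recurrence)
    state = alpha * state + (1.0 - alpha) * x   # all channels update at once
    out[n] = state
```

    Low-cutoff channels pass less of the white-noise power, so their outputs have visibly smaller variance than high-cutoff channels.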

  17. A representation theory for a class of vector autoregressive models for fractional processes

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    Based on an idea of Granger (1986), we analyze a new vector autoregressive model defined from the fractional lag operator 1-(1-L)^{d}. We first derive conditions in terms of the coefficients for the model to generate processes which are fractional of order zero. We then show that if there is a un...... root, the model generates a fractional process X(t) of order d, d>0, for which there are vectors ß so that ß'X(t) is fractional of order d-b, 0...

  18. Analysis, Simulation and Prediction of Multivariate Random Fields with Package RandomFields

    Directory of Open Access Journals (Sweden)

    Martin Schlather

    2015-02-01

    Full Text Available Modeling of and inference on multivariate data that have been measured in space, such as temperature and pressure, are challenging tasks in environmental sciences, physics and materials science. We give an overview of, and some background on, modeling with cross-covariance models. The R package RandomFields supports simulation, parameter estimation and prediction, in particular for the linear model of coregionalization, the multivariate Matérn models, the delay model, and a spectrum of physically motivated vector-valued models. An example on weather data is considered, illustrating the use of RandomFields for parameter estimation and prediction.

  19. A signal theoretic introduction to random processes

    CERN Document Server

    Howard, Roy M

    2015-01-01

    A fresh introduction to random processes utilizing signal theory By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features:  A coherent account of the mathematical fundamentals and signal theory that underpin the presented material Unique, in-depth coverage of

  20. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.

  1. Measurement of K_NN and K_LL in p⃗d → n⃗X and p⃗ ⁹Be → n⃗X at 800 MeV

    International Nuclear Information System (INIS)

    Riley, P.J.; Hollas, C.L.; Newsom, C.R.

    1980-01-01

    The spin transfer parameters, K_NN and K_LL, have been measured in p⃗d → n⃗X and p⃗ ⁹Be → n⃗X at 0° and 800 MeV. The rather large values of K_LL demonstrate that this transfer mechanism will provide a useful source of polarized neutrons at LAMPF energies

  2. A Campbell random process

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1993-02-01

    The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution
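
    A minimal generator for a Campbell (shot-noise) process sums copies of an elementary response function placed at Poisson arrival times; Campbell's theorem then fixes the mean at rate × ∫h(u)du. The Gaussian pulse and parameter values below are illustrative choices, not the report's subroutine.

```python
import numpy as np

rng = np.random.default_rng(1)

# X(t) = sum_k h(t - t_k) with Poisson arrival times t_k ("shot noise").
rate, T, n = 5.0, 100.0, 2000
t = np.linspace(0.0, T, n)

def h(u):
    """Elementary response: a Gaussian pulse, giving Gaussian-like correlations."""
    return np.exp(-u ** 2)

n_events = rng.poisson(rate * T)                 # number of Poisson arrivals
arrivals = rng.uniform(0.0, T, n_events)         # arrival times, uniform given N
X = h(t[:, None] - arrivals[None, :]).sum(axis=1)

# Campbell's theorem: E[X] = rate * ∫ h(u) du, which is rate * sqrt(pi) here.
mean_theory = rate * np.sqrt(np.pi)
```

    The empirical time average of X tracks the Campbell-theorem mean up to boundary effects and sampling noise.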

  3. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One well-known and widely applied clustering method is K-Means. In its application, the determination of the beginning values of the cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with starting centroids determined by a random method and by a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroid determination gives an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with initial KD-Tree centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
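
    The effect of seeding on the final SSE can be illustrated with a small sketch: Lloyd's algorithm is run once from random initial centroids and once from a spread-out (farthest-point) seeding, used here only as a simple stand-in for the paper's KD-Tree method, and the resulting SSE values are compared on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three well-separated synthetic clusters (a stand-in for the paper's
# 1000 student records, which are not reproduced here).
data = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (0.0, 5.0, 10.0)])

def kmeans_sse(data, k, centroids, iters=50):
    """Lloyd's algorithm from a given seeding; returns the final SSE."""
    centroids = centroids.astype(float).copy()
    for _ in range(iters):
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = data[labels == j].mean(axis=0)
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def spread_init(data, k):
    """Farthest-point seeding, a crude proxy for KD-Tree-style seeding."""
    centers = [data[0]]
    for _ in range(k - 1):
        d2 = np.min([((data - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(data[d2.argmax()])
    return np.array(centers)

k = 3
sse_random = kmeans_sse(data, k, data[rng.choice(len(data), k, replace=False)])
sse_spread = kmeans_sse(data, k, spread_init(data, k))
```

    With well-separated clusters, the spread-out seeding covers every cluster and reaches the low-SSE solution, while purely random seeding can leave a cluster without a centroid and converge to a worse local minimum.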

  4. Singular value correlation functions for products of Wishart random matrices

    International Nuclear Information System (INIS)

    Akemann, Gernot; Kieburg, Mario; Wei, Lu

    2013-01-01

    We consider the product of M quadratic random matrices with complex elements and no further symmetry, where all matrix elements of each factor have a Gaussian distribution. This generalizes the classical Wishart–Laguerre Gaussian unitary ensemble with M = 1. In this paper, we first compute the joint probability distribution for the singular values of the product matrix when the matrix size N and the number M are fixed but arbitrary. This leads to a determinantal point process which can be realized in two different ways. First, it can be written as a one-matrix singular value model with a non-standard Jacobian, or second, for M ⩾ 2, as a two-matrix singular value model with a set of auxiliary singular values and a weight proportional to the Meijer G-function. For both formulations, we determine all singular value correlation functions in terms of the kernels of biorthogonal polynomials which we explicitly construct. They are given in terms of the hypergeometric and Meijer G-functions, generalizing the Laguerre polynomials for M = 1. Our investigation was motivated from applications in telecommunication of multi-layered scattering multiple-input and multiple-output channels. We present the ergodic mutual information for finite-N for such a channel model with M − 1 layers of scatterers as an example. (paper)

  5. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, ¿, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0......inference when the true values satisfy b0≥1/2 and d0-b0...... process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (ß...

  6. Effects of Cavity on the Performance of Dual Throat Nozzle During the Thrust-Vectoring Starting Transient Process.

    Science.gov (United States)

    Gu, Rui; Xu, Jinglei

    2014-01-01

    The dual throat nozzle (DTN) technique is capable of achieving higher thrust-vectoring efficiencies than other fluidic techniques, without significantly compromising thrust efficiency during vectoring operation. The excellent performance of the DTN is mainly due to the concave cavity. In this paper, two DTNs of different scales are investigated by unsteady numerical simulations to compare the parameter variations and study the effects of the cavity during the vector starting process. The results indicate that dynamic loads may be generated during the vector starting process, which is a potentially challenging problem for aircraft trim and control.

  7. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  8. Classification of Autism Spectrum Disorder Using Random Support Vector Machine Cluster

    Directory of Open Access Journals (Sweden)

    Xia-an Bi

    2018-02-01

    Full Text Available Autism spectrum disorder (ASD) is a neurological developmental disorder mainly reflected in communication and language barriers and difficulties in social interaction. Most studies have used machine learning methods to classify patients and normal controls, among which support vector machines (SVM) are widely employed. But the classification accuracy of a single SVM used as the classifier is usually low. Thus, we used multiple SVMs to classify ASD patients and typical controls (TC). Resting-state functional magnetic resonance imaging (fMRI) data of 46 TC and 61 ASD patients were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Only 84 of the 107 subjects were utilized in the experiments, because the translation or rotation of 7 TC and 16 ASD patients exceeded ±2 mm or ±2°. The random SVM cluster was then proposed to distinguish TC from ASD. The results show that this method has excellent classification performance based on all the features. Furthermore, the accuracy based on the optimal feature set reached 96.15%. Abnormal brain regions could also be found, such as the inferior frontal gyrus (IFG; orbital and opercular parts), hippocampus, and precuneus. It is indicated that the random SVM cluster method may be applicable to the auxiliary diagnosis of ASD.
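
    A random SVM cluster in the spirit described above can be sketched as an ensemble of linear SVMs, each trained on a random subset of samples and features and combined by majority vote. Everything below is synthetic and illustrative: toy data instead of ABIDE fMRI features, and a plain hinge-loss subgradient trainer instead of a library SVM.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic linearly separable data (placeholder for fMRI features).
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true > 0, 1, -1)

def train_linear_svm(X, y, lam=0.01, epochs=300, lr=0.1):
    """Linear SVM via sub-gradient descent on the regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = y * (X @ w) < 1                    # margin violators
        grad = lam * w - (X[viol] * y[viol][:, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

def random_svm_cluster(X, y, n_svm=15, frac_rows=0.7, frac_cols=0.5):
    """Train each SVM on a random subset of samples and features."""
    models = []
    for _ in range(n_svm):
        rows = rng.choice(len(y), int(frac_rows * len(y)), replace=False)
        cols = rng.choice(X.shape[1], int(frac_cols * X.shape[1]), replace=False)
        models.append((cols, train_linear_svm(X[np.ix_(rows, cols)], y[rows])))
    return models

def predict(models, X):
    votes = sum(np.sign(X[:, cols] @ w) for cols, w in models)
    return np.where(votes >= 0, 1, -1)            # majority vote

models = random_svm_cluster(X, y)
acc = float((predict(models, X) == y).mean())
```

    Each member sees only part of the data, so individual accuracy is modest, but the vote of the cluster is substantially more accurate, which is the effect the paper exploits.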

  9. Probing dark matter at the LHC using vector boson fusion processes.

    Science.gov (United States)

    Delannoy, Andres G; Dutta, Bhaskar; Gurrola, Alfredo; Johns, Will; Kamon, Teruki; Luiggi, Eduardo; Melo, Andrew; Sheldon, Paul; Sinha, Kuver; Wang, Kechen; Wu, Sean

    2013-08-09

    Vector boson fusion processes at the Large Hadron Collider (LHC) provide a unique opportunity to search for new physics with electroweak couplings. A feasibility study for the search for supersymmetric dark matter in the final state of two vector boson fusion jets and large missing transverse energy is presented at 14 TeV. Prospects for determining the dark matter relic density are studied for the cases of wino and bino-Higgsino dark matter. The LHC could probe wino dark matter with mass up to approximately 600 GeV with a luminosity of 1000 fb⁻¹.

  10. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....

  11. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    Science.gov (United States)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) is equal in distribution to Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
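
    The random-sum representation above is easy to check by simulation: sampling N_α(N_β(t)) directly and sampling the sum of N_β(t) i.i.d. Poisson(α) variables give the same distribution, and in particular the same mean αβt. The rates and sample size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, beta, t, reps = 2.0, 3.0, 1.0, 50000

# Composition: N_alpha evaluated at the random time N_beta(t).
inner = rng.poisson(beta * t, size=reps)        # samples of N_beta(t)
composed = rng.poisson(alpha * inner)           # N_alpha at that random time

# Random sum: sum_{j=1}^{N_beta(t)} X_j with X_j ~ Poisson(alpha), i.i.d.
random_sum = np.array([rng.poisson(alpha, size=k).sum()
                       for k in rng.poisson(beta * t, size=reps)])

target = alpha * beta * t      # common mean of both representations
```

    Both empirical means converge to αβt, consistent with the distributional identity.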

  12. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    Science.gov (United States)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution has been replaced with the multivariate Gaussian distribution for the vector data and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the covariates. When random assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and homogeneity of the regression coefficient vectors is also tested.

  13. Elements of random walk and diffusion processes

    CERN Document Server

    Ibe, Oliver C

    2013-01-01

    Presents an important and unique introduction to random walk theory Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic

  14. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) is a learning method developed in recent years on the foundations of statistical learning theory. It is suited to problems where field data arrive sequentially, and it has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM not only eliminates unimportant samples such as noise samples, but also preserves the important ones. The MISVM has been tested on two public data sets and one field data set of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and it can also be extended to other process industries, such as chemical and manufacturing processes.
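
    The pruning step can be sketched as follows: given a decision hyperplane w·x + b = 0, the geometric distance of each incremental sample to the hyperplane is computed, and samples farther than a specified value are discarded before the update. The weights, data, and threshold below are hypothetical; a real MISVM would take w and b from the trained SVM and apply the KKT screening first.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trained hyperplane w·x + b = 0 (a real MISVM would obtain
# these from the current SVM) and a batch of newly arrived field samples.
w, b = np.array([1.0, -2.0]), 0.5
X_inc = rng.normal(size=(100, 2))

# Geometric distance of each incremental sample to the decision hyperplane.
dist = np.abs(X_inc @ w + b) / np.linalg.norm(w)

threshold = 1.0                       # the paper's "specified value"
keep = X_inc[dist <= threshold]       # candidate margin vectors kept
drop = X_inc[dist > threshold]        # distant (unimportant) samples removed
```

    Only the kept samples, together with the original support vectors, would then enter the SVM update, which is what speeds up the incremental training.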

  15. Study of The Vector Product using Three Dimensions Vector Card of Engineering in Pathumwan Institute of Technology

    Science.gov (United States)

    Mueanploy, Wannapa

    2015-06-01

    The objective of this research was to improve engineering students' learning of the vector product topic in physics. The sample comprised engineering students at Pathumwan Institute of Technology during the first semester of the 2013 academic year. 1) 120 students selected by random sampling completed a satisfaction questionnaire to choose the size of the three-dimensional vector card to be used in the classroom. 2) 60 students selected by random sampling took the achievement test in order to validate it for classroom use. The test was analyzed with the Kuder-Richardson method (KR-20); 12 items were found appropriate for classroom use, with difficulty (P) = 0.40-0.67, discrimination = 0.33-0.73 and reliability (r) = 0.70. 3) 60 students selected by random sampling were divided into two groups: group one (the control group, 30 students) studied the vector product lesson by the regular teaching method, while group two (the experimental group, 30 students) learned the vector product lesson with the three-dimensional vector card. 4) Comparing the two groups showed that the experimental group scored higher on the achievement test than the control group, significant at the .01 level.

  16. Process value of care safety: women's willingness to pay for perinatal services.

    Science.gov (United States)

    Anezaki, Hisataka; Hashimoto, Hideki

    2017-08-01

    To evaluate the process value of care safety from the patient's view in perinatal services. Cross-sectional survey. Fifty-two sites of mandated public neonatal health checkups in 6 urban cities in West Japan. Mothers who attended neonatal health checkups for their babies in 2011 (n = 1316, response rate = 27.4%). Willingness to pay (WTP) for physician-attended care compared with midwife care as the process-related value of care safety. WTP was estimated using conjoint analysis based on the participants' choices over possible alternatives that were randomly assigned from among eight scenarios considering attributes such as professional attendance, amenities, painless delivery, caesarean section rate, travel time and price. The WTP for physician-attended care over midwife care was estimated at 1283 USD. Women who had experienced complications in prior deliveries had a 1.5 times larger WTP. We empirically evaluated the process value of safety practice in perinatal care and found it larger than a previously reported accounting-based value. Our results indicate that measurement of process value from the patient's view is informative for the evaluation of safety care, and that it is sensitive to individual risk perception of the care process. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care.

  17. Smooth invariant densities for random switching on the torus

    Science.gov (United States)

    Bakhtin, Yuri; Hurth, Tobias; Lawley, Sean D.; Mattingly, Jonathan C.

    2018-04-01

    We consider a random dynamical system obtained by switching between the flows generated by two smooth vector fields on the 2d-torus, with the random switchings happening according to a Poisson process. Assuming that the driving vector fields are transversal to each other at all points of the torus and that each of them allows for a smooth invariant density and no periodic orbits, we prove that the switched system also has a smooth invariant density, for every switching rate. Our approach is based on an integration by parts formula inspired by techniques from Malliavin calculus.

  18. Linear minimax estimation for random vectors with parametric uncertainty

    KAUST Repository

    Bitar, E

    2010-06-01

    In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y , where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which is modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
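
    For a scalar toy instance, the minimax structure can be illustrated by brute force instead of the paper's SDP: for each candidate linear gain K, take the worst MSE over the covariance set, then minimize over K. The two covariance matrices below are arbitrary illustrative choices.

```python
import numpy as np

# Two candidate joint covariances of scalar (X, Y), a toy stand-in for the
# paper's uncertainty set; the general case is solved there as an SDP.
covs = [np.array([[2.0, 0.8], [0.8, 1.0]]),
        np.array([[2.0, 0.2], [0.2, 1.5]])]

def mse(K, S):
    """MSE of the linear estimate Xhat = K*Y when (X, Y) has covariance S."""
    return S[0, 0] - 2.0 * K * S[0, 1] + K ** 2 * S[1, 1]

# Minimax by brute force: minimize over a grid of gains K the worst-case MSE.
grid = np.linspace(-2.0, 2.0, 4001)
worst = np.array([max(mse(K, S) for S in covs) for K in grid])
K_minimax = grid[worst.argmin()]
```

    In this instance the second model is the binding worst case near the optimum, so the minimax gain coincides with that model's own MMSE gain 0.2/1.5.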

  19. Integer-valued trawl processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Lunde, Asger; Shephard, Neil

    2014-01-01

    This paper introduces a new continuous-time framework for modelling serially correlated count and integer-valued data. The key component in our new model is the class of integer-valued trawl processes, which are serially correlated, stationary, infinitely divisible processes. We analyse the probabilistic properties of such processes in detail and, in addition, study volatility modulation and multivariate extensions within the new modelling framework. Moreover, we describe how the parameters of a trawl process can be estimated and obtain promising estimation results in our simulation study. Finally......

  20. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations - direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants worked under both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant differences from the textual display approach, but reduces cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.

  1. Vector-Parallel processing of the successive overrelaxation method

    International Nuclear Information System (INIS)

    Yokokawa, Mitsuo

    1988-02-01

    Successive overrelaxation, called the SOR method, is an iterative method for solving linear systems of equations, and it has traditionally been computed serially with a natural ordering in many nuclear codes. After the appearance of vector processors, the natural SOR method was replaced by parallel algorithms such as the hyperplane or red-black methods, in which the calculation order is modified. These methods are suitable for vector processors, and they achieve higher speeds than the natural SOR method on such machines. In this report, a new scheme named the 4-colors SOR method is proposed. We find that the 4-colors SOR method can be executed on vector-parallel processors and that it gives the fastest calculation among all the SOR methods, according to results of vector-parallel execution on the Alliant FX/8 multiprocessor system. It is also shown that the theoretically optimal acceleration parameters are equal among the five different ordering SOR methods, and the differences between the convergence rates of these SOR methods are examined. (author)
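
    The red-black ordering mentioned above can be sketched for the 2-D Poisson equation: grid points are coloured like a checkerboard so that every red point has only black neighbours, letting all points of one colour be updated simultaneously (hence vectorized). This is a minimal numpy sketch of red-black SOR, not the report's 4-colors scheme; the grid size and relaxation parameter are arbitrary.

```python
import numpy as np

# Red-black SOR for -∇²u = f on the unit square with zero boundary values.
n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))
u = np.zeros((n, n))
omega = 1.7                                       # relaxation parameter

i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
interior = (i > 0) & (i < n - 1) & (j > 0) & (j < n - 1)
red = interior & ((i + j) % 2 == 0)               # checkerboard colouring:
black = interior & ((i + j) % 2 == 1)             # red neighbours are black

for _ in range(500):
    for mask in (red, black):                     # one colour per half-sweep;
        nb = np.zeros_like(u)                     # same-colour points are
        nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]       # independent, so
                          + u[1:-1, :-2] + u[1:-1, 2:])    # they vectorize
        gs = 0.25 * (nb + h * h * f)              # Gauss-Seidel target value
        u[mask] += omega * (gs[mask] - u[mask])   # over-relaxed update

res = (4.0 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:] - h * h * f[1:-1, 1:-1])
residual = np.abs(res).max()
```

    Because a point's update depends only on the opposite colour, each half-sweep is a single array operation rather than a serial loop, which is exactly what makes the ordering attractive on vector processors.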

  2. Conserved-vector-current hypothesis and the ν̄_e e⁻ → π⁻π⁰ process

    International Nuclear Information System (INIS)

    Dubnickova, A.Z.; Dubnicka, S.; Rekalo, M.P.

    1992-01-01

    Based on the conserved-vector-current (CVC) hypothesis and a four-ρ-resonance unitary and analytic vector-dominance model of the pion electromagnetic form factor, the σ_tot(E_ν^lab) and dσ/dE_π^lab of the weak ν̄_e e⁻ → π⁻π⁰ process are predicted theoretically for the first time. Their experimental confirmation could verify the CVC hypothesis for all energies above the two-pion threshold. Since, unlike the electromagnetic e⁺e⁻ → π⁺π⁻ process, there is no isoscalar vector-meson contribution to the weak ν̄_e e⁻ → π⁻π⁰ reaction, accurate measurements of σ_tot(E_ν^lab), which moreover grows linearly with energy E_ν^lab, could solve the problem of the mass specification of the first excited state of the ρ(770) meson. An equality σ_tot(ν̄_e e⁻ → π⁻π⁰) = σ_tot(e⁺e⁻ → π⁺π⁻) is predicted for √s ≅ 70 GeV. 4 refs.; 5 figs

  3. Auditory detection of an increment in the rate of a random process

    International Nuclear Information System (INIS)

    Brown, W.S.; Emmerich, D.S.

    1994-01-01

    Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934-944 (1989)]. In the present experiment, brief tones were presented at intervals governed by the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a "noise" process alone or the "noise" with a "signal" process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes-no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer.

  4. Vector superconductivity in cosmic strings

    International Nuclear Information System (INIS)

    Dvali, G.R.; Mahajan, S.M.

    1992-03-01

    We argue that in most realistic cases, the usual Witten-type bosonic superconductivity of the cosmic string is automatically (independently of the existence of superconducting currents) accompanied by the condensation of charged gauge vector bosons in the core, giving rise to a new, vector type of superconductivity. The value of the charged vector condensate is related to the charged scalar expectation value and vanishes only if the latter goes to zero. The mechanism for the proposed vector superconductivity, differing fundamentally from those in the literature, is delineated using the simplest realistic example: the two-Higgs-doublet standard model interacting with the extra cosmic string. It is shown that for a wide range of parameters for which the string becomes scalarly superconducting, W-boson condensates (the sources of vector superconductivity) are necessarily excited. (author). 14 refs

  5. Nonlinear Methodologies for Identifying Seismic Event and Nuclear Explosion Using Random Forest, Support Vector Machine, and Naive Bayes Classification

    Directory of Open Access Journals (Sweden)

    Longjun Dong

    2014-01-01

    The discrimination of seismic events and nuclear explosions is a complex and nonlinear problem. The nonlinear methodologies Random Forests (RF), Support Vector Machines (SVM), and the Naïve Bayes Classifier (NBC) were applied to discriminate seismic events. Twenty earthquakes and twenty-seven explosions, characterized by nine ratios of the energies contained within predetermined "velocity windows" and the calculated distance, are used in the discriminators. Based on leave-one-out cross-validation, ROC curves, and the calculated accuracies on training and test samples, the discriminating performances of RF, SVM, and NBC were discussed and compared. The RF method clearly shows the best predictive power, with a maximum area under the ROC curve of 0.975 among RF, SVM, and NBC. The discriminant accuracies of RF, SVM, and NBC for the test samples are 92.86%, 85.71%, and 92.86%, respectively. It has been demonstrated that the presented RF model can not only identify seismic events automatically with high accuracy, but can also sort the discriminant indicators according to calculated values of weights.
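
The three-classifier comparison with leave-one-out cross-validation can be sketched with scikit-learn (assumed available). The feature values below are synthetic stand-ins, not the paper's measurements; only the sample sizes (20 earthquakes, 27 explosions, 9 features) follow the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in: class 0 = "earthquake", class 1 = "explosion",
# each event described by nine energy-ratio features (invented values).
X = np.vstack([rng.normal(0.0, 1.0, (20, 9)),
               rng.normal(1.5, 1.0, (27, 9))])
y = np.array([0] * 20 + [1] * 27)

results = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf")),
                  ("NBC", GaussianNB())]:
    # leave-one-out cross-validation, as in the abstract
    results[name] = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out accuracy = {results[name]:.3f}")
```

With only 47 events, leave-one-out is a natural choice: every fold trains on 46 samples and tests on the held-out one.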

  6. On set-valued functionals: Multivariate risk measures and Aumann integrals

    Science.gov (United States)

    Ararat, Cagin

    In this dissertation, multivariate risk measures for random vectors and Aumann integrals of set-valued functions are studied. Both are set-valued functionals with values in a complete lattice of subsets of R^m. Multivariate risk measures are considered in a general d-asset financial market with trading opportunities in discrete time. Specifically, the following features of the market are incorporated in the evaluation of multivariate risk: convex transaction costs modeled by solvency regions, intermediate trading constraints modeled by convex random sets, and the requirement of liquidation into the first m ≤ d of the assets. It is assumed that the investor has a "pure" multivariate risk measure R on the space of m-dimensional random vectors which represents her risk attitude towards the assets but does not take into account the frictions of the market. Then, the investor with a d-dimensional position minimizes the set-valued functional R over all m-dimensional positions that she can reach by trading in the market subject to the frictions described above. The resulting functional R_mar on the space of d-dimensional random vectors is another multivariate risk measure, called the market-extension of R. A dual representation for R_mar that decomposes the effects of R and the frictions of the market is proved. Next, multivariate risk measures are studied in a utility-based framework. It is assumed that the investor has a complete risk preference towards each individual asset, which can be represented by a von Neumann-Morgenstern utility function. Then, an incomplete preference is considered for multivariate positions which is represented by the vector of the individual utility functions. Under this structure, multivariate shortfall and divergence risk measures are defined as the optimal values of set minimization problems. The dual relationship between the two classes of multivariate risk measures is constructed via a recent Lagrange duality for set optimization.

  7. Data-driven process monitoring and diagnosis with support vector data description

    OpenAIRE

    Tafazzoli Moghaddam, Esmaeil

    2011-01-01

    This thesis targets the problem of fault diagnosis of industrial processes with data-driven approaches. In this context, a class of problems is considered in which the only information about the process is in the form of data and no model is available due to the complexity of the process. Support vector data description is a kernel-based method recently proposed in the field of pattern recognition, and it is known for its powerful capabilities in nonlinear data classification which can be exploited in...

  8. Tensor renormalization group with randomized singular value decomposition

    Science.gov (United States)

    Morita, Satoshi; Igarashi, Ryo; Zhao, Hui-Hai; Kawashima, Naoki

    2018-03-01

    An algorithm of the tensor renormalization group is proposed based on a randomized algorithm for singular value decomposition. Our algorithm is applicable to a broad range of two-dimensional classical models. In the case of a square lattice, its computational complexity and memory usage are proportional to the fifth and the third power of the bond dimension, respectively, whereas those of the conventional implementation are of the sixth and the fourth power. An oversampling parameter larger than the bond dimension is sufficient to reproduce the same result as full singular value decomposition, even at the critical point of the two-dimensional Ising model.
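
The randomized SVD that the complexity reduction rests on can be sketched in a few lines of NumPy. This is the generic randomized projection scheme (project onto a random subspace of dimension k + p, orthonormalize, then take an exact SVD of the small projected matrix), not the authors' tensor-network code; the oversampling parameter p is illustrative.

```python
import numpy as np

def rsvd(A, k, p=5, seed=0):
    """Randomized truncated SVD: keep the top k singular triplets,
    using k + p random probe vectors (p = oversampling)."""
    rng = np.random.default_rng(seed)
    # Sketch the range of A with a random Gaussian test matrix.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k + p)))
    # Exact SVD of the much smaller (k+p) x n matrix Q^T A.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]
```

The cost is dominated by the two products with the (k + p)-column sketch instead of a full decomposition, which is the source of the one-power saving in the bond dimension quoted above.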

  9. The Lie Bracket of Adapted Vector Fields on Wiener Spaces

    International Nuclear Information System (INIS)

    Driver, B. K.

    1999-01-01

    Let W(M) be the based (at o ∈ M) path space of a compact Riemannian manifold M equipped with Wiener measure ν. This paper is devoted to considering vector fields on W(M) of the form X^h_s(σ) = P_s(σ)h_s(σ), where P_s(σ) denotes stochastic parallel translation up to time s along a Wiener path σ ∈ W(M) and {h_s}_{s∈[0,1]} is an adapted T_oM-valued process on W(M). It is shown that there is a large class of processes h (called adapted vector fields) for which we may view X^h as first-order differential operators acting on functions on W(M). Moreover, if h and k are two such processes, then the commutator of X^h with X^k is again a vector field on W(M) of the same form.

  10. Production ratio of pseudoscalar to vector mesons

    International Nuclear Information System (INIS)

    Chliapnikov, P.V.; Uvarov, V.A.

    1990-01-01

    The P/V ratio of directly produced pseudoscalar (P) to vector (V) mesons is analysed using the data on the K⁰_S and K*(892) total inclusive cross sections in pp, π⁺p and K±p reactions. The indication of a change of P/V from a value of about 1 at low energies, where the fragmentation processes dominate, to the value of 1/3 suggested by spin statistics at high energies is discussed. (orig.)

  11. Does the delta quench Gamow-Teller strength in (p,n)- and (p⃗,p⃗′)-reactions

    International Nuclear Information System (INIS)

    Osterfeld, F.; Schulte, A.; Udagawa, T.; Yabe, M.

    1986-01-01

    Microscopic analyses of complete forward-angle intermediate-energy (p,n)-, (³He,t)- and (p⃗,p⃗′)-spin-flip spectra are presented for the reactions ⁹⁰Zr(p,n), ⁹⁰Zr(³He,t) and ⁹⁰Zr(p⃗,p⃗′). It is shown that the whole spectra up to high excitation energies (E_X ≈ 50 MeV) are the result of correlated one-particle-one-hole (1p1h) spin-isospin transitions only. The spectra reflect, therefore, the linear spin-isospin response of the target nucleus to the probing external hadronic fields. Our results suggest that the measured (p,n)-, (³He,t)- and (p⃗,p⃗′)-cross sections are compatible with the transition-strength predictions obtained from random phase approximation (RPA) calculations. This means that the Δ-isobar quenching mechanism is likely to be rather small. (orig.)

  12. Hybrid Lentivirus-transposon Vectors With a Random Integration Profile in Human Cells

    DEFF Research Database (Denmark)

    Staunstrup, Nicklas H; Moldt, Brian; Mátés, Lajos

    2009-01-01

    Gene delivery by human immunodeficiency virus type 1 (HIV-1)-based lentiviral vectors (LVs) is efficient, but genomic integration of the viral DNA is strongly biased toward transcriptionally active loci, resulting in an increased risk of insertional mutagenesis in gene therapy protocols. Nonviral Sleeping Beauty (SB) transposon vectors have a significantly safer insertion profile, but efficient delivery into relevant cell/tissue types is a limitation. In an attempt to combine the favorable features of the two vector systems we established a novel hybrid vector technology based on SB transposase-mediated insertion of lentiviral DNA circles generated during transduction of target cells with integrase (IN)-defective LVs (IDLVs). By construction of a lentivirus-transposon hybrid vector allowing transposition exclusively from circular viral DNA substrates, we demonstrate that SB transposase added in trans...

  13. Random Valued Impulse Noise Removal Using Region Based Detection Approach

    Directory of Open Access Journals (Sweden)

    S. Banerjee

    2017-12-01

    Removal of random-valued noisy pixels is extremely challenging when the noise density is above 50%. Existing filters are generally not capable of eliminating such noise when the density is above 70%. In this paper a region-wise, density-based detection algorithm for random-valued impulse noise is proposed. On the basis of their intensity values, the pixels of a particular window are sorted and then stored into four regions. The higher-density region is considered for stepwise detection of noisy pixels. With this detection scheme a maximum of 75% of the noisy pixels can be detected. The paper also proposes a dedicated noise removal algorithm. It was experimentally shown that the proposed algorithm not only performs exceptionally well in visual qualitative judgment of standard images, but the filter combination also outperforms existing algorithms in terms of MSE, PSNR and SSIM, even up to a 70% noise density level.
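
The noise model and the PSNR metric from the abstract are easy to reproduce; the region-based detector itself is not sketched here. In random-valued impulse noise, corrupted pixels take arbitrary values in the full intensity range (unlike salt-and-pepper noise, which uses only the extremes), which is why detection is so much harder.

```python
import numpy as np

def add_rv_impulse_noise(img, density, seed=0):
    """Corrupt a grayscale uint8 image with random-valued impulse noise:
    a fraction `density` of the pixels is replaced by values drawn
    uniformly from the full intensity range [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.integers(0, 256, mask.sum(), dtype=img.dtype)
    return noisy, mask

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Any candidate filter can then be benchmarked by comparing `psnr(original, restored)` against `psnr(original, noisy)` at densities of 50-70%.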

  14. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  15. Formulae of differentiation for solving differential equations with complex-valued random coefficients

    International Nuclear Information System (INIS)

    Kim, Ki Hong; Lee, Dong Hun

    1999-01-01

    Generalizing the work of Shapiro and Loginov, we derive new formulae of differentiation useful for solving differential equations with complex-valued random coefficients. We apply the formulae to the quantum-mechanical problem of noninteracting electrons moving in a correlated random potential in one dimension

  16. The asymptotic and exact Fisher information matrices of a vector ARMA process

    NARCIS (Netherlands)

    Klein, A.; Melard, G.; Saidi, A.

    2008-01-01

    The exact Fisher information matrix of a Gaussian vector autoregressive-moving average (VARMA) process has been considered for a time series of length N in relation to the exact maximum likelihood estimation method. In this paper it is shown that the Gaussian exact Fisher information matrix

  17. Vector continued fractions using a generalized inverse

    International Nuclear Information System (INIS)

    Haydock, Roger; Nex, C M M; Wexler, Geoffrey

    2004-01-01

    A real vector space combined with an inverse (involution) for vectors is sufficient to define a vector continued fraction whose parameters consist of vector shifts and changes of scale. The choice of sign for different components of the vector inverse permits construction of vector analogues of the Jacobi continued fraction. These vector Jacobi fractions are related to vector and scalar-valued polynomial functions of the vectors, which satisfy recurrence relations similar to those of orthogonal polynomials. The vector Jacobi fraction has strong convergence properties which are demonstrated analytically, and illustrated numerically
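
A minimal numerical sketch of such a fraction, assuming the simple Euclidean involution v* = v/(v·v) as the vector inverse (the per-component sign choices mentioned in the abstract are a refinement not reproduced here); the function and shift/scale names are illustrative.

```python
import numpy as np

def vinv(v):
    """Generalized vector inverse v* = v / (v . v). It is an involution:
    (v*)* = v, and for 1-vectors it reduces to the ordinary reciprocal."""
    return v / np.dot(v, v)

def vector_continued_fraction(b, a):
    """Evaluate b0 + a1*(b1 + a2*(b2 + ...)^*)^* from the bottom up,
    with vector shifts b[0..n] and scalar scales a[1..n] (list of
    length n)."""
    x = b[-1]
    for bk, ak in zip(reversed(b[:-1]), reversed(a)):
        x = bk + ak * vinv(x)
    return x
```

Embedding scalars as 1-vectors recovers the ordinary Jacobi continued fraction, e.g. all shifts and scales equal to 1 converges to the golden ratio, which gives a quick sanity check of the recursion.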

  18. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that only small numbers of samples are typically available for CNC machine tools, a condition-prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The SVM prediction models are used to predict the trends in the working condition of a certain type of CNC worm wheel and gear grinding machine by applying sequence data of the vibration signal collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal's peak-to-peak value in the surface normal direction is most relevant to the trend of the surface roughness value. In trend prediction of the working condition, the support vector machine has higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction compared to the autoregressive (AR) model and the RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.

  19. A binomial random sum of present value models in investment analysis

    OpenAIRE

    Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος

    1997-01-01

    Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.

  20. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, subsequently encoded by Fisher vectors (FV); (ii) for obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) in order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) for reducing the storage and CPU costs of high-dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance-sorting algorithm. We report experimental results on the dataset STL-10. The simple and efficient framework shows very promising performance compared to conventional methods.

  1. On reflexivity of random walks in a random environment on a metric space

    International Nuclear Information System (INIS)

    Rozikov, U.A.

    2002-11-01

    In this paper, we consider random walks in random environments on a countable metric space when the jumps of the walks are finite. The transfer probabilities of the random walk from x ∈ G (where G is the metric space under consideration) are defined by a vector p(x) ∈ R^k, k > 1, where {p(x), x ∈ G} is a set of independent and identically distributed random vectors. For the random walk, a sufficient condition of nonreflexivity is obtained. Examples for the metric spaces Z^d, free groups, free products of finite numbers of cyclic groups of second order, and some other metric spaces are considered. (author)

  2. Imbalance p values for baseline covariates in randomized controlled trials: a last resort for the use of p values? A pro and contra debate.

    Science.gov (United States)

    Stang, Andreas; Baethge, Christopher

    2018-01-01

    Results of randomized controlled trials (RCTs) are usually accompanied by a table that compares covariates between the study groups at baseline. Sometimes, the investigators report p values for imbalanced covariates. The aim of this debate is to illustrate the pros and cons of the use of these p values in RCTs. Low p values can be a sign of biased or fraudulent randomization and can be used as a warning sign. They can be considered a screening tool with low positive predictive value. Low p values should prompt us to ask for the reasons and for potential consequences, especially in combination with hints of methodological problems. A fair randomization produces the expectation that the distribution of p values follows a flat distribution. It does not produce an expectation related to a single p value. The distribution of p values in RCTs can be influenced by the correlation among covariates, differential misclassification or differential mismeasurement of baseline covariates. Given only a small number of reported p values in the reports of RCTs, judging whether the realized p value distribution is indeed flat becomes difficult. If p values ≤0.005 or ≥0.995 were used as a sign of alarm, the false-positive rate would be 5.0% if randomization was done correctly and five p values per RCT were reported. Use of a low p value as a warning sign that randomization is potentially biased can be considered a vague heuristic. The authors of this debate are obviously more or less enthusiastic about this heuristic and differ in the consequences they propose.
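
The "flat distribution" expectation is easy to check by simulation, assuming SciPy is available. The sample size (200 participants), the normal covariate and the number of replications below are invented for illustration; the point is only that under fair 1:1 randomization each baseline p value is approximately Uniform(0, 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulate many fairly randomized trials and collect the baseline
# two-sample p value of one covariate from each.
pvals = []
for _ in range(2000):
    x = rng.normal(size=200)            # one baseline covariate
    arm = rng.permutation(200) < 100    # random 1:1 allocation
    pvals.append(stats.ttest_ind(x[arm], x[~arm]).pvalue)
pvals = np.array(pvals)
print("fraction with p <= 0.05: ", (pvals <= 0.05).mean())
print("fraction with p <= 0.005:", (pvals <= 0.005).mean())
```

Roughly 5% of the simulated p values fall at or below 0.05 and about 0.5% at or below 0.005, matching the uniform expectation and the ~1% two-sided alarm rate per reported p value discussed above.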

  3. Integrating principal component analysis and vector quantization with support vector regression for sulfur content prediction in HDS process

    Directory of Open Access Journals (Sweden)

    Shokri Saeid

    2015-01-01

    An accurate prediction of sulfur content is very important for proper operation and product quality control in the hydrodesulfurization (HDS) process. For this purpose, a reliable data-driven soft sensor utilizing Support Vector Regression (SVR) was developed, and the effects of integrating Vector Quantization (VQ) with Principal Component Analysis (PCA) were studied in the assessment of this soft sensor. First, in a pre-processing step, the PCA and VQ techniques were used to reduce the dimensions of the original input datasets. Then, the compressed datasets were used as input variables for the SVR model. Experimental data from the HDS setup were employed to validate the proposed integrated model. The integration of VQ/PCA techniques with the SVR model was able to increase the prediction accuracy of SVR. The obtained results show that the integrated technique (VQ-SVR) was better than PCA-SVR in prediction accuracy. Also, VQ decreased the sum of the training and test times of the SVR model in comparison with PCA. For further evaluation, the performance of the VQ-SVR model was also compared to that of SVR alone. The obtained results indicated that the VQ-SVR model delivered the best predictive performance (AARE = 0.0668 and R² = 0.995) among the investigated models.
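
The PCA-SVR variant of this pipeline (compress first, regress on the compressed inputs) can be sketched with scikit-learn, assumed available. The data below are a hypothetical stand-in for the HDS measurements: 40 process variables whose three dominant directions drive the target, with dimensions, coefficients and kernel choice all illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical stand-in for the process data: the first three variables
# carry most of the variance and determine the "sulfur content" target.
X = rng.normal(size=(300, 40))
X[:, :3] *= 3.0                                   # dominant directions
y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + 0.05 * rng.normal(size=300)

# Pre-processing (PCA) feeds compressed inputs to the SVR model.
model = make_pipeline(PCA(n_components=5), SVR(kernel="linear", C=10.0))
model.fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])
print(f"held-out R^2 = {r2:.3f}")
```

The VQ variant would replace the `PCA` step with a codebook-based compressor; scikit-learn has no drop-in VQ transformer, so that step is omitted here.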

  4. Scattering analysis of point processes and random measures

    International Nuclear Information System (INIS)

    Hanisch, K.H.

    1984-01-01

    In the present paper, the scattering analysis of point processes and random measures is studied. Known formulae connecting the scattering intensity with the pair distribution function of the structures studied are proved rigorously with tools from the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for "grain-germ models", a new formula is proved which yields the pair distribution function of the grain-germ model in terms of the pair distribution function of the underlying point process (the "germs") and of the mean structure factor and the mean squared structure factor of the particles (the "grains"). (author)

  5. A Computerized Approach to Trickle-Process, Random Assignment.

    Science.gov (United States)

    Braucht, G. Nicholas; Reichardt, Charles S.

    1993-01-01

    Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)
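
One common way to implement trickle-process random assignment is with randomly permuted blocks: each arriving participant is assigned immediately, yet allocation stays balanced within every completed block. This is a generic sketch, not the article's specific procedure; the arm names and block size are illustrative.

```python
import random

def block_randomizer(arms=("treatment", "control"), block_size=4, seed=None):
    """Trickle-process random assignment with randomly permuted blocks:
    participants are assigned one at a time as they arrive, and every
    completed block of `block_size` is exactly balanced across arms."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    while True:
        block = list(arms) * per_arm
        rng.shuffle(block)          # fresh random permutation per block
        yield from block

# Assign twelve participants as they trickle in.
r = block_randomizer(seed=42)
assignments = [next(r) for _ in range(12)]
print(assignments)
```

Because the permutation is drawn fresh for each block, staff cannot predict the next assignment from the running group totals alone, which is one of the corruption threats such procedures guard against.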

  6. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2014-01-01

    The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability t

  7. Analysis of the role of homology arms in gene-targeting vectors in human cells.

    Directory of Open Access Journals (Sweden)

    Ayako Ishii

    Random integration of targeting vectors into the genome is the primary obstacle in human somatic cell gene targeting. Non-homologous end-joining (NHEJ), a major pathway for repairing DNA double-strand breaks, is thought to be responsible for most random integration events; however, absence of DNA ligase IV (LIG4), the critical NHEJ ligase, does not significantly reduce the random integration frequency of targeting vectors in human cells, indicating robust integration events occurring via a LIG4-independent mechanism. To gain insights into the mechanism and robustness of LIG4-independent random integration, we employed various types of targeting vectors to examine their integration frequencies in LIG4-proficient and -deficient human cell lines. We find that the integration frequency of the targeting vector correlates well with the length of the homology arms and with the amount of repetitive DNA sequences, especially SINEs, present in the arms. This correlation was prominent in LIG4-deficient cells, but was also seen in LIG4-proficient cells, thus providing evidence that LIG4-independent random integration occurs frequently even when NHEJ is functionally normal. Our results collectively suggest that the random integration frequency of conventional targeting vectors is substantially influenced by the homology arms, which typically harbor repetitive DNA sequences that serve to facilitate LIG4-independent random integration in human cells, regardless of the presence or absence of functional NHEJ.

  8. Method of dynamic fuzzy symptom vector in intelligent diagnosis

    International Nuclear Information System (INIS)

    Sun Hongyan; Jiang Xuefeng

    2010-01-01

    Aiming at the requirements of real-time updating of diagnostic symptoms brought by the accumulation of diagnostic knowledge, and at the great gaps in the units and values of diagnostic symptoms in multi-parameter intelligent diagnosis, the method of the dynamic fuzzy symptom vector is proposed. The concept of the dynamic fuzzy symptom vector is defined. An ontology is used to specify the vector elements, and a vector transmission method based on the ontology is built. The changing law of symptom values is analyzed and a fuzzy normalization method based on fuzzy membership functions is built. An example proved that the method of the dynamic fuzzy symptom vector efficiently solves the problems of symptom updating and of unifying symptom values and units. (authors)

  9. A random matrix approach to VARMA processes

    International Nuclear Information System (INIS)

    Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata

    2010-01-01

    We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q₁, q₂) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q₁ > 1 and q₂ > 1.
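
The fixed-ratio N/T regime is easy to probe by Monte Carlo. The sketch below covers only the trivial white-noise special case (no VARMA dynamics), where the sample-covariance spectrum should follow the Marchenko-Pastur law; the paper's FRV analytics for the genuine VARMA(1, 1) case are not reproduced, and the sizes N, T are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# White-noise data: N variables observed at T times, c = N/T fixed.
N, T = 250, 1000                       # c = 0.25
X = rng.standard_normal((N, T))
eig = np.linalg.eigvalsh(X @ X.T / T)  # sample covariance spectrum

c = N / T
lo, hi = (1 - c ** 0.5) ** 2, (1 + c ** 0.5) ** 2  # Marchenko-Pastur edges
print(f"empirical support: [{eig.min():.3f}, {eig.max():.3f}]")
print(f"MP prediction:     [{lo:.3f}, {hi:.3f}]")
```

Feeding X generated by an actual VARMA recursion instead of white noise is the natural next step and reproduces the kind of spectra the paper compares against its analytical densities.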

  10. Search for intermediate vector bosons

    International Nuclear Information System (INIS)

    Klajn, D.B.; Rubbia, K.; Meer, S.

    1983-01-01

    The problem of the registration of and search for intermediate vector bosons is discussed. According to weak-current theory there are three intermediate vector bosons, with electric charges +1 (W⁺), −1 (W⁻) and zero (Z⁰). The investigation of these particles using proton-antiproton beams was suggested in 1976 by Cline, Rubbia and McIntyre. The major difficulties of the experiment are related to the need to produce a sufficient number of antiparticles and to a method of antiproton-beam "cooling" to reduce its random motion. The stochastic method was suggested by van der Meer in 1968 as one possible cooling method. Several large detectors were designed for the search for intermediate vector bosons.

  11. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
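
The classic Richardson-Lucy iteration at the heart of such restoration codes can be sketched in NumPy (this is the textbook algorithm, not the i860 implementation described above); FFT-based convolution with periodic boundaries is assumed, and the iteration count is illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """Richardson-Lucy deconvolution with FFT-based, periodic
    convolution: multiplicative updates that preserve non-negativity.
    `psf` is centered, non-negative and sums to 1."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf))
    conv = lambda img, kern_f: np.fft.irfft2(np.fft.rfft2(img) * kern_f,
                                             s=img.shape)
    est = np.full_like(blurred, blurred.mean())   # flat initial estimate
    for _ in range(iters):
        ratio = blurred / np.maximum(conv(est, otf), 1e-12)
        est = est * conv(ratio, np.conj(otf))     # conj -> correlation
    return est
```

Each iteration is two FFT convolutions plus pointwise arithmetic, which is exactly the mix of operations (dominated by the FFTs benchmarked above) that a vector board accelerates well.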

  12. Production of lentiviral vectors

    Directory of Open Access Journals (Sweden)

    Otto-Wilhelm Merten

    2016-01-01

    Lentiviral vectors (LVs) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LVs are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments tend toward hollow-fiber reactors, suspension culture processes, and the implementation of stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host-cell DNA or host-cell proteins. This review compares published large-scale production and purification processes of LVs and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles of clinical material will be presented.

  13. Set-valued and fuzzy stochastic integral equations driven by semimartingales under Osgood condition

    Directory of Open Access Journals (Sweden)

    Malinowski Marek T.

    2015-01-01

    We analyze set-valued stochastic integral equations driven by continuous semimartingales and prove the existence and uniqueness of solutions to such equations in the framework of the hyperspace of nonempty, bounded, convex and closed subsets of the Hilbert space L² (consisting of square-integrable random vectors). The coefficients of the equations are assumed to satisfy an Osgood-type condition that is a generalization of the Lipschitz condition. Continuous dependence of solutions with respect to the data of the equation is also presented. We consider equations driven by a semimartingale Z and equations driven by the processes A, M from the decomposition of Z, where A is a process of finite variation and M is a local martingale. These equations are not equivalent. Finally, we show that the analysis of set-valued stochastic integral equations can be extended to the case of fuzzy stochastic integral equations driven by semimartingales under an Osgood-type condition. To obtain our results we use set-valued and fuzzy Maruyama-type approximations and Bihari's inequality.

  14. Light scattering of rectangular slot antennas: parallel magnetic vector vs perpendicular electric vector

    Science.gov (United States)

    Lee, Dukhyung; Kim, Dai-Sik

    2016-01-01

    We study light scattering off rectangular slot nanoantennas on a metal film, varying incident polarization and incident angle, to examine which field vector of light is more important: the electric vector perpendicular to, or the magnetic vector parallel to, the long axis of the rectangle. While the vector Babinet's principle would prefer the magnetic field along the long axis for optimizing slot antenna function, convention and intuition most often refer to the electric field perpendicular to it. Here, we demonstrate experimentally that, in accordance with the vector Babinet's principle, the incident magnetic vector parallel to the long axis is the dominant component, with the perpendicular incident electric field making a small contribution of a factor of 1/|ε|, the reciprocal of the absolute value of the dielectric constant of the metal, owing to the non-perfectness of metals at optical frequencies.

  15. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available To resolve the difficulty of online detection of some important state variables in fermentation processes with traditional instruments, a soft sensing modeling method based on the relevance vector machine (RVM) with a hybrid kernel function is presented. Based on a characteristic analysis of two commonly used kernel functions, the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of both is constructed. To design optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the cell concentration in a lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
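
    As an illustration of the hybrid-kernel idea, the sketch below mixes a local Gaussian kernel with a global polynomial kernel and fits a regularized kernel model. The weights, kernel parameters, toy data, and the kernel-ridge stand-in for the RVM fit are all illustrative assumptions, not the authors' implementation (which additionally tunes the parameters with PSO):

```python
import numpy as np

def hybrid_kernel(X, Y, w=0.7, sigma=1.0, degree=2, c=1.0):
    """Weighted mix of a local Gaussian kernel and a global polynomial
    kernel. The weight w and kernel parameters are illustrative values."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    gaussian = np.exp(-d2 / (2 * sigma ** 2))
    poly = (X @ Y.T + c) ** degree
    return w * gaussian + (1 - w) * poly

# Kernel ridge regression as a simple stand-in for the RVM fit.
rng = np.random.default_rng(0)
X = rng.uniform(0, 4, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

K = hybrid_kernel(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)  # regularized fit
y_hat = K @ alpha
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

Both component kernels are positive semidefinite, so the weighted sum is as well, which keeps the regularized linear system well posed.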

  16. A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications

    Directory of Open Access Journals (Sweden)

    Arnaud Marsiglietti

    2018-03-01

    Full Text Available We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure d(x, x̂) = |x − x̂|^r, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log √(πe) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log √(πe/2) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log √(πe/2) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry.
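
    Reading the two constants above as log √(πe) ≈ 1.5 bits and log √(πe/2) ≈ 1 bit, they can be checked numerically. The second half of the sketch verifies the classical max-entropy fact behind the Shannon lower bound (a Gaussian maximizes differential entropy for a given variance), using a Laplace density with an arbitrary scale as the log-concave example:

```python
import math

# The two gap constants, measured in bits (log base 2).
gap_general = 0.5 * math.log2(math.pi * math.e)      # log sqrt(pi*e)
gap_mse = 0.5 * math.log2(math.pi * math.e / 2)      # log sqrt(pi*e/2)
print(f"gap for |x - x^|^r distortion: {gap_general:.3f} bits")  # ~1.547
print(f"gap for mean-square error:     {gap_mse:.3f} bits")      # ~1.047

# Max-entropy sanity check: among densities with a fixed variance, the
# Gaussian maximizes differential entropy. Closed forms (in nats):
b = 1.3                                  # Laplace scale, arbitrary choice
var = 2 * b * b                          # variance of the Laplace density
h_laplace = 1 + math.log(2 * b)
h_gaussian = 0.5 * math.log(2 * math.pi * math.e * var)
print("Laplace entropy < Gaussian entropy:", h_laplace < h_gaussian)
```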

  17. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Full Text Available Abstract Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone (p …). Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
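
    A minimal version of such a classifier comparison with 5-fold cross-validation can be sketched with scikit-learn. The synthetic data set with 10 features and the subset of four classifiers are stand-ins for the study's real neuropsychological data and full classifier list:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: 10 "neuropsychological test" features, binary outcome.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV as in the study
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

In practice the study also compares sensitivity, specificity and AUC, which `cross_val_score` supports via its `scoring` parameter.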

  18. Marolo (Annona crassiflora Mart.): a study of value chain and processing

    Directory of Open Access Journals (Sweden)

    Síntia Carla Corrêa

    2013-06-01

    Full Text Available This article aims to discuss the needs and problems of the marolo value chain, as well as to evaluate the rehydration process of this fruit as a possible by-product use during interharvest periods. The study of the value chain included interviews with producers, handlers, and fruit and by-product sellers. To evaluate the rehydration process, marolo was dehydrated using a conventional procedure and freeze-drying. The experiments were conducted in a completely randomized design and a triple factorial scheme (2 × 2 × 6). ANOVA was performed, followed by Tukey's test (p < 0.05). Regression models were generated and adjusted for the time factor. The precariousness of the marolo value chain was observed. The best procedure for marolo dehydration should be determined according to the intended use of the dehydrated product, since the water-absorption capacity of the flour is higher and convective hot-air drying is more effective in retaining soluble solids and reducing damage to the fruit. These results aim at contributing to the marolo value chain and to the preservation of native trees in the Brazilian savanna biome, and can be used to analyze other underutilized crops.

  19. Quark-gluon plasma tomography by vector mesons

    International Nuclear Information System (INIS)

    Lovas, I.; Schram, Zs.; Csernai, L.P.; Hungarian Academy of Sciences, Budapest; Nyiri, A.

    2001-01-01

    The fireball formed in a heavy ion collision is characterized by the impact parameter vector b, which can be determined from the multiplicity and the angular distribution of the reaction products. By appropriate rotations, the b vectors of each collision can be aligned into a fixed direction. Using the measured values of the momentum distributions, independent integral equations can be formulated for the unknown emission densities E_M(r) and for the unknown absorption densities Δμ(r) of the different vector mesons. (author)

  20. Discrete random signal processing and filtering primer with Matlab

    CERN Document Server

    Poularikas, Alexander D

    2013-01-01

    Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering. Written for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills, with numerous useful examples, problems, and solutions forming an extensive and powerful review. The author, a leading expert in the field of electrical and computer engineering, offe

  1. Deterministic multivalued logic scheme for information processing and routing in the brain

    International Nuclear Information System (INIS)

    Bezrukov, Sergey M.; Kish, Laszlo B.

    2009-01-01

    Driven by analogies with state vectors of quantum informatics and noise-based logic, we propose a general scheme and elements of neural circuitry for processing and addressing information in the brain. Specifically, we consider random (e.g., Poissonian) trains of finite-duration spikes, and, using the idealized concepts of excitatory and inhibitory synapses, offer a procedure for generating 2^N − 1 orthogonal vectors out of N partially overlapping trains ('neuro-bits'). We then show that these vectors can be used to construct 2^(2^N − 1) − 1 different superpositions which represent the same number of logic values when carrying or routing information. In quantum informatics the above numbers are the same; however, the present logic scheme is more advantageous because it is deterministic in the sense that the presence of a vector in the spike train is detected by an appropriate coincidence circuit. For this reason it does not require time averaging or repeated measurements of the kind used in standard cross-correlation analysis or in quantum computing.

  2. Deterministic multivalued logic scheme for information processing and routing in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Bezrukov, Sergey M. [Laboratory of Physical and Structural Biology, Program in Physical Biology, NICHD, National Institutes of Health, Bethesda, MD 20892 (United States); Kish, Laszlo B., E-mail: laszlo.kish@ece.tamu.ed [Department of Electrical and Computer Engineering, Texas A and M University, Mailstop 3128, College Station, 77843-3128 TX (United States)

    2009-06-22

    Driven by analogies with state vectors of quantum informatics and noise-based logic, we propose a general scheme and elements of neural circuitry for processing and addressing information in the brain. Specifically, we consider random (e.g., Poissonian) trains of finite-duration spikes, and, using the idealized concepts of excitatory and inhibitory synapses, offer a procedure for generating 2^N − 1 orthogonal vectors out of N partially overlapping trains ('neuro-bits'). We then show that these vectors can be used to construct 2^(2^N − 1) − 1 different superpositions which represent the same number of logic values when carrying or routing information. In quantum informatics the above numbers are the same; however, the present logic scheme is more advantageous because it is deterministic in the sense that the presence of a vector in the spike train is detected by an appropriate coincidence circuit. For this reason it does not require time averaging or repeated measurements of the kind used in standard cross-correlation analysis or in quantum computing.

  3. Decays of the vector glueball

    Science.gov (United States)

    Giacosa, Francesco; Sammet, Julia; Janowski, Stanislaus

    2017-06-01

    We calculate two- and three-body decays of the (lightest) vector glueball into (pseudo)scalar, (axial-)vector, as well as pseudovector and excited vector mesons in the framework of a model of QCD. While absolute values of widths cannot be predicted because the corresponding coupling constants are unknown, some interesting branching ratios can be evaluated by setting the mass of the yet hypothetical vector glueball to 3.8 GeV as predicted by quenched lattice QCD. We find that the decay mode ωππ should be one of the largest (both through the decay chain O → b1π → ωππ and through the direct coupling O → ωππ). Similarly, the (direct and indirect) decay into πKK*(892) is sizable. Moreover, the decays into ρπ and K*(892)K are, although subleading, possible and could play a role in explaining the ρπ puzzle of the charmonium state ψ(2S) thanks to a (small) mixing with the vector glueball. The vector glueball can be directly formed at the ongoing BESIII experiment as well as at the future PANDA experiment at the FAIR facility. If the width is sufficiently small (≲ 100 MeV), it should not escape future detection. It should be stressed that the employed model is based on some inputs and simplifying assumptions: the value of the glueball mass (at present, the quenched lattice value is used), the lack of mixing of the glueball with other quarkonium states, and the use of few interaction terms. It thus represents a first step toward the identification of the main decay channels of the vector glueball, but shall be improved when corresponding experimental candidates and/or new lattice results become available.

  4. Valuing the Accreditation Process

    Science.gov (United States)

    Bahr, Maria

    2018-01-01

    The value of the National Association for Developmental Education (NADE) accreditation process is far-reaching. Not only do students and programs benefit from the process, but also the entire institution. Through data collection of student performance, analysis, and resulting action plans, faculty and administrators can work cohesively towards…

  5. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  6. Noise reduction by support vector regression with a Ricker wavelet kernel

    International Nuclear Information System (INIS)

    Deng, Xiaoying; Yang, Dinghui; Xie, Jing

    2009-01-01

    We propose a noise filtering technology based on the least-squares support vector regression (LS-SVR), to improve the signal-to-noise ratio (SNR) of seismic data. We modified it by using an admissible support vector (SV) kernel, namely the Ricker wavelet kernel, to replace the conventional radial basis function (RBF) kernel in seismic data processing. We investigated the selection of the regularization parameter for the LS-SVR and derived a concise selecting formula directly from the noisy data. We used the proposed method for choosing the regularization parameter which not only had the advantage of high speed but could also obtain almost the same effectiveness as an optimal parameter method. We conducted experiments using synthetic data corrupted by the random noise of different types and levels, and found that our method was superior to the wavelet transform-based approach and the Wiener filtering. We also applied the method to two field seismic data sets and concluded that it was able to effectively suppress the random noise and improve the data quality in terms of SNR

  7. Noise reduction by support vector regression with a Ricker wavelet kernel

    Science.gov (United States)

    Deng, Xiaoying; Yang, Dinghui; Xie, Jing

    2009-06-01

    We propose a noise filtering technology based on the least-squares support vector regression (LS-SVR), to improve the signal-to-noise ratio (SNR) of seismic data. We modified it by using an admissible support vector (SV) kernel, namely the Ricker wavelet kernel, to replace the conventional radial basis function (RBF) kernel in seismic data processing. We investigated the selection of the regularization parameter for the LS-SVR and derived a concise selecting formula directly from the noisy data. We used the proposed method for choosing the regularization parameter which not only had the advantage of high speed but could also obtain almost the same effectiveness as an optimal parameter method. We conducted experiments using synthetic data corrupted by the random noise of different types and levels, and found that our method was superior to the wavelet transform-based approach and the Wiener filtering. We also applied the method to two field seismic data sets and concluded that it was able to effectively suppress the random noise and improve the data quality in terms of SNR.
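
    A toy version of wavelet-kernel regression denoising can be sketched as follows. The particular Ricker kernel form, the synthetic trace, and all parameter values are illustrative assumptions, and plain kernel ridge regression stands in for the LS-SVR with its data-driven regularization selection:

```python
import numpy as np

def ricker_kernel(x, y, sigma=1.0):
    """Translation-invariant kernel built from the Ricker (Mexican hat)
    wavelet; this particular form is an assumption, not the paper's exact
    kernel. Its Fourier transform is nonnegative, so it is admissible."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return (1 - d2 / sigma**2) * np.exp(-d2 / (2 * sigma**2))

# LS-SVR-style smoothing: solve (K + I/gamma) alpha = y, predict K alpha.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)  # toy "seismic" trace
noisy = clean + 0.3 * rng.standard_normal(t.size)

K = ricker_kernel(t, t, sigma=0.05)
alpha = np.linalg.solve(K + np.eye(t.size) / 50.0, noisy)
denoised = K @ alpha

def snr_db(sig, est):
    """Signal-to-noise ratio of an estimate, in dB."""
    return 10 * np.log10(np.sum(sig**2) / np.sum((sig - est)**2))

print(f"SNR noisy:    {snr_db(clean, noisy):.1f} dB")
print(f"SNR denoised: {snr_db(clean, denoised):.1f} dB")
```

The kernel width sigma sets the passband: components far outside the wavelet's frequency band are attenuated by the regularized solve, which is what suppresses the broadband random noise.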

  8. Pseudo random signal processing theory and application

    CERN Document Server

    Zepernick, Hans-Jurgen

    2013-01-01

    In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications
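
    A classic example of the pseudo random sequences covered by such texts is the maximal-length sequence (m-sequence) produced by a linear feedback shift register. The sketch below is a generic textbook construction, not taken from this book:

```python
def lfsr_bits(taps, state, n):
    """Fibonacci LFSR over GF(2). 'taps' are feedback positions counted
    from the output end; with a primitive feedback configuration the
    output is an m-sequence of period 2^m - 1 for an m-bit register."""
    bits = []
    reg = list(state)
    for _ in range(n):
        bits.append(reg[-1])          # output bit
        fb = 0
        for tap in taps:              # XOR the tapped stages
            fb ^= reg[-tap]
        reg = [fb] + reg[:-1]         # shift, feeding fb back in
    return bits

# A 4-bit LFSR with taps at positions 4 and 1 (a primitive configuration)
# yields a period-15 m-sequence.
seq = lfsr_bits(taps=[4, 1], state=[1, 0, 0, 0], n=30)
print(seq[:15])
print("period-15 repeat:", seq[:15] == seq[15:30])
print("balance (ones, zeros):", (sum(seq[:15]), 15 - sum(seq[:15])))  # (8, 7)
```

One full period contains 8 ones and 7 zeros, the balance property that gives m-sequences their noise-like correlation behavior.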

  9. Magnetic vector field tag and seal

    Science.gov (United States)

    Johnston, Roger G.; Garcia, Anthony R.

    2004-08-31

    One or more magnets are placed in a container (preferably on objects inside the container) and the magnetic field strength and vector direction are measured with a magnetometer from at least one location near the container to provide the container with a magnetic vector field tag and seal. The location(s) of the magnetometer relative to the container are also noted. If the position of any magnet inside the container changes, then the measured vector fields at these locations also change, indicating that the tag has been removed, the seal has been broken, and therefore that the container and the objects inside may have been tampered with. A hollow wheel with magnets inside may also provide a similar magnetic vector field tag and seal. As the wheel turns, the magnets tumble randomly inside, removing the tag and breaking the seal.

  10. A successive order of scattering model for solving vector radiative transfer in the atmosphere

    International Nuclear Information System (INIS)

    Min Qilong; Duan Minzheng

    2004-01-01

    A full vector radiative transfer model for vertically inhomogeneous plane-parallel media has been developed using the successive order of scattering approach. In this model, a fast analytical expansion of the Fourier decomposition is implemented and an exponent-linear assumption is used for the vertical integration. An analytic angular interpolation method of post-processing the source function is also implemented to accurately interpolate the Stokes vector at arbitrary angles for a given solution. The model has been tested against benchmarks for the case of randomly orientated oblate spheroids, illustrating good agreement for each Stokes vector (within 0.01%). Sensitivity tests have been conducted to illustrate the accuracy of the vertical integration and angle interpolation approaches. The contribution of each scattering order for different optical depths and single scattering albedos is also analyzed.

  11. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and automatic-vectorizing Fortran compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes a discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  12. Evolutionary formalism from random Leslie matrices in biology

    International Nuclear Information System (INIS)

    Caceres, M.O.; Caceres-Saez, I.

    2008-07-01

    We present a perturbative formalism to deal with linear random matrix difference equations. We generalize the concept of the population growth rate to the case when a Leslie matrix has random elements (i.e., characterizing the disorder in the vital parameters). Its dominant eigenvalue, which defines the asymptotic dynamics of the mean-value population vector state, is presented as the effective growth rate of a random Leslie model. This eigenvalue is calculated from the largest positive root of a secular polynomial. Analytical (exact and perturbative) results are presented for several models of disorder. A 3 x 3 numerical example is applied to study the effective growth rate characterizing the long-time dynamics of a biological population, Tursiops sp. (author)
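
    The effective-growth-rate idea can be illustrated with a small Monte Carlo sketch. The 3 x 3 structure follows the abstract, but the vital-rate values, disorder ranges, and the mean-matrix approximation are illustrative assumptions, not the authors' perturbative formalism or their Tursiops data:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_leslie(rng):
    """3x3 Leslie matrix with randomly perturbed vital parameters:
    fecundities f_i in the first row, survival rates s_i on the
    subdiagonal. All baseline values and ranges are illustrative."""
    f = np.array([0.0, 1.2, 1.0]) * rng.uniform(0.8, 1.2, 3)
    s = np.array([0.7, 0.8]) * rng.uniform(0.9, 1.1, 2)
    L = np.zeros((3, 3))
    L[0, :] = f
    L[1, 0], L[2, 1] = s
    return L

# Effective growth rate approximated as the dominant eigenvalue of the
# disorder-averaged Leslie matrix (mean-value population dynamics).
samples = [random_leslie(rng) for _ in range(2000)]
L_mean = np.mean(samples, axis=0)
growth = max(abs(np.linalg.eigvals(L_mean)))
print(f"effective growth rate (dominant eigenvalue): {growth:.3f}")
```

With these baseline rates the dominant eigenvalue sits slightly above 1, i.e., a slowly growing population on average.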

  13. Solution-Processed Carbon Nanotube True Random Number Generator.

    Science.gov (United States)

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
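
    The bit-generation strategy (digitizing physical noise, then light post-processing) can be mimicked in software. The biased pseudo-random source below stands in for the SRAM thermal-noise cells, and von Neumann debiasing is a generic whitening step used here for illustration, not necessarily the authors' pipeline:

```python
import random

def noisy_bit_source(n, bias=0.45, seed=7):
    """Stand-in for SRAM cells digitizing thermal noise: independent
    bits with a slight bias, as a raw physical source might produce."""
    rng = random.Random(seed)
    return [1 if rng.random() < bias else 0 for _ in range(n)]

def von_neumann_debias(bits):
    """Classic whitening step: map the pair 01 -> 0 and 10 -> 1, and
    discard 00 and 11. Removes bias from independent raw bits at the
    cost of throwing away more than half of them."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = noisy_bit_source(100_000)
clean = von_neumann_debias(raw)
print(f"raw ones fraction:      {sum(raw) / len(raw):.3f}")
print(f"debiased ones fraction: {sum(clean) / len(clean):.3f}")
```

A hardware TRNG would follow this with standardized statistical test suites (the abstract mentions such tests) rather than a single frequency check.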

  14. Scaling behaviour of randomly alternating surface growth processes

    International Nuclear Information System (INIS)

    Raychaudhuri, Subhadip; Shapir, Yonathan

    2002-01-01

    The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depend on the timing of the applications. The analytical results are supported by numerical simulations of various pairs of primary processes and with different distribution functions. Self-affine surfaces grown by two randomly alternating processes are common in nature (e.g., due to randomly changing weather conditions) and in man-made devices such as rechargeable batteries

  15. The Impact of Using Randomized Homework Values on Student Learning

    Science.gov (United States)

    Berardi, Victor

    2011-01-01

    Much of the recent research on homework focuses on using online, web-based, or computerized homework systems. These systems have many reported capabilities and benefits, including the ability to randomize values, which enables multiple attempts by a student or to reduce academic dishonesty. This study reports on the impact of using randomized…

  16. CRITICAL EVALUATION OF THE EFFECTIVENESS OF SEWAGE SLUDGE DISINFECTION AND VECTOR ATTRACTION REDUCTION PROCESSES

    Science.gov (United States)

    What is the current state of management practices for biosolids production and application, and how can those be made more effective? How effective are Class B disinfection and vector attraction processes, and public access and harvesting restrictions at reducing the public's exp...

  17. Monitoring by Use of Clusters of Sensor-Data Vectors

    Science.gov (United States)

    Iverson, David L.

    2007-01-01

    The inductive monitoring system (IMS) is a system of computer hardware and software for automated monitoring of the performance, operational condition, physical integrity, and other aspects of the health of a complex engineering system (e.g., an industrial process line or a spacecraft). The input to the IMS consists of streams of digitized readings from sensors in the monitored system. The IMS determines the type and amount of any deviation of the monitored system from a nominal or normal ('healthy') condition on the basis of a comparison between (1) vectors constructed from the incoming sensor data and (2) corresponding vectors in a database of nominal or normal behavior. The term 'inductive' reflects the use of a process reminiscent of traditional mathematical induction to learn about normal operation and build the nominal-condition database. The IMS offers two major advantages over prior computational monitoring systems: the computational burden of the IMS is significantly smaller, and there is no need for abnormal-condition sensor data for training the IMS to recognize abnormal conditions. The figure schematically depicts the relationships among the computational processes effected by the IMS. Training sensor data are gathered during normal operation of the monitored system, detailed computational simulation of operation of the monitored system, or both. The training data are formed into vectors that are used to generate the database. The vectors in the database are clustered into regions that represent normal or nominal operation. Once the database has been generated, the IMS compares the vectors of incoming sensor data with vectors representative of the clusters. The monitored system is deemed to be operating normally or abnormally, depending on whether the vector of incoming sensor data is or is not, respectively, sufficiently close to one of the clusters. For this purpose, a distance between two vectors is calculated by a suitable metric (e.g., Euclidean
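
    The comparison step can be sketched generically: cluster nominal training vectors, then flag an incoming vector whose distance to the nearest cluster exceeds a threshold. The sensor values, the use of KMeans, and the quantile threshold below are illustrative assumptions, not NASA's IMS implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Training: sensor-data vectors gathered during *nominal* operation only
# (3 sensors, two normal operating regimes); all values are illustrative.
nominal = np.vstack([
    rng.normal([1.0, 5.0, 0.2], 0.1, (500, 3)),  # regime A
    rng.normal([2.0, 3.0, 0.8], 0.1, (500, 3)),  # regime B
])

# Build the nominal-condition "database" as cluster regions.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(nominal)

# Deviation threshold: a high quantile of the training vectors'
# Euclidean distances to their nearest cluster center.
train_dist = km.transform(nominal).min(axis=1)
threshold = np.quantile(train_dist, 0.999)

def is_anomalous(vec):
    """Flag a sensor vector that is far from every nominal cluster."""
    return km.transform(vec.reshape(1, -1)).min() > threshold

print(is_anomalous(np.array([1.0, 5.0, 0.2])))  # nominal-looking vector
print(is_anomalous(np.array([4.0, 1.0, 2.0])))  # far from both regimes
```

Note that, as in the IMS, no abnormal-condition data are needed for training: the threshold is derived from nominal data alone.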

  18. A vector matching method for analysing logic Petri nets

    Science.gov (United States)

    Du, YuYue; Qi, Liang; Zhou, MengChu

    2011-11-01

    Batch processing functions and passing-value indeterminacy in cooperative systems can be described and analysed by logic Petri nets (LPNs). To directly analyse the properties of LPNs, the concept of transition enabling vector sets is presented and a vector matching method for judging the enabled transitions is proposed in this article. The incidence matrix of LPNs is defined; an equation describing the marking change due to a transition's firing is given; and a reachable tree is constructed. The state space explosion is mitigated to a certain extent by directly analysing LPNs. Finally, the validity and reliability of the proposed method are illustrated by an example in electronic commerce.

  19. A random-matrix theory of the number sense.

    Science.gov (United States)

    Hannagan, T; Nieder, A; Viswanathan, P; Dehaene, S

    2017-02-19

    Number sense, a spontaneous ability to process approximate numbers, has been documented in human adults, infants and newborns, and many other animals. Species as distant as monkeys and crows exhibit very similar neurons tuned to specific numerosities. How number sense can emerge in the absence of learning or fine tuning is currently unknown. We introduce a random-matrix theory of self-organized neural states where numbers are coded by vectors of activation across multiple units, and where the vector codes for successive integers are obtained through multiplication by a fixed but random matrix. This cortical implementation of the 'von Mises' algorithm explains many otherwise disconnected observations ranging from neural tuning curves in monkeys to looking times in neonates and cortical numerotopy in adults. The theory clarifies the origin of Weber-Fechner's Law and yields a novel and empirically validated prediction of multi-peak number neurons. Random matrices constitute a novel mechanism for the emergence of brain states coding for quantity. This article is part of a discussion meeting issue 'The origins of numerical abilities'. © 2017 The Author(s).

  20. Real-time definition of non-randomness in the distribution of genomic events.

    Directory of Open Access Journals (Sweden)

    Ulrich Abel

    Full Text Available Features such as mutations or structural characteristics can be non-randomly or non-uniformly distributed within a genome. So far, computer simulations were required for statistical inferences on the distribution of sequence motifs. Here, we show that these analyses are possible using an analytical, mathematical approach. For the assessment of non-randomness, our calculations only require information including genome size, the number of (sampled) sequence motifs and distance parameters. We have developed computer programs evaluating our analytical formulas for the real-time determination of expected values and p-values. This approach permits a flexible cluster definition that can be applied to most effectively identify non-random or non-uniform sequence motif distributions. As an example, we show the effectiveness and reliability of our mathematical approach in clinical retroviral vector integration site distribution.

  1. Noncommutative and vector-valued Rosenthal inequalities

    NARCIS (Netherlands)

    Dirksen, S.

    2011-01-01

    This thesis is dedicated to the study of a class of probabilistic inequalities, called Rosenthal inequalities. These inequalities provide two-sided estimates for the p-th moments of the sum of a sequence of independent, mean zero random variables in terms of a suitable norm on the sequence itself.

  2. Structuring Stokes correlation functions using vector-vortex beam

    Science.gov (United States)

    Kumar, Vijay; Anwar, Ali; Singh, R. P.

    2018-01-01

    Higher-order statistical correlations of the optical vector speckle field, formed due to the scattering of a vector-vortex beam, are explored. Here, we report on the experimental construction of the Stokes parameter covariance matrix, consisting of all possible spatial Stokes parameter correlation functions. We also propose and experimentally realize new Stokes correlation functions, called Stokes field autocorrelation functions. It is observed that the Stokes correlation functions of the vector-vortex beam are reflected in the respective Stokes correlation functions of the corresponding vector speckle field. The major advantage of the proposed Stokes correlation functions is that they can be easily tuned by manipulating the polarization of the vector-vortex beam used to generate the vector speckle field, and that phase information can be obtained directly from intensity measurements. Moreover, this approach leads to a complete experimental Stokes characterization of a broad range of random fields.

  3. Calculus with vectors

    CERN Document Server

    Treiman, Jay S

    2014-01-01

    Calculus with Vectors grew out of a strong need for a beginning calculus textbook for undergraduates who intend to pursue careers in STEM fields. The approach introduces vector-valued functions from the start, emphasizing the connections between one-variable and multi-variable calculus. The text includes early vectors and early transcendentals and takes a rigorous but informal approach to vectors. Examples and focused applications are well presented along with an abundance of motivating exercises. All three-dimensional graphs have rotatable versions included as extra source materials and may be freely downloaded and manipulated with Maple Player; a free Maple Player App is available for the iPad on iTunes. The approaches taken to topics such as the derivation of the derivatives of sine and cosine, the approach to limits, and the use of "tables" of integration have been modified from the standards seen in other textbooks in order to maximize the ease with which students may comprehend the material. Additio...

  4. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  5. Vector boson scattering at CLIC

    Energy Technology Data Exchange (ETDEWEB)

    Kilian, Wolfgang; Fleper, Christian [Department Physik, Universitaet Siegen, 57068 Siegen (Germany); Reuter, Juergen [DESY Theory Group, 22603 Hamburg (Germany); Sekulla, Marco [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie, 76131 Karlsruhe (Germany)

    2016-07-01

    Linear colliders operating in a range of multiple TeV are able to investigate the details of vector boson scattering and electroweak symmetry breaking. We calculate cross sections with the Monte Carlo generator WHIZARD for vector boson scattering processes at the future linear e+e- collider CLIC. By finding suitable cuts, the vector boson scattering signal processes are isolated from the background. Finally, we are able to determine exclusion sensitivities on the non-Standard Model parameters of the relevant dimension eight operators.

  6. VECTOR INTEGRATION

    NARCIS (Netherlands)

    Thomas, E. G. F.

    2012-01-01

    This paper deals with the theory of integration of scalar functions with respect to a measure with values in a, not necessarily locally convex, topological vector space. It focuses on the extension of such integrals from bounded measurable functions to the class of integrable functions, proving

  7. New large-deviation local theorems for sums of independent and identically distributed random vectors when the limit distribution is α-stable

    OpenAIRE

    Nagaev, Alexander; Zaigraev, Alexander

    2005-01-01

    A class of absolutely continuous distributions in Rd is considered. Each distribution belongs to the domain of normal attraction of an α-stable law. The limit law is characterized by a spectral measure which is absolutely continuous with respect to the spherical Lebesgue measure. The large-deviation problem for sums of independent and identically distributed random vectors when the underlying distribution belongs to that class is studied. At the focus of attention are the deviations in the di...

  8. Modeling and Compensation of Random Drift of MEMS Gyroscopes Based on Least Squares Support Vector Machine Optimized by Chaotic Particle Swarm Optimization.

    Science.gov (United States)

    Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng

    2017-10-13

    MEMS (Micro Electro Mechanical System) gyroscopes have been widely applied in various fields, but MEMS gyroscope random drift has nonlinear and non-stationary characteristics. Modeling and compensating this random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes to use wavelet filtering to reduce noise in the original MEMS gyroscope data, then reconstruct the random drift data with PSR (phase space reconstruction), and establish a model for the reconstructed data by LSSVM (least squares support vector machine), whose parameters were optimized using CPSO (chaotic particle swarm optimization). Comparing the proposed method with BP-ANN (back propagation artificial neural network) for modeling the MEMS gyroscope random drift, the results showed that the former had better prediction accuracy. Applying the compensation to three groups of MEMS gyroscope random drift data, the standard deviation dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s and 0.00061°/s, respectively, demonstrating that the proposed method can reduce the influence of MEMS gyroscope random drift and verifying its effectiveness for modeling such drift.

  9. Properties of asymmetry of the electro disintegration process with vector-polarized deuterons

    CERN Document Server

    Rekalo, M P; Rekalo, O P

    2002-01-01

    The properties of the asymmetry A sub y (theta) in the exclusive electrodisintegration of vector-polarized deuterons, d-vector (e, e' p)n, have been investigated (the target polarization vector is directed perpendicularly to the plane of the reaction gamma sup * + d-vector -> n + p). All calculations have been done in the framework of the relativistic impulse approximation with unitarized multipole gamma sup * + d-vector -> n + p amplitudes in order to account for the final-state NN interaction in the reaction d-vector (e, e' p)n. The significance of various mechanisms in the formation of the angular dependence of the asymmetry A sub y (theta) is discussed for coplanar kinematical conditions.

  10. Quasiperiodicity in time evolution of the Bloch vector under the thermal Jaynes-Cummings model

    Science.gov (United States)

    Azuma, Hiroo; Ban, Masashi

    2014-07-01

    We study a quasiperiodic structure in the time evolution of the Bloch vector, whose dynamics is governed by the thermal Jaynes-Cummings model (JCM). Putting the two-level atom into a certain pure state and the cavity field into a mixed state in thermal equilibrium at the initial time, we let the whole system evolve according to the JCM Hamiltonian. During this time evolution, the motion of the Bloch vector appears disordered: because of the thermal photon distribution, both the norm and the direction of the Bloch vector change erratically. In this paper, taking a different viewpoint from the usual ones, we investigate the quasiperiodicity of the Bloch vector's trajectories. Introducing the concept of quasiperiodic motion, we can explain the seemingly confused behaviour of the system as an intermediate state between periodic and chaotic motions. More specifically, we discuss the following two facts: (1) if we adjust the time interval Δt properly, figures consisting of dots plotted at that constant time interval acquire scale invariance under replacement of Δt by sΔt, where s (>1) is an arbitrary real but not transcendental number; (2) we can compute values of the time variable t for which |Sz(t)| (the absolute value of the z-component of the Bloch vector) is very small with the Diophantine approximation (a rational approximation of an irrational number).

  11. Comparison of confirmed inactive and randomly selected compounds as negative training examples in support vector machine-based virtual screening.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2013-07-22

    The choice of negative training data for machine learning is a little explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.

  12. Computation of convex bounds for present value functions with random payments

    NARCIS (Netherlands)

    Ahcan, A.; Darkiewicz, G.; Goovaerts, M.J.; Hoedemakers, T.

    2006-01-01

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by

  13. Selection vector filter framework

    Science.gov (United States)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms taking advantage of the weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method holds the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
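    The selection principle described above, that the output is always one of the input vectors, namely the one minimizing the summed distance to all samples in the window, can be illustrated with a minimal unweighted vector median filter in NumPy. This is our sketch, not the paper's generalized framework: the independent angular and distance weight vectors are omitted and the names are ours.

```python
import numpy as np

def vector_median(window):
    """Return the vector in `window` (m x d) that minimizes the sum of
    Euclidean distances to all other vectors in the window."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[d.sum(axis=1).argmin()]

def vmf(image, radius=1):
    """Apply the vector median filter to a multichannel image (H x W x C)
    using a (2*radius+1)-square window, clipped at the borders."""
    H, W, C = image.shape
    out = np.empty_like(image)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            out[i, j] = vector_median(image[i0:i1, j0:j1].reshape(-1, C))
    return out
```

    Because the filter selects rather than averages, it never introduces colors absent from the input, which is why this family performs well under impulsive noise.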

  14. Vectorization of three-dimensional neutron diffusion code CITATION

    International Nuclear Information System (INIS)

    Harada, Hiroo; Ishiguro, Misako

    1985-01-01

    Three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code is expected to be run at a high speed by using recent vector supercomputers, when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. Especially, calculation algorithms suited for vectorization of the inner-outer iterative calculations which spend most of the computing time are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner-iterations given as input data are also investigated since the computing time depends on these values. (author)
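    The odd-even ordering mentioned above can be illustrated on a model problem: colouring the mesh like a checkerboard makes every update within one colour independent of the others, so each half-sweep becomes a single vector operation. Below is a hedged NumPy sketch of red-black SOR for the 2-D Poisson equation, not the CITATION diffusion solver itself; the test problem and all names are ours.

```python
import numpy as np

def sor_redblack(f, h, omega=1.8, iters=500):
    """SOR for -laplace(u) = f on a square grid with zero Dirichlet
    boundaries.  Points of one checkerboard colour have no mutual
    dependence, so each colour updates in one vectorized sweep."""
    u = np.zeros_like(f)
    n, m = f.shape
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    interior = (ii > 0) & (ii < n - 1) & (jj > 0) & (jj < m - 1)
    masks = [interior & ((ii + jj) % 2 == c) for c in (0, 1)]
    for _ in range(iters):
        for mask in masks:            # red half-sweep, then black
            # Gauss-Seidel candidate value at every grid point; the wrap
            # from np.roll only touches boundary rows, which stay fixed.
            gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                         + h * h * f)
            u[mask] += omega * (gs[mask] - u[mask])
    return u
```

    As in the abstract, the relaxation factor matters: omega near the optimal 2/(1+sin(pi*h)) gives far faster convergence than plain Gauss-Seidel (omega = 1).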

  15. Transversals of Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    Vector fields in the complex plane are defined by assigning the vector determined by the value P(z) to each point z in the complex plane, where P is a polynomial of one complex variable. We consider special families of so-called rotated vector fields that are determined by a polynomial multiplied...... by rotational constants. Transversals are a certain class of curves for such a family of vector fields that represent the bifurcation states for this family of vector fields. More specifically, transversals are curves that coincide with a homoclinic separatrix for some rotation of the vector field. Given...... a concrete polynomial, it seems to take quite a bit of work to prove that it is generic, i.e. structurally stable. This has been done for a special class of degree d polynomial vector fields having simple equilibrium points at the d roots of unity, d odd. In proving that such vector fields are generic...

  16. Multivariate fractional Poisson processes and compound sums

    OpenAIRE

    Beghin, Luisa; Macci, Claudio

    2015-01-01

    In this paper we present multivariate space-time fractional Poisson processes by considering common random time-changes of a (finite-dimensional) vector of independent classical (non-fractional) Poisson processes. In some cases we also consider compound processes. We obtain some equations in terms of suitable fractional derivatives and fractional difference operators, which provide the extension of known equations for the univariate processes.

  17. Screening vector field modifications of general relativity

    International Nuclear Information System (INIS)

    Beltrán Jiménez, Jose; Delvas Fróes, André Luís; Mota, David F.

    2013-01-01

    A screening mechanism for conformal vector-tensor modifications of general relativity is proposed. The conformal factor depends on the norm of the vector field and makes the field vanish in high-density regions, while driving it to a non-null value in low-density environments. This process occurs due to a spontaneous symmetry breaking mechanism and gives rise both to the screening of fifth forces and to Lorentz violations. The cosmological and local constraints are also computed.

  18. Competing Values in Software Process Improvement

    DEFF Research Database (Denmark)

    Müller, Sune Dueholm; Nielsen, Peter Axel

    2013-01-01

    Purpose The purpose of the article is to investigate the impact of organizational culture on software process improvement (SPI). Is cultural congruence between an organization and an adopted process model required? How can the level of congruence between an organizational culture and the values...... and assumptions underlying an adopted process model be assessed? How can cultural incongruence be managed to facilitate success of software process improvement? Design/methodology/approach The competing values framework and its associated assessment instrument are used in a case study to establish......-step process, SPI managers establish and compare culture profiles and decide how to address identified problems. To that end the text analysis technique is offered as a web service that allows for analysis of all text-based process models and standards, and of internal process documentation. Originality...

  19. Chemical and environmental vector control as a contribution to the elimination of visceral leishmaniasis on the Indian subcontinent: cluster randomized controlled trials in Bangladesh, India and Nepal

    Directory of Open Access Journals (Sweden)

    Das Pradeep

    2009-10-01

    Full Text Available Abstract Background Bangladesh, India and Nepal are working towards the elimination of visceral leishmaniasis (VL) by 2015. In 2005 the World Health Organization/Training in Tropical Diseases launched an implementation research programme to support integrated vector management for the elimination of VL from Bangladesh, India and Nepal. The programme is conducted in different phases, from proof-of-concept to scaling up intervention. This study was designed to evaluate the efficacy of three different interventions for VL vector management: indoor residual spraying (IRS); long-lasting insecticide treated nets (LLIN); and environmental modification (EVM) through plastering of walls with lime or mud. Methods Using a cluster randomized controlled trial we compared three vector control interventions with a control arm in 96 clusters (hamlets or neighbourhoods) in the 4 study sites: Bangladesh (one), India (one) and Nepal (two). In each site four villages with high reported VL incidence were included. In each village six clusters, and in each cluster five households, were randomly selected for sand fly collection on two consecutive nights. Control and intervention clusters were matched on average pre-intervention vector densities. In each site six clusters were randomly assigned to each of the following interventions: indoor residual spraying (IRS); long-lasting insecticide treated nets (LLIN); environmental management (EVM); or control. All the houses (50-100) in each intervention cluster underwent the intervention measures. A reduction of intra-domestic sand fly densities in the study households, measured by overnight US Centers for Disease Control and Prevention light trap captures (that is, the number of sand flies per trap per night), was the main outcome measure.
Results IRS, and to a lesser extent EVM and LLINs, significantly reduced sand fly densities for at least 5 months in the study households irrespective of type of walls or whether or

  20. Perturbation Solutions for Random Linear Structural Systems subject to Random Excitation using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Köyluoglu, H.U.; Nielsen, Søren R.K.; Cakmak, A.S.

    1994-01-01

    The paper deals with the first and second order statistical moments of the response of linear systems with random parameters subject to random excitation modelled as white-noise multiplied by an envelope function with random parameters. The method of analysis is basically a second order perturbation method using stochastic differential equations. The joint statistical moments entering the perturbation solution are determined by considering an augmented dynamic system with state variables made up of the displacement and velocity vector and their first and second derivatives with respect to the random parameters of the problem. Equations for the partial derivatives are obtained from the partial differentiation of the equations of motion. The zero time-lag joint statistical moment equations for the augmented state vector are derived from the Itô differential formula. General formulation is given...

  1. Central limit theorem for the Banach-valued weakly dependent random variables

    International Nuclear Information System (INIS)

    Dmitrovskij, V.A.; Ermakov, S.V.; Ostrovskij, E.I.

    1983-01-01

    The central limit theorem (CLT) for Banach-valued weakly dependent random variables is proved. In proving the CLT, convergence of the finite-dimensional (i.e. cylindrical) distributions is established. Weak compactness of the family of measures generated by a certain sequence is confirmed. The continuity of the limiting field is checked.

  2. Engineering BioBrick vectors from BioBrick parts

    Directory of Open Access Journals (Sweden)

    Knight Thomas F

    2008-04-01

    Full Text Available Abstract Background The underlying goal of synthetic biology is to make the process of engineering biological systems easier. Recent work has focused on defining and developing standard biological parts. The technical standard that has gained the most traction in the synthetic biology community is the BioBrick standard for physical composition of genetic parts. Parts that conform to the BioBrick assembly standard are BioBrick standard biological parts. To date, over 2,000 BioBrick parts have been contributed to, and are available from, the Registry of Standard Biological Parts. Results Here we extended the same advantages of BioBrick standard biological parts to the plasmid-based vectors that are used to provide and propagate BioBrick parts. We developed a process for engineering BioBrick vectors from BioBrick parts. We designed a new set of BioBrick parts that encode many useful vector functions. We combined the new parts to make a BioBrick base vector that facilitates BioBrick vector construction. We demonstrated the utility of the process by constructing seven new BioBrick vectors. We also successfully used the resulting vectors to assemble and propagate other BioBrick standard biological parts. Conclusion We extended the principles of part reuse and standardization to BioBrick vectors. As a result, myriad new BioBrick vectors can be readily produced from all existing and newly designed BioBrick parts. We invite the synthetic biology community to (1) use the process to make and share new BioBrick vectors; (2) expand the current collection of BioBrick vector parts; and (3) characterize and improve the available collection of BioBrick vector parts.

  3. Viral Hybrid Vectors for Somatic Integration - Are They the Better Solution?

    Directory of Open Access Journals (Sweden)

    Anja Ehrhardt

    2009-12-01

    Full Text Available The turbulent history of clinical trials in viral gene therapy has taught us important lessons about vector design and safety issues. Much effort was spent on analyzing genotoxicity after somatic integration of therapeutic DNA into the host genome. Based on these findings major improvements in vector design including the development of viral hybrid vectors for somatic integration have been achieved. This review provides a state-of-the-art overview of available hybrid vectors utilizing viruses for high transduction efficiencies in concert with various integration machineries for random and targeted integration patterns. It discusses advantages but also limitations of each vector system.

  4. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    Science.gov (United States)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
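    The coupling-from-the-past construction whose running time the paper studies can be illustrated on a toy monotone chain, far simpler than the random-cluster heat-bath process but with the same sandwiching logic: evolve the top and bottom states with shared randomness from further and further in the past, doubling the horizon until they coalesce at time 0. The sketch below (a reflecting nearest-neighbour walk on {0, ..., k}; all names ours) is a textbook Propp-Wilson example, not the paper's algorithm.

```python
import random

def cftp_sample(k, p=0.5, seed=0):
    """Propp-Wilson coupling from the past for a reflecting walk on
    {0, ..., k}: step up with probability p, down otherwise.  The update
    is monotone, so coupling the two extreme states suffices."""
    rng = random.Random(seed)
    draws = []           # draws[t] drives the step taken at time -(t+1)
    T = 1
    while True:
        while len(draws) < T:
            draws.append(rng.random())   # fresh randomness, further back
        lo, hi = 0, k                    # bottom and top states at time -T
        for t in range(T - 1, -1, -1):   # evolve from -T up to time 0
            up = draws[t] < p
            lo = min(lo + 1, k) if up else max(lo - 1, 0)
            hi = min(hi + 1, k) if up else max(hi - 1, 0)
        if lo == hi:
            return lo    # coalesced: an exact draw from stationarity
        T *= 2           # not coalesced: restart from twice as far back
```

    Crucially, the randomness at each past time is fixed and reused across the doublings; the smallest horizon at which the chains coalesce is the random coupling time whose distributional behaviour the paper analyses for the random-cluster model.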

  5. Multiscale vector fields for image pattern recognition

    Science.gov (United States)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
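    The vector-sum rule can be written down directly. One detail is assumed here that the summary does not spell out: orientation, unlike direction, is only defined modulo 180 degrees, so the filter angles are doubled before summing (a standard convention) so that responses 180 degrees apart reinforce rather than cancel. The names below are ours.

```python
import numpy as np

def dominant_orientation(angles_deg, responses):
    """Vector sum over an oriented filter bank: each filter contributes a
    vector with length = response strength and angle = 2x its preferred
    orientation (angle doubling handles the mod-180 ambiguity).
    Returns (orientation in degrees, strength of the preference)."""
    theta = np.deg2rad(2.0 * np.asarray(angles_deg, dtype=float))
    resultant = np.sum(np.asarray(responses, dtype=float) * np.exp(1j * theta))
    orientation = (np.rad2deg(np.angle(resultant)) / 2.0) % 180.0
    return orientation, np.abs(resultant)
```

    This reproduces the limitation noted in the abstract: a patch exciting all orientations equally yields a resultant of near-zero length, signalling no orientation preference rather than an erroneous average angle.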

  6. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered one of the key methods for separating ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.

  7. Probability, random processes, and ergodic properties

    CERN Document Server

    Gray, Robert M

    1988-01-01

    This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...

  8. Learning with Uncertainty - Gaussian Processes and Relevance Vector Machines

    DEFF Research Database (Denmark)

    Candela, Joaquin Quinonero

    2004-01-01

    This thesis is concerned with Gaussian Processes (GPs) and Relevance Vector Machines (RVMs), both of which are particular instances of probabilistic linear models. We look at both models from a Bayesian perspective, and are forced to adopt an approximate Bayesian treatment to learning for two...... reasons. The first reason is the analytical intractability of the full Bayesian treatment and the fact that we in principle do not want to resort to sampling methods. The second reason, which incidentally justifies our not wanting to sample, is that we are interested in computationally efficient models...... approaches that ignore the accumulated uncertainty are way overconfident. Finally we explore a much harder problem: that of training with uncertain inputs. We explore approximating the full Bayesian treatment, which implies an analytically intractable integral. We propose two preliminary approaches...

  9. Application of Bred Vectors To Data Assimilation

    Science.gov (United States)

    Corazza, M.; Kalnay, E.; Patil, D. J.

    We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50xk matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of the corresponding singular vector v(i). We define the bred vector dimension as BVDIM = {Sum[s(i)]}^2 / Sum[s(i)^2]. For example, if 4 out of the 5 vectors lie along v(1) and one lies along v(2), the singular values are (sqrt(4), 1, 0, 0, 0) and the BV-dimension would be BVDIM = (2+1)^2/(4+1) = 1.8.
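    Given an SVD, the BV-dimension BVDIM = (Sum_i s(i))^2 / Sum_i s(i)^2 is a one-liner. A NumPy sketch (ours, not the authors' code) applied to one hypothetical 50 x k matrix of local bred vectors:

```python
import numpy as np

def bv_dimension(M):
    """Bred-vector dimension of the k local bred vectors (columns of M):
    BVDIM = (sum_i s_i)**2 / sum_i s_i**2, s_i = singular values of M.
    Equals k for k orthonormal columns and 1 when all columns coincide."""
    s = np.linalg.svd(M, compute_uv=False)
    return s.sum() ** 2 / (s ** 2).sum()
```

    With four of five unit columns along one axis and one along another, the singular values are (2, 1, 0, 0, 0), giving (2+1)^2/(4+1) = 1.8, i.e. the local subspace is effectively less than two-dimensional even though five vectors were supplied.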

  10. Gateway-assisted vector construction to facilitate expression of foreign proteins in the chloroplast of single celled algae.

    Directory of Open Access Journals (Sweden)

    Melanie Oey

    Full Text Available With a rising world population, demand for food, energy and high value products will increase. Renewable production systems, including photosynthetic microalgal biotechnologies, can produce biomass for foods, fuels and chemical feedstocks and, in parallel, allow the production of high value protein products, including recombinant proteins. Such high value recombinant proteins offer important economic benefits during the startup of industrial scale algal biomass and biofuel production systems, but the limited markets for individual recombinant proteins will require a high throughput pipeline for cloning and expression in microalgae. Such a pipeline is currently lacking, since genetic engineering of microalgae remains complex and laborious. We have introduced the recombination-based Gateway® system into the construction process of chloroplast transformation vectors for microalgae. This simplifies vector construction and allows easy, fast and flexible vector design for high efficiency protein production in microalgae, a key step in developing such expression pipelines.

  11. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.

  12. Probability on graphs random processes on graphs and lattices

    CERN Document Server

    Grimmett, Geoffrey

    2018-01-01

    This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. This new edition features accounts of major recent progress, including the exact value of the connective constant of the hexagonal lattice, and the critical point of the random-cluster model on the square lattice. The choice of topics is strongly motivated by modern applications, and focuses on areas that merit further research. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.

  13. Vector Monte Carlo simulations on atmospheric scattering of polarization qubits.

    Science.gov (United States)

    Li, Ming; Lu, Pengfei; Yu, Zhongyuan; Yan, Lei; Chen, Zhihui; Yang, Chuanghua; Luo, Xiao

    2013-03-01

    In this paper, a vector Monte Carlo (MC) method is proposed to study the influence of atmospheric scattering on polarization qubits for satellite-based quantum communication. The vector MC method utilizes a transmittance method to solve the photon free path for an inhomogeneous atmosphere and random number sampling to determine whether the type of scattering is aerosol scattering or molecule scattering. Simulations are performed for downlink and uplink. The degrees and the rotations of polarization are qualitatively and quantitatively obtained, which agree well with the measured results in the previous experiments. The results show that polarization qubits are well preserved in the downlink and uplink, while the number of received single photons is less than half of the total transmitted single photons for both links. Moreover, our vector MC method can be applied for the scattering of polarized light in other inhomogeneous random media.
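
    The free-path sampling in the transmittance method can be sketched as follows: draw a target optical depth -ln(U) and accumulate extinction along the path until it is reached, then pick aerosol or molecular scattering from the local ratio of coefficients. The exponential extinction profiles and step size below are illustrative assumptions, not the paper's atmospheric model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical extinction profiles (1/km) vs. altitude z (km):
def sigma_mol(z): return 0.012 * np.exp(-z / 8.0)   # molecular
def sigma_aer(z): return 0.025 * np.exp(-z / 1.2)   # aerosol

def free_path(z0, dz=1e-3, zmax=50.0):
    """Sample a photon free path in an inhomogeneous atmosphere:
    integrate optical depth upward until it reaches -ln(U)."""
    tau_target = -np.log(rng.random())
    tau, z = 0.0, z0
    while z < zmax:
        tau += (sigma_mol(z) + sigma_aer(z)) * dz
        z += dz
        if tau >= tau_target:
            # scattering type chosen by the local coefficient ratio
            p_aer = sigma_aer(z) / (sigma_mol(z) + sigma_aer(z))
            kind = "aerosol" if rng.random() < p_aer else "molecular"
            return z - z0, kind
    return zmax - z0, "escaped"  # photon leaves the atmosphere
```

    Repeating this step, together with a scattering phase matrix acting on the Stokes vector at each interaction, yields the polarization statistics reported in the abstract.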

  14. Integrating support vector machines and random forests to classify crops in time series of Worldview-2 images

    Science.gov (United States)

    Zafari, A.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.

    2017-10-01

    Crop maps are essential inputs for the agricultural planning done at various governmental and agribusiness agencies. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) in an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth containing five common crop classes (cotton, maize, millet, peanut, and sorghum) was collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF: the overall accuracies based on the spectral bands only are 83%, 82%, and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82%, 81%, and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
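
    One common way to build a random forest kernel is the forest proximity: the fraction of trees in which two samples land in the same leaf, used as a precomputed kernel for the SVM. A sketch with scikit-learn on synthetic data; the paper's exact RFK construction and the WorldView-2 features may differ:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the multispectral band features
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

def rf_kernel(A, B, forest):
    """Proximity kernel: fraction of trees where samples share a leaf."""
    la, lb = forest.apply(A), forest.apply(B)   # leaf indices, (n, n_trees)
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

svm = SVC(kernel="precomputed").fit(rf_kernel(Xtr, Xtr, rf), ytr)
acc = svm.score(rf_kernel(Xte, Xtr, rf), yte)   # kernel(test, train)
```

    Note the test-time kernel is evaluated between test and training samples, as required by `kernel="precomputed"`.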

  15. Generalized Selection Weighted Vector Filters

    Directory of Open Access Journals (Sweden)

    Rastislav Lukac

    2004-09-01

    Full Text Available This paper introduces a class of nonlinear multichannel filters capable of removing impulsive noise in color images. The proposed generalized selection weighted vector filter class constitutes a powerful filtering framework for multichannel signal processing. Previously defined multichannel filters such as the vector median filter, basic vector directional filter, directional-distance filter, weighted vector median filters, and weighted vector directional filters are treated from a global viewpoint using the proposed framework. Robust order-statistic concepts and an increased degree of freedom in filter design make the proposed method attractive for a variety of applications. The introduced multichannel sigmoidal adaptation of the filter parameters, and its modifications, allow the filter to accommodate varying signal and noise statistics. Simulation studies reported in this paper indicate that the proposed filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise. This paper is an extended version of the paper by Lukac et al. presented at the 2003 IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03) in Grado, Italy.
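
    The vector median filter mentioned above is the basic building block of this family: the filter output is the window sample minimizing the aggregate distance to all other samples, which is what makes it robust to impulsive color noise. A minimal sketch for one RGB window:

```python
import numpy as np

def vector_median(window):
    """Vector median of a set of color vectors: the sample whose summed
    L2 distance to all other samples in the window is smallest."""
    X = np.asarray(window, dtype=float)                      # (n, 3) RGB
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2).sum(axis=1)
    return X[np.argmin(d)]

win = [[250, 10, 10],            # impulsive outlier
       [20, 20, 20], [25, 18, 22], [22, 25, 19], [18, 21, 24]]
out = vector_median(win)         # a member of the cluster, not the outlier
```

    The weighted variants in the abstract generalize this by multiplying each pairwise distance with a per-sample weight before the argmin.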

  16. Convolution of Distribution-Valued Functions. Applications.

    OpenAIRE

    BARGETZ, CHRISTIAN

    2011-01-01

    In this article we examine products and convolutions of vector-valued functions. For nuclear normal spaces of distributions Proposition 25 in [31,p. 120] yields a vector-valued product or convolution if there is a continuous product or convolution mapping in the range of the vector-valued functions. For specific spaces, we generalize this result to hypocontinuous bilinear maps at the expense of generality with respect to the function space. We consider holomorphic, meromorphic and differentia...

  17. Extreme-value limit of the convolution of exponential and multivariate normal distributions: Link to the Hüsler–Reiß distribution

    KAUST Repository

    Krupskii, Pavel

    2017-11-02

    The multivariate Hüsler–Reiß copula is obtained as a direct extreme-value limit from the convolution of a multivariate normal random vector and an exponential random variable multiplied by a vector of constants. It is shown how the set of Hüsler–Reiß parameters can be mapped to the parameters of this convolution model. Assuming there are no singular components in the Hüsler–Reiß copula, the convolution model leads to exact and approximate simulation methods. An application of simulation is to check if the Hüsler–Reiß copula with different parsimonious dependence structures provides adequate fit to some data consisting of multivariate extremes.
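
    The convolution model described above is straightforward to simulate: draw a multivariate normal vector, add an independent exponential variable multiplied by a vector of constants, and take componentwise block maxima. The covariance matrix and constants below are illustrative choices, not fitted Hüsler–Reiß parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])   # illustrative covariance
c = np.array([1.0, 0.8, 1.2])        # illustrative constants

n = 100_000
Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
R = rng.exponential(1.0, size=n)
X = Z + R[:, None] * c               # the convolution model of the abstract

# Componentwise block maxima approximate the extreme-value (HR) limit
blocks = X.reshape(100, 1000, d).max(axis=1)
```

    The paper's mapping between (Sigma, c) and the Hüsler–Reiß dependence parameters is what makes this an exact/approximate simulator for the copula itself.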

  18. Extreme-value limit of the convolution of exponential and multivariate normal distributions: Link to the Hüsler–Reiß distribution

    KAUST Repository

    Krupskii, Pavel; Joe, Harry; Lee, David; Genton, Marc G.

    2017-01-01

    The multivariate Hüsler–Reiß copula is obtained as a direct extreme-value limit from the convolution of a multivariate normal random vector and an exponential random variable multiplied by a vector of constants. It is shown how the set of Hüsler–Reiß parameters can be mapped to the parameters of this convolution model. Assuming there are no singular components in the Hüsler–Reiß copula, the convolution model leads to exact and approximate simulation methods. An application of simulation is to check if the Hüsler–Reiß copula with different parsimonious dependence structures provides adequate fit to some data consisting of multivariate extremes.

  19. Duality in vector optimization

    CERN Document Server

    Bot, Radu Ioan

    2009-01-01

    This book presents fundamentals and comprehensive results regarding duality for scalar, vector and set-valued optimization problems in a general setting. After a preliminary chapter dedicated to convex analysis and minimality notions of sets with respect to partial orderings induced by convex cones, a chapter on scalar conjugate duality follows. Then investigations on vector duality based on scalar conjugacy are made. Weak, strong and converse duality statements are delivered and connections to classical results from the literature are emphasized. One chapter is exclusively devoted to the s...

  20. Multi-perspective views of students’ difficulties with one-dimensional vector and two-dimensional vector

    Science.gov (United States)

    Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno

    2017-01-01

    Researchers of students’ conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without any identification of the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and high school physics courses. In this study, we intended to explore students’ understanding of one-dimensional and two-dimensional vectors from multiple perspectives. We explore students’ understanding through a test perspective and an interview perspective. Our research study adopted a mixed-methodology design. The participants were sixty third-semester students of the physics education department. The data were collected by tests and interviews. We divided students’ understanding of one-dimensional and two-dimensional vectors into two categories, namely vector skills for the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of students provided correct answers for vector skills for the addition of one-dimensional and two-dimensional vectors, and only 27% of students provided correct answers for the relation between vector skills and conceptual understanding.

  1. Renewal theory for perturbed random walks and similar processes

    CERN Document Server

    Iksanov, Alexander

    2016-01-01

    This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...

  2. On the vector meson dominance hypothesis and the R_A = σ_L^A/σ_T^A value in electron-nucleus deep inelastic scattering

    International Nuclear Information System (INIS)

    Peng Hongan; Liu Lianshou

    1986-01-01

    It is argued that the longitudinal part of a space-like photon in its Breit frame is unable to transform into a vector meson. Starting from this argument and adding a small amount of diquark component to the nucleon structure functions in nuclei, the A dependence of the R_A = σ_L^A/σ_T^A value observed in electron-nucleus DIS by the SLAC Group is explained

  3. Melnikov processes and chaos in randomly perturbed dynamical systems

    Science.gov (United States)

    Yagasaki, Kazuyuki

    2018-07-01

    We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under a nondegeneracy condition, no matter how small the random forcing terms are. This result contrasts sharply with the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected parametrically to the Ornstein–Uhlenbeck process.

  4. Self-consistent descriptions of vector mesons in hot matter reexamined

    International Nuclear Information System (INIS)

    Riek, Felix; Knoll, Joern

    2010-01-01

    Technical concepts are presented that improve the self-consistent treatment of vector mesons in a hot and dense medium. First applications concern an interacting gas of pions and ρ mesons. As an extension of earlier studies, we thereby include random-phase-approximation-type vertex corrections and further use dispersion relations to calculate the real part of the vector-meson self-energy. An improved projection method preserves the four-transversality of the vector-meson polarization tensor throughout the self-consistent calculations, thereby keeping the scheme free of kinematical singularities.

  5. A Classification Detection Algorithm Based on Joint Entropy Vector against Application-Layer DDoS Attack

    Directory of Open Access Journals (Sweden)

    Yuntao Zhao

    2018-01-01

    Full Text Available The application-layer distributed denial of service (AL-DDoS) attack poses a great threat to cyberspace security. Attack detection is an important part of security protection, providing effective support for the defense system through rapid and accurate identification of attacks. According to the attacker’s choice of URL on the Web service, AL-DDoS attacks are divided into three categories: random URL attacks, fixed URL attacks, and traverse URL attacks. In order to identify these attacks, a mapping matrix of the joint entropy vector is constructed. By defining and computing the values of EUPI and jEIPU, a visual coordinate discrimination diagram of the entropy vector is proposed, which also reduces the data dimension from N to two. Based on boundary discrimination and the region in which the entropy vectors fall, the class of AL-DDoS attack can be distinguished. Through study of the training data set and classification, the results show that the novel algorithm can effectively distinguish web server DDoS attacks from normal burst traffic.
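
    The exact definitions of EUPI and jEIPU are given in the paper; the general idea of mapping traffic to a two-dimensional entropy coordinate can be sketched with plain Shannon entropies of the URL distribution and of the joint (IP, URL) distribution. The request log below is hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of items."""
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in Counter(items).values())

# Hypothetical request log: (source IP, requested URL) pairs
requests = [("10.0.0.1", "/a"), ("10.0.0.2", "/a"), ("10.0.0.3", "/a"),
            ("10.0.0.4", "/b"), ("10.0.0.5", "/c"), ("10.0.0.1", "/a")]

h_url = shannon_entropy([u for _, u in requests])  # URL entropy
h_pair = shannon_entropy(requests)                 # joint (IP, URL) entropy
point = (h_url, h_pair)  # 2-D coordinate for the discrimination diagram
```

    A fixed-URL flood collapses the URL entropy toward zero while a random-URL flood inflates it, so attack classes separate into different regions of the (h_url, h_pair) plane, which is the intuition behind the paper's diagram.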

  6. Polarization speckles and generalized Stokes vector wave: a review [invited

    DEFF Research Database (Denmark)

    Takeda, Mitsuo; Wang, Wei; Hanson, Steen Grüner

    2010-01-01

    We review some of the statistical properties of polarization-related speckle phenomena, with an introduction of the less known concept of polarization speckles and their spatial degree of polarization. As a useful means to characterize two-point vector field correlations, we review the generalized Stokes parameters proposed by Korotkova and Wolf, and introduce their time-domain representation to describe the space-time evolution of the correlation between random electric vector fields at two different space-time points. This time-domain generalized Stokes vector, with components similar to those of the beam coherence polarization matrix proposed by Gori, is shown to obey the wave equation in exact analogy to a coherence function of scalar fields. Because of this wave nature, the time-domain generalized Stokes vector is referred to as a generalized Stokes vector wave in this paper.

  7. Estimating normal mixture parameters from the distribution of a reduced feature vector

    Science.gov (United States)

    Guseman, L. F.; Peters, B. C., Jr.; Swasdee, M.

    1976-01-01

    A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.

  8. Value Creation by Process-Oriented Project Management

    NARCIS (Netherlands)

    Geijtenbeek, W.; Eekelen, van A.L.M.; Kleine, A.J.; Favie, R.; Maas, G.J.; Milford, R.

    2007-01-01

    The start of a design process based on value creation requires a different approach and new models. The aim of this study is to provide insight into how a design process based on value creation can be initiated. The intended result of the study is the design of a collaboration model that can

  9. Money creation process in a random redistribution model

    Science.gov (United States)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
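
    A minimal simulation in the spirit of the random exchange model with debt (all parameters below are illustrative): at each step a randomly chosen payer transfers one unit to a randomly chosen receiver, borrowing if necessary up to a debt limit, so the money created equals the total outstanding debt:

```python
import numpy as np

rng = np.random.default_rng(42)
N, steps, debt_limit = 1000, 50_000, 5
money = np.ones(N)                     # each agent starts with one unit

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if money[i] > -debt_limit:         # agent i may still borrow
        money[i] -= 1                  # i pays one unit to j
        money[j] += 1

total_debt = -money[money < 0].sum()   # money created via borrowing
```

    Transfers conserve the signed total, so the aggregate of positive balances equals the initial endowment plus the total debt, matching the abstract's observation that borrowing by agents with neither money nor debt is the source of money creation.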

  10. Application of support vector machine model for enhancing the diagnostic value of tumor markers in gastric cancer

    International Nuclear Information System (INIS)

    Wang Hui; Huang Gang

    2010-01-01

    Objective: To evaluate the early diagnostic value of tumor markers for gastric cancer using a support vector machine (SVM) model. Methods: Subjects consisted of 262 cases with gastric cancer, 156 cases with benign gastric diseases and 149 healthy controls. From these subjects, five tumor markers, carcinoembryonic antigen (CEA), carbohydrate antigen (CA) 125, CA19-9, alpha-fetoprotein (AFP) and CA50, were assayed and collected to make the datasets. To fit the SVM model to the diagnostic classification task, a radial basis function kernel was adopted, optimized and validated by grid search and cross validation. For comparison, combination tests of the five markers, logistic regression, and a decision tree were also used. Results: For gastric cancer, the diagnostic accuracies of the combination tests, logistic regression, decision tree and SVM model were 46.2%, 64.5%, 63.9% and 95.1%, respectively. The SVM model significantly improved the diagnostic value compared with the other three methods. Conclusion: The SVM model is of high value in enhancing the diagnostic utility of tumor markers for gastric cancer. (authors)

  11. Insights from random vibration analyses using multiple earthquake components

    International Nuclear Information System (INIS)

    DebChaudhury, A.; Gasparini, D.A.

    1981-01-01

    The behavior of multi-degree-of-freedom systems subjected to multiple earthquake components is studied by the use of random vibration dynamic analyses. A linear system which has been decoupled into modes and has both translational and rotational degrees of freedom is analyzed. The seismic excitation is modelled as a correlated or uncorrelated, vector-valued, non-stationary random process having a Kanai-Tajimi type of frequency content. Non-stationarity is achieved by using a piecewise linear strength function; therefore, almost any type of evolution and decay of an earthquake may be modelled. Also, in general, the components of the excitation have different frequency contents and strength functions, i.e. intensities and durations, and the correlations between components can vary with time. A state-space, modal, random vibration approach is used. Exact analytical expressions for both the state transition matrix and the evolutionary modal covariance matrix are utilized to compute time histories of modal RMS responses. Desired responses are then computed by modal superposition. Specifically, relative displacement, relative velocity and absolute acceleration responses are studied. An important advantage of such analyses is that RMS responses vary smoothly in time; therefore, large time intervals may be used to generate response time histories. The modal superposition is exact; that is, all cross-correlation terms between modal responses are included. (orig./RW)

  12. Statistics of light deflection in a random two-phase medium

    International Nuclear Information System (INIS)

    Sviridov, A P

    2007-01-01

    The statistics of the angles of light deflection during its propagation in a random two-phase medium with randomly oriented phase interfaces is considered within the framework of geometrical optics. The probabilities of finding a randomly walking photon in different phases of the inhomogeneous medium are calculated. Analytic expressions are obtained for the scattering phase function and the scattering phase matrix which relates the Stokes vector of the incident light beam with the Stokes vectors of deflected beams. (special issue devoted to multiple radiation scattering in random media)

  13. Competence-Based Approach in Value Chain Processes

    Science.gov (United States)

    Azevedo, Rodrigo Cambiaghi; D'Amours, Sophie; Rönnqvist, Mikael

    There is a gap between competence theory and value chain processes frameworks. While individually considered as core elements in contemporary management thinking, the integration of the two concepts is still lacking. We claim that this integration would allow for the development of more robust business models by structuring value chain activities around aspects such as capabilities and skills, as well as individual and organizational knowledge. In this context, the objective of this article is to reduce this gap and consequently open a field for further improvements of value chain processes frameworks.

  14. Vector Boson Scattering at High Mass

    CERN Document Server

    Sherwood, P

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate WW scalar and vector resonances, WZ vector resonances and a ZZ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons.

  15. Analyzing the Genotoxicity of Retroviral Vectors in Hematopoietic Cell Gene Therapy

    Directory of Open Access Journals (Sweden)

    Luca Biasco

    2018-03-01

    Full Text Available Retroviral vectors, including those derived from gammaretroviruses and lentiviruses, have found their way into the clinical arena and demonstrated remarkable efficacy for the treatment of immunodeficiencies, leukodystrophies, and globinopathies. Despite these successes, gene therapy unfortunately also has had to face severe adverse events in the form of leukemias and myelodysplastic syndromes, related to the semi-random vector integration into the host cell genome that caused deregulation of neighboring proto-oncogenes. Although improvements in vector design clearly lowered the risk of this insertional mutagenesis, analysis of potential genotoxicity and the consequences of vector integration remain important parameters for basic and translational research and most importantly for the clinic. Here, we review current assays to analyze biodistribution and genotoxicity in the pre-clinical setting and describe tools to monitor vector integration sites in vector-treated patients as a biosafety readout.

  16. Vector meson dominance and pointlike coupling of the photon in soft and hard processes

    International Nuclear Information System (INIS)

    Paul, E.

    1990-05-01

    Recent experimental results on photoproduction of hadrons probe the nature of the interacting photon over a wide kinematical range from soft to hard processes. Single inclusive spectra and energy flows of the final state charged particles are well described by assuming that photon production data are built up by an incoherent superposition of a soft Vector-Meson-Dominance component and a hard pointlike photon component. (orig.)

  17. Ability of herpes simplex virus vectors to boost immune responses to DNA vectors and to protect against challenge by simian immunodeficiency virus

    International Nuclear Information System (INIS)

    Kaur, Amitinder; Sanford, Hannah B.; Garry, Deirdre; Lang, Sabine; Klumpp, Sherry A.; Watanabe, Daisuke; Bronson, Roderick T.; Lifson, Jeffrey D.; Rosati, Margherita; Pavlakis, George N.; Felber, Barbara K.; Knipe, David M.; Desrosiers, Ronald C.

    2007-01-01

    The immunogenicity and protective capacity of replication-defective herpes simplex virus (HSV) vector-based vaccines were examined in rhesus macaques. Three macaques were inoculated with recombinant HSV vectors expressing Gag, Env, and a Tat-Rev-Nef fusion protein of simian immunodeficiency virus (SIV). Three other macaques were primed with recombinant DNA vectors expressing Gag, Env, and a Pol-Tat-Nef-Vif fusion protein prior to boosting with the HSV vectors. Robust anti-Gag and anti-Env cellular responses were detected in all six macaques. Following intravenous challenge with wild-type, cloned SIV239, peak and 12-week plasma viremia levels were significantly lower in vaccinated compared to control macaques. Plasma SIV RNA in vaccinated macaques was inversely correlated with anti-Rev ELISPOT responses on the day of challenge (P < 0.05), anti-Tat ELISPOT responses at 2 weeks post challenge (P < 0.05) and peak neutralizing antibody titers pre-challenge (P = 0.06). These findings support continued study of recombinant herpesviruses as a vaccine approach for AIDS.

  18. The charge form factor of the neutron from ²H⃗(e⃗,e′n)p

    CERN Document Server

    Passchier, I; Szczerba, D; Alarcon, R; Bauer, T S; Boersma, D J; Van der Brand, J F J; Bulten, H J; Ferro-Luzzi, M; Higinbotham, D W; Jager, C W D; Klous, S; Kolster, H; Lang, J; Nikolenko, D M; Nooren, G J; Norum, B E; Poolman, H R; Rachek, Igor A; Simani, M C; Six, E; Vries, H D; Wang, K; Zhou, Z L

    2000-01-01

    We report on the first measurement of spin-correlation parameters in quasifree electron scattering from vector-polarized deuterium. Polarized electrons were injected into an electron storage ring at a beam energy of 720 MeV. A Siberian snake was employed to preserve longitudinal polarization at the interaction point. Vector-polarized deuterium was produced by an atomic beam source and injected into an open-ended cylindrical cell, internal to the electron storage ring. The spin correlation parameter A^V_ed was measured for the reaction ²H⃗(e⃗,e′n)p at a four-momentum transfer squared of 0.21 (GeV/c)², from which a value for the charge form factor of the neutron was extracted.

  19. Effective axial-vector strength and β-decay systematics

    Science.gov (United States)

    Delion, D. S.; Suhonen, J.

    2014-09-01

    We use the weak axial-vector coupling strength g_A as a key parameter to reproduce simultaneously the available data for both the Gamow-Teller β⁻ and β⁺/EC decay rates in nine triplets of isobars with mass numbers A = 70, 78, 100, 104, 106, 110, 116, 128, 130. We use the proton-neutron quasiparticle random-phase approximation (pnQRPA) with a schematic dipole interaction containing particle-particle and particle-hole parts with mass-dependent strengths. Our analysis points to a strongly quenched effective value g_A ≈ 0.3, with a relative error of 28%. We then perform a systematic computation of 218 experimentally known β⁻ and β⁺/EC decays with quite remarkable success. The presently extracted value of g_A should be taken as an effective one, specific to a given nuclear theory framework. The present studies suggest that the effective g_A is suitable for the description of decay transitions to 1⁺ states at moderate excitation, below the Gamow-Teller giant resonance region.

  20. Lifetime value in business process

    Directory of Open Access Journals (Sweden)

    Martin Souček

    2011-01-01

    Full Text Available The paper focuses on lifetime value assessment and its implementation and application in business processes. Lifetime value is closely connected to customer relationship management. The paper presents the results of three consecutive studies devoted to customer relationship management: the first two, from 2008 and 2010, were quantitative; the 2009 study was qualitative. The respondents were representatives of particular companies, and data were collected via the ReLa system. We focus on individual attributes of a customer's lifetime value and relate them to the approaches of the authors mentioned in the introduction. Based on the qualitative research data, the paper examines individual customer lifetime value parameters. These parameters include: the cost of customer relationship acquisition and maintenance, the profit generated from a particular customer, customer awareness value, the level of preparedness to adopt new products, the value of references and the level of customer loyalty. For each of these parameters, the paper provides specific recommendations. Moreover, it is possible to learn about the nature of these parameter assessments in the Czech environment.

  1. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware, and their computational cost grows significantly as the number of endmembers increases. Here, building on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed based on the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, the vector component of each endmember spectrum orthogonal to the remaining endmembers. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundances are obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally expensive and hard to implement in hardware. It completes the orthogonalization through repeated vector operations, making it suitable for both parallel computation and hardware implementation. The soundness of the algorithm is established through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is shown to be the lowest of the three. Finally, experimental results on synthetic and real images provide further evidence of the method's effectiveness.
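    The projection-ratio idea described in this abstract can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' code: the function name `ovp_abundances` and the exact Gram-Schmidt arrangement are assumptions about how the described steps fit together.

```python
import numpy as np

def ovp_abundances(pixel, endmembers):
    """Unconstrained abundances via orthogonal vector projection.

    For each endmember, Gram-Schmidt builds the component of that
    endmember orthogonal to all the others; the abundance is the ratio
    of the pixel's and the endmember's projections onto that component."""
    M = np.asarray(endmembers, dtype=float)
    p = M.shape[0]
    a = np.zeros(p)
    for i in range(p):
        # Orthonormal basis of the remaining endmembers (Gram-Schmidt).
        basis = []
        for j in range(p):
            if j == i:
                continue
            v = M[j].copy()
            for b in basis:
                v -= (v @ b) * b
            norm = np.linalg.norm(v)
            if norm > 1e-12:
                basis.append(v / norm)
        # Component of endmember i orthogonal to all other endmembers.
        w = M[i].copy()
        for b in basis:
            w -= (w @ b) * b
        # Projection ratio: only vector operations, no matrix inversion.
        a[i] = (pixel @ w) / (M[i] @ w)
    return a
```

    For a pixel that is an exact linear mixture of linearly independent endmembers, these ratios recover the mixing coefficients exactly.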

  2. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images, and extends the Vector Quantization technique to the Address Vector Quantization method. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. The codeword in the codebook that best matches the input image vector is then selected, and compression is achieved by replacing the image vector with the index of that codeword; only the index is sent over the channel. The image is reconstructed by table lookup, with the label used simply as an address into a table of representative vectors. A codebook of representative vectors (codewords) is generated with an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of addresses is exploited, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems of Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme at roughly one-half to one-third of its bit rate. In chapter 5, a Dynamic Finite State VQ is developed that uses a probability transition matrix to select the best subcodebook for encoding the image. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing
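    The basic VQ encode/decode loop described in the abstract (nearest-codeword search, index transmission, table lookup) can be sketched as follows; the function names are ours, not from the thesis.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Return, for each image block, the index of the nearest codeword
    (minimum squared Euclidean distance); only these indices are sent."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruction is a pure table lookup: the received index is the
    address of the representative vector in the codebook."""
    return codebook[indices]
```

    Compression comes from sending log2(codebook size) bits per block instead of the block itself; quality depends entirely on how well the codebook was trained (e.g. by K-means or the generalized Lloyd algorithm).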

  3. The limit of small Rossby numbers for randomly forced quasi-geostrophic equation on $\\beta$-plane

    OpenAIRE

    Kuksin, Sergei; Maiocchi, Alberto

    2014-01-01

    We consider the 2d quasigeostrophic equation on the $\\beta$-plane for the stream function $\\psi$, with dissipation and a random force: $$ (*)\\qquad (-\\Delta +K)\\psi_t - \\rho J(\\psi, \\Delta\\psi) -\\beta\\psi_x= \\langle \\text{random force}\\rangle -\\kappa\\Delta^2\\psi +\\Delta\\psi, $$ where $\\psi=\\psi(t,x,y), \\ x\\in\\mathbb{R}/2\\pi L\\mathbb{Z}, \\ y\\in \\mathbb{R}/2\\pi \\mathbb{Z}$. For typical values of the horizontal period $L$ we prove that the law of the action-vector of a solution for $(*)$ (formed...

  4. Identification of species based on DNA barcode using k-mer feature vector and Random forest classifier.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R

    2016-11-05

    DNA barcoding is a molecular diagnostic method that allows automated and accurate identification of species based on a short, standardized fragment of DNA. To this end, this study develops a computational approach for identifying a species by comparing its barcode with the barcode sequences of known species in a reference library. Each barcode sequence was first mapped onto a numeric feature vector based on k-mer frequencies, and the Random forest methodology was then applied to the transformed dataset for species identification. On real and simulated datasets, the proposed approach outperformed similarity-based, tree-based, and diagnostic-based approaches, and was comparable with existing supervised-learning approaches in terms of species identification success rate. Based on the proposed approach, an online web interface, SPIDBAR, has also been developed and made freely available at http://cabgrid.res.in:8080/spidbar/ for species identification by taxonomists. Copyright © 2016 Elsevier B.V. All rights reserved.
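    The k-mer mapping step can be illustrated as below; a Random forest classifier would then be trained on these vectors. The function name and the frequency normalization are illustrative assumptions, not details taken from the paper.

```python
from itertools import product
import numpy as np

def kmer_vector(seq, k=3, alphabet="ACGT"):
    """Map a barcode sequence to a normalized k-mer frequency vector
    of dimension len(alphabet)**k."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:          # skip k-mers with ambiguous bases, e.g. 'N'
            v[index[km]] += 1
    total = v.sum()
    return v / total if total else v
```

    Every barcode, whatever its length, becomes a fixed-length vector, which is what allows a standard classifier such as Random forest to be applied.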

  5. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    International Nuclear Information System (INIS)

    Fyodorov, Yan V; Bouchaud, Jean-Philippe

    2008-01-01

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  6. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    Energy Technology Data Exchange (ETDEWEB)

    Fyodorov, Yan V [School of Mathematical Sciences, University of Nottingham, Nottingham NG72RD (United Kingdom); Bouchaud, Jean-Philippe [Science and Finance, Capital Fund Management 6-8 Bd Haussmann, 75009 Paris (France)

    2008-09-19

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  7. A Kalman Filter for SINS Self-Alignment Based on Vector Observation.

    Science.gov (United States)

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-29

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method that integrates gravitational apparent motion to form apparent velocity is designed, which can reduce the random noise in the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into attitude estimation from the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for apparent gravitation is devised, which can reduce the influence of the random noise in the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method achieves sound alignment results with lower standard deviations, higher alignment accuracy, and a faster convergence rate.

  8. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  9. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  10. Iterated Process Analysis over Lattice-Valued Regular Expressions

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    We present an iterated approach to statically analyze programs of two processes communicating by message passing. Our analysis operates over a domain of lattice-valued regular expressions, and computes increasingly better approximations of each process's communication behavior. Overall the work extends traditional semantics-based program analysis techniques to automatically reason about message passing in a manner that can simultaneously analyze both values of variables as well as message order, message content, and their interdependencies.

  11. Multifractal detrended fluctuation analysis of analog random multiplicative processes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)

    2009-09-15

    We investigate non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises originate from thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas such as turbulence, finance, biology and the environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.
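    The multifractal detrended fluctuation analysis used here can be sketched compactly: integrate the signal into a profile, detrend it in windows of size s, and form the q-th order fluctuation function F_q(s). This is a minimal illustrative sketch (assuming q != 0 and a simple non-overlapping segmentation), not the authors' implementation.

```python
import numpy as np

def mfdfa(x, scales, q=2.0, order=1):
    """q-th order fluctuation function F_q(s) of multifractal DFA."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    Fq = []
    for s in scales:
        n = len(y) // s
        msq = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)   # local polynomial trend
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        msq = np.asarray(msq)
        # generalized average of the segment variances
        Fq.append(np.mean(msq ** (q / 2.0)) ** (1.0 / q))
    return np.asarray(Fq)
```

    The scaling exponent h(q) is read off as the log-log slope of F_q(s) versus s; a q-dependent h(q) signals multifractality, while uncorrelated noise gives h(2) near 0.5.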

  12. A Framework for Diagnosing the Out-of-Control Signals in Multivariate Process Using Optimized Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Tai-fu Li

    2013-01-01

    Full Text Available Multivariate statistical process control is the continuation and development of univariate statistical process control. Most multivariate statistical quality control charts are used (in manufacturing and service industries) to determine whether a process is performing as intended or whether there are unnatural causes of variation, based on an overall statistic. Once the control chart detects an out-of-control signal, one difficulty with multivariate control charts is the interpretation of that signal: we have to determine whether one variable, several variables, or a combination of variables is responsible for the abnormal signal. A novel approach for diagnosing out-of-control signals in a multivariate process is described in this paper. The proposed methodology uses optimized support vector machines (support vector machine classification based on a genetic algorithm) to recognize a set of subclasses of multivariate abnormal patterns and to identify the variable(s) responsible for the occurrence of the abnormal pattern. Multiple sets of experiments are used to verify the model. The performance of the proposed approach demonstrates that the model can accurately classify the source(s) of an out-of-control signal and even outperforms the conventional multivariate control scheme.

  13. Fourth meeting entitled “Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data”

    CERN Document Server

    Vilanova, Anna; Burgeth, Bernhard; Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data

    2014-01-01

    Arising from the fourth Dagstuhl conference entitled Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data (2011), this book offers a broad and vivid view of current work in this emerging field. Topics covered range from applications of the analysis of tensor fields to research on their mathematical and analytical properties. Part I, Tensor Data Visualization, surveys techniques for visualization of tensors and tensor fields in engineering, discusses the current state of the art and challenges, and examines tensor invariants and glyph design, including an overview of common glyphs. The second Part, Representation and Processing of Higher-order Descriptors, describes a matrix representation of local phase, outlines mathematical morphological operations techniques, extended for use in vector images, and generalizes erosion to the space of diffusion weighted MRI. Part III, Higher Order Tensors and Riemannian-Finsler Geometry, offers powerful mathematical language to model and...

  14. Lithium-ion battery remaining useful life prediction based on grey support vector machines

    Directory of Open Access Journals (Sweden)

    Xiaogang Li

    2015-12-01

    Full Text Available In this article, an improved grey prediction model is proposed to address the low prediction accuracy of the grey forecasting model. The first step uses a trigonometric function to transform the original data sequence so as to smooth the data, improving the smoothness of the grey prediction model; a grey support vector machine model, integrating the improved grey model with a support vector machine, is then introduced. At the initial stage of the model, trigonometric functions and the accumulation generating operation are used to preprocess the data, which enhances the smoothness of the data and reduces the associated randomness. In addition, a support vector machine is used to establish a prediction model for the pre-processed data, with the optimal model parameters selected via genetic algorithms. Finally, the data are restored through the 'regressive generate' operation to obtain the forecast. To show that the grey support vector machine model is superior to the other models, battery life data from the Center for Advanced Life Cycle Engineering are selected, and the presented model is used to predict the remaining useful life of the battery. The prediction is compared with those of the grey model and of support vector machines. For a more intuitive comparison of the three models, this article quantifies the root mean square errors of the three models for different ratios of training samples to prediction samples. The results show that the grey support vector machine model performs best, with a root mean square error of only 3.18%.
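    The accumulation generating operation (AGO) and the 'regressive generate' step that bracket the grey-model pipeline are simple to state in code. This is a minimal sketch of those two operations only (the function names are ours); the trigonometric transform, SVM fit and genetic-algorithm tuning sit between them in the article's full pipeline.

```python
import numpy as np

def ago(x):
    """Accumulation generating operation (AGO): running partial sums,
    which smooth the series and reduce its apparent randomness."""
    return np.cumsum(x)

def iago(z):
    """Inverse AGO (the 'regressive generate' step): first differences
    restore the data to their original scale."""
    return np.diff(z, prepend=0.0)
```

    The two operations are exact inverses, so whatever model is fitted on the accumulated series can be mapped back to forecasts on the original scale.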

  15. Knowledge in Value Creation Process for Increasing Competitive Advantage

    Directory of Open Access Journals (Sweden)

    Anna ZÁVODSKÁ

    2012-12-01

    Full Text Available The aim of this paper is to compare companies by using a value creation model and to identify the knowledge involved in these processes. The framework for the value creation process reveals the problems of the case companies in different phases of this process. Knowledge is compared across the individual phases of the process, along with its role in different types of companies. The role of knowledge in increasing competitive advantage is identified. The methodology involves case studies from which data are derived and analyzed. The analysis shows that the framework for the value creation process can be used as an analytical tool for an overview of value in different phases, and that different approaches are needed to improve business and create new value for customers. Based on the analyzed problems, recommendations for improvement are proposed. These recommendations are based on providing value innovation for customers (end users of a software product). Value innovation of the software product is considered crucial for the improvement of companies in the machinery industry. Company A has created new value through a remote service, which provides several advantages: customers can prevent problems in machines by implementing a software product that continuously analyzes and evaluates data from the machines. Companies B and C were not able to create major value innovation for several years.

  16. Knowledge in Value Creation Process for Increasing Competitive Advantage

    Directory of Open Access Journals (Sweden)

    Veronika ŠRAMOVÁ

    2013-07-01

    Full Text Available The aim of this paper is to compare companies by using a value creation model and to identify the knowledge involved in these processes. The framework for the value creation process reveals the problems of the case companies in different phases of this process. Knowledge is compared across the individual phases of the process, along with its role in different types of companies. The role of knowledge in increasing competitive advantage is identified. The methodology involves case studies from which data are derived and analyzed. The analysis shows that the framework for the value creation process can be used as an analytical tool for an overview of value in different phases, and that different approaches are needed to improve business and create new value for customers. Based on the analyzed problems, recommendations for improvement are proposed. These recommendations are based on providing value innovation for customers (end users of a software product). Value innovation of the software product is considered crucial for the improvement of companies in the machinery industry. Company A has created new value through a remote service, which provides several advantages: customers can prevent problems in machines by implementing a software product that continuously analyzes and evaluates data from the machines. Companies B and C were not able to create major value innovation for several years.

  17. Classification of subsurface objects using singular values derived from signal frames

    Science.gov (United States)

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
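    The sub-band SVD step can be sketched as below. This is a hedged illustration: how the N×N matrix is formed from the sub-band samples is not fully specified in the record, so summarizing each transmitter-receiver pair by its mean over the chosen bins is our assumption, as is the function name.

```python
import numpy as np

def subband_singular_values(returns, band):
    """Feature vector of singular values for one spectral sub-band.

    returns: complex spectra with shape (N_tx, N_rx, n_freq) for an
             N-transceiver array operating in multistatic mode.
    band:    slice selecting the user-designated frequency bins."""
    # N x N complex matrix built from the sub-band samples of every
    # transmitter-receiver pair (mean over bins is an assumption here).
    M = returns[:, :, band].mean(axis=2)
    # Singular values come back real and sorted in descending order.
    return np.linalg.svd(M, compute_uv=False)
```

    One such vector per sub-band, concatenated, gives the object's feature vector for the downstream classifier.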

  18. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    Science.gov (United States)

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing both the need for tedious derivations for new systems and the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than learning the dynamics directly. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.

  19. Orthogonalisation of Vectors

    Indian Academy of Sciences (India)

    The Gram-Schmidt process is one of the first things one learns in a course ... We might want to stay as close to the experimental data as possible when converting these vectors to orthonormal ones demanded by the model. The process of finding the closest orthonormal ... is obtained by writing the matrix A = [a1, ..., an], then ...
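    The snippet contrasts Gram-Schmidt with the orthonormal set that stays closest to the original vectors. That closest set comes not from Gram-Schmidt but from symmetric (Löwdin) orthogonalisation, computable via the SVD; a minimal sketch (the function name is ours):

```python
import numpy as np

def closest_orthonormal(A):
    """Matrix Q with orthonormal columns minimizing ||A - Q||_F
    (symmetric/Loewdin orthogonalisation): if A = U S V^T is the
    thin SVD, then Q = U V^T."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt
```

    Unlike Gram-Schmidt, this treats all columns symmetrically, so no single data vector is privileged over the others.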

  20. Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes.

    Science.gov (United States)

    Wang, Yuanjia; Chen, Tianle; Zeng, Donglin

    2016-01-01

    Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for the different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze data from two real-world biomedical studies, where we use clinical markers and neuroimaging biomarkers to predict the age at onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk from low-risk subjects.

  1. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)

  2. Combination of support vector machine, artificial neural network and random forest for improving the classification of convective and stratiform rain using spectral features of SEVIRI data

    Science.gov (United States)

    Lazri, Mourad; Ameur, Soltane

    2018-05-01

    A model combining three classifiers, namely Support vector machine, Artificial neural network and Random forest (SAR), is designed to improve the classification of convective and stratiform rain. This model (the SAR model) has been trained and then tested on datasets derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the agreement of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified using a novel retraining of the developed scheme in which only the input data corresponding to the pixels in question are used. This whole process is repeated a second time, applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are performed against co-located data observed by ground radar. The developed scheme outperformed the individual classifiers used separately and reached an overall classification accuracy of 97.40%.
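    The first step, splitting pixels by how strongly the three classifiers agree, can be sketched as below. How the SAR model exactly defines "mid-classified" is not given in the abstract, so mapping unanimous / majority / no-majority votes to well / mid / misclassified is our assumption, as is the function name.

```python
import numpy as np

def agreement_labels(preds):
    """preds: (3, n) array of class labels from the three classifiers.

    Returns per-pixel agreement: 2 = unanimous ('well-classified'),
    1 = only two agree ('mid-classified'), 0 = all differ
    ('misclassified'; possible only with three or more classes)."""
    out = np.zeros(preds.shape[1], dtype=int)
    for i in range(preds.shape[1]):
        _, counts = np.unique(preds[:, i], return_counts=True)
        c = counts.max()
        out[i] = 2 if c == 3 else (1 if c == 2 else 0)
    return out
```

    Pixels with label 2 are accepted; the rest are routed to the retraining passes described above.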

  3. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  4. The optical analogy for vector fields

    Science.gov (United States)

    Parker, E. N. (Editor)

    1991-01-01

    This paper develops the optical analogy for a general vector field. The optical analogy allows the examination of certain aspects of a vector field that are not otherwise readily accessible. In particular, in the cases of a stationary Eulerian flow v of an ideal fluid and a magnetostatic field B, the vectors v and B have surface loci in common with their curls. The intrinsic discontinuities around local maxima in absolute values of v and B take the form of vortex sheets and current sheets, respectively, the former playing a fundamental role in the development of hydrodynamic turbulence and the latter playing a major role in heating the X-ray coronas of stars and galaxies.

  5. Provable quantum advantage in randomness processing

    OpenAIRE

    Dale, H; Jennings, D; Rudolph, T

    2015-01-01

    Quantum advantage is notoriously hard to find and even harder to prove. For example the class of functions computable with classical physics actually exactly coincides with the class computable quantum-mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of "randomness processing" in which quantum theory provably yields, not only resource reduction over c...

  6. Cosmological evolution in vector-tensor theories of gravity

    International Nuclear Information System (INIS)

    Beltran Jimenez, Jose; Maroto, Antonio L.

    2009-01-01

    We present a detailed study of the cosmological evolution in general vector-tensor theories of gravity without potential terms. We consider the evolution of the vector field throughout the expansion history of the Universe and carry out a classification of models according to the behavior of the vector field in each cosmological epoch. We also analyze the case in which the Universe is dominated by the vector field, performing a complete analysis of the system phase map and identifying those attracting solutions which give rise to accelerated expansion. Moreover, we consider the evolution in a universe filled with a pressureless fluid in addition to the vector field and study the existence of attractors in which we can have a transition from matter domination to vector domination with accelerated expansion so that the vector field may play the role of dark energy. We find that the existence of solutions with late-time accelerated expansion is a generic prediction of vector-tensor theories and that such solutions typically lead to the presence of future singularities. Finally, limits from local gravity tests are used to get constraints on the value of the vector field at small (Solar System) scales.

  7. Charmless Hadronic B Decays into Vector, Axial Vector and Tensor Final States at BaBar

    International Nuclear Information System (INIS)

    Gandini, Paolo

    2012-01-01

    We present experimental measurements of the branching fraction and longitudinal polarization fraction in charmless hadronic B decays into vector, axial-vector and tensor final states with the final dataset of BABAR. Measurements of such decays are a powerful tool both to test the Standard Model and to search for possible sources of new physics. In this document we present a short review of the latest experimental results at BABAR concerning charmless quasi-two-body decays into final states containing particles with spin 1 or spin 2 and different parities. These decays have received considerable theoretical interest in the last few years, and this attention has led to interesting experimental results at the current b-factories. In fact, the study of the longitudinal polarization fraction f_L in charmless B decays to vector-vector (VV), vector axial-vector (VA) and axial-vector axial-vector (AA) mesons provides information on the underlying helicity structure of the decay mechanism. Naive helicity conservation arguments predict a dominant longitudinal polarization fraction f_L ∼ 1 for both tree- and penguin-dominated decays, and this pattern seems to be confirmed by the tree-dominated B → ρρ and B+ → ωρ+ decays. Other penguin-dominated decays, instead, show a different behavior: the measured value of f_L ∼ 0.5 in B → φK* decays is in contrast with naive Standard Model (SM) calculations. Several solutions have been proposed, such as the introduction of non-factorizable terms and penguin-annihilation amplitudes, while other explanations invoke new physics. New modes have been investigated to shed more light on the problem.

  8. Image Vector Quantization codec indexes filtering

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay Abdelmounaim

    2012-01-01

    Full Text Available Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding, due to its fast decoding efficiency. However, the indexes of VQ are sometimes lost because of signal interference during transmission. In this paper, we propose an efficient estimation method to conceal and recover lost indexes on the decoder side, to avoid re-transmitting the whole image. If the image or video has a limited period of validity, re-transmitting the data wastes time and network bandwidth. Therefore, using the correctly received data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; since VQ partitions the image into sub-blocks and quantises them to the indexes that are transmitted, the correlation between adjacent indexes is also very strong. The proposed method has two parts: pre-processing and an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmitting it without any loss of indexes achieves a PSNR of 30.429 dB at the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
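
    The concealment idea — exploit the strong correlation between adjacent indexes — can be sketched in a few lines. This is an illustrative toy, not the paper's codebook-reordering-plus-filtering scheme; the codebook and block values are invented.

```python
# Toy VQ index-loss concealment: a lost block index is replaced by the
# index of the codeword nearest to the average of the correctly
# received neighbouring blocks' codewords.

def nearest_index(vec, codebook):
    """Encode a block: index of the closest codeword (squared distance)."""
    return min(range(len(codebook)),
               key=lambda k: sum((v - c) ** 2 for v, c in zip(vec, codebook[k])))

def conceal(lost_pos, indices, codebook):
    """Estimate a lost index from the received horizontal neighbours."""
    neighbours = [codebook[indices[p]] for p in (lost_pos - 1, lost_pos + 1)
                  if 0 <= p < len(indices) and indices[p] is not None]
    avg = [sum(col) / len(neighbours) for col in zip(*neighbours)]
    return nearest_index(avg, codebook)

codebook = [[0, 0], [4, 4], [8, 8], [12, 12]]   # invented 2-D codewords
indices = [1, None, 3]                           # middle block index lost
print(conceal(1, indices, codebook))             # neighbour avg (8,8) -> 2
```

    A smoother codebook ordering (the paper's pre-processing step) makes this neighbour averaging more accurate, because adjacent indexes then map to nearby codewords.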

  9. Mutational analysis a joint framework for Cauchy problems in and beyond vector spaces

    CERN Document Server

    Lorenz, Thomas

    2010-01-01

    Ordinary differential equations play a central role in science and have been extended to evolution equations in Banach spaces. For many applications, however, it is difficult to specify a suitable normed vector space. Shapes without a priori restrictions, for example, do not have an obvious linear structure. This book generalizes ordinary differential equations beyond the borders of vector spaces with a focus on the well-posed Cauchy problem in finite time intervals. Here are some of the examples: - Feedback evolutions of compact subsets of the Euclidean space - Birth-and-growth processes of random sets (not necessarily convex) - Semilinear evolution equations - Nonlocal parabolic differential equations - Nonlinear transport equations for Radon measures - A structured population model - Stochastic differential equations with nonlocal sample dependence and how they can be coupled in systems immediately - due to the joint framework of Mutational Analysis. Finally, the book offers new tools for modelling.

  10. UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS

    Data.gov (United States)

    National Aeronautics and Space Administration — UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS AMY MCGOVERN, TIMOTHY SUPINIE, DAVID JOHN GAGNE II, NATHANIEL TROUTMAN,...

  11. Optimal redundant systems for works with random processing time

    International Nuclear Information System (INIS)

    Chen, M.; Nakagawa, T.

    2013-01-01

    This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems
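
    As a toy version of this kind of optimisation (an assumed cost model, not the authors'): with independent unit failures, the expected cost of an n-unit parallel system trades acquisition cost against the penalty of total failure, and the optimal n can be found by direct search.

```python
# Assumed toy cost model: each of n parallel units independently
# survives the work with probability p; the work fails only if all n
# units fail.  Expected cost = acquisition cost n*c_u plus a penalty
# c_f incurred when every unit fails.

def expected_cost(n, p, c_u, c_f):
    return n * c_u + c_f * (1.0 - p) ** n

def optimal_units(p, c_u, c_f, n_max=50):
    """Number of redundant units minimising expected cost, by direct search."""
    return min(range(1, n_max + 1),
               key=lambda n: expected_cost(n, p, c_u, c_f))

# Cheap units, expensive failures -> a little redundancy pays off.
print(optimal_units(p=0.9, c_u=1.0, c_f=1000.0))   # -> 3
```

    With p = 0.9 the failure penalty drops by a factor of ten per added unit, so the optimum sits where another unit's cost outweighs the remaining risk.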

  12. Accumulated damage evaluation for a piping system by the response factor on non-stationary random process, 2

    International Nuclear Information System (INIS)

    Shintani, Masanori

    1988-01-01

    This paper shows that the average and variance of the accumulated damage caused by earthquakes on a piping system attached to a building are related to the seismic response factor λ. The earthquakes referred to in this paper are of a non-stationary random process kind. The average is proportional to λ² and the variance to λ⁴. The analytical values of the average and variance for a single-degree-of-freedom system are compared with those obtained from computer simulations; here the model of the building is a single-degree-of-freedom system. Both averages of accumulated damage are approximately equal. The variance obtained from the analysis does not coincide with that from the simulations; the reason is considered to be the forced vibration by the sinusoidal waves included in the random waves. Taking account of the amplitude magnification factor, the values of the variance approach those obtained from the simulations. (author)

  13. Perceived value creation process: focus on the company offer

    Directory of Open Access Journals (Sweden)

    Irena Pandža Bajs

    2012-12-01

    Full Text Available In the competitive business environment, as the number of rational consumers faced with many choices increases, companies can best achieve dominance by applying consumer-oriented business concepts in order to deliver a value which is different from and better than that of their competitors. Among the various products on the market, an educated consumer chooses the offer that provides the greatest value for him/her. Therefore, it is essential for each company to determine how consumers perceive the value of its offer, and which factors determine a high level of perceived value for current and potential consumers. An analysis of these factors provides guidance on how to improve the existing offer and what the offer to be delivered in the future should be like. That could increase the perceived value of the company offer and result in a positive impact on consumer satisfaction and on establishing a stronger, long-term relationship with consumers. The process of defining the perceived value of a particular market offer is affected by the factors of the respective company's offer as well as by competition factors, consumer factors and buying-process factors. The aim of this paper is to analyze the relevant knowledge about the process of creating the perceived value of the company's market offer and the factors that influence this process. The paper presents a conceptual model of the perceived value creation process in consumers' minds.

  14. Vector-like quarks: t’ and partners

    International Nuclear Information System (INIS)

    PANIZZI, L.

    2014-01-01

    Vector-like quarks are predicted in various scenarios of new physics, and their peculiar signatures from both pair and single production have already been investigated in detail. However, no signals of vector-like quarks have been detected so far, pushing limits on their masses above 600–700 GeV, depending on assumptions about their couplings. Experimental searches consider specific final states to place bounds on the mass of a vector-like quark, usually assuming it is the only particle that contributes to the signal of new physics in that final state. However, realistic scenarios predict the existence of multiple vector-like quarks, possibly with similar masses. The reinterpretation of mass bounds from experimental searches is therefore not always straightforward. In this analysis I briefly summarise the constraints on vector-like quarks and their possible signatures at the LHC, focusing in particular on a model-independent description of single-production processes for vector-like quarks that mix with all generations, and on the development of a framework to study scenarios with multiple vector-like quarks.

  15. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. The involvement of correlations is expected to increase the discriminative capability of the learned generative model and consequently improve recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  16. Vector bileptons and the decays h→γγ,Zγ

    Energy Technology Data Exchange (ETDEWEB)

    Yue, Chong-Xing, E-mail: cxyue@lnnu.edu.cn; Shi, Qiu-Yang; Hua, Tian

    2013-11-21

    Taking into account the constraints on the relevant parameters from the muon anomalous magnetic moment, we consider the contributions of the vector bileptons V{sup ±} and U{sup ±±} predicted by the reduced minimal 331 model to the Higgs decay channels h→γγ and Zγ. Our numerical results show that the vector bileptons can enhance the partial width Γ(h→γγ) while reducing the partial width Γ(h→Zγ); the two effects are anti-correlated. With reasonable values of the relevant free parameters, the vector bileptons can explain the LHC data for the γγ signal. If the CMS data persist, the values of the free parameters λ{sub 2} and λ{sub 3} will be severely constrained.

  17. Support Vector Machine and Application in Seizure Prediction

    KAUST Repository

    Qiu, Simeng

    2018-04-01

    Nowadays, machine learning (ML) is utilized in areas ranging from engineering to business. In this paper, we first present several kernel machine learning methods for solving classification, regression and clustering problems. These perform well but also have limitations; we present examples of each method and analyze their advantages and disadvantages in different scenarios. We then focus on one of the most popular classification methods, the Support Vector Machine (SVM). We introduce the basic theory, advantages and usage scenarios of SVMs for classification problems, and explain a convenient approach to solving SVM problems called Sequential Minimal Optimization (SMO). Moreover, the one-class SVM can be understood in a different way, as Support Vector Data Description (SVDD), a well-known nonlinear model that can be solved by combining a Gaussian RBF kernel with SMO. Finally, we compare the performance of the SVM-SMO and SVM-SVDD implementations. On the application side, we use SVMs for seizure forecasting in canine epilepsy, comparing results from random forests, extremely randomized trees, and SVMs for classifying preictal (pre-seizure) and interictal (between-seizure) binary data. We conclude that the SVM has the best performance.
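
    SMO itself is intricate, but the linear SVM objective described here can be minimised by a much simpler scheme. The sketch below uses Pegasos-style stochastic sub-gradient descent on the hinge loss (a stand-in for SMO, not the thesis' solver; data and parameters are invented) to show what training and prediction look like.

```python
import random

# Minimal linear SVM via Pegasos-style stochastic sub-gradient descent
# on the hinge loss.  Labels are +/-1; the bias term is omitted, as in
# plain Pegasos, so the toy data below are centred about the origin.

def svm_train(X, y, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):   # random pass order
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [wj * (1 - eta * lam) for wj in w]    # regularisation shrink
            if margin < 1:                            # inside margin: hinge step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def svm_predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Invented, linearly separable toy data: the class is the sign of x[0].
X = [[2, 0], [3, 1], [4, -1], [-2, 0], [-3, 1], [-4, -1]]
y = [1, 1, 1, -1, -1, -1]
w = svm_train(X, y)
print([svm_predict(w, x) for x in X])
```

    On this separable toy set the learned weight vector points along the first coordinate and all six training points are classified correctly; SMO solves the same objective in its dual form, pairwise coordinate by coordinate.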

  18. Applications of the conserved vector current theory and the partially conserved axial-vector current theory to nuclear beta-decays

    International Nuclear Information System (INIS)

    Tint, M.

    The contribution of the mesonic exchange effect to the conserved vector current in the first-forbidden β-decay of RaE is estimated under the headings: (1) The conserved vector current. (2) The CVC theory and the first-forbidden β-decays. (3) Shell-model calculations of some matrix elements. (4) Direct calculation of the exchange term. Considering the mesonic exchange effect in the axial-vector current of β-decay, the partially conserved axial-vector current theory and the experimental results of the process p + p → d + π+ are examined. (U.K.)

  19. Secured Session-key Distribution using control Vector Encryption / Decryption Process

    International Nuclear Information System (INIS)

    Ismail Jabiullah, M.; Abdullah Al-Shamim; Khaleqdad Khan, ANM; Lutfar Rahman, M.

    2006-01-01

    Frequent key changes are very desirable for secret communications and are thus in high demand. A session-key distribution technique has been designed and implemented in the C programming language, in which a session key, used for the duration of a logical connection, encrypts the communication between the end-users. Each session key is obtained from the key distribution center (KDC) over the same networking facilities used for end-user communication. The control vector is cryptographically coupled with the session key at key-generation time in the KDC. For this, the generated hash function, the master key and the session key are used to produce the encrypted session key that is to be transferred. This process is widely applicable to all sorts of electronic transactions, online or offline, commercial and academic. (authors)

  20. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents present extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machine and architecture types are being used for these applications. Many refs.

  1. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  2. Random walks in the quarter plane algebraic methods, boundary value problems, applications to queueing systems and analytic combinatorics

    CERN Document Server

    Fayolle, Guy; Malyshev, Vadim

    2017-01-01

    This monograph aims to promote original mathematical methods to determine the invariant measure of two-dimensional random walks in domains with boundaries. Such processes arise in numerous applications and are of interest in several areas of mathematical research, such as Stochastic Networks, Analytic Combinatorics, and Quantum Physics. This second edition consists of two parts. Part I is a revised upgrade of the first edition (1999), with additional recent results on the group of a random walk. The theoretical approach given therein has been developed by the authors since the early 1970s. By using Complex Function Theory, Boundary Value Problems, Riemann Surfaces, and Galois Theory, completely new methods are proposed for solving functional equations of two complex variables, which can also be applied to characterize the Transient Behavior of the walks, as well as to find explicit solutions to the one-dimensional Quantum Three-Body Problem, or to tackle a new class of Integrable Systems. Part II borrows spec...

  3. Probing the gluon density of the proton in the exclusive photoproduction of vector mesons at the LHC: a phenomenological analysis

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, V.P. [Lund University, Department of Astronomy and Theoretical Physics, Lund (Sweden); Universidade Federal de Pelotas, Instituto de Fisica e Matematica, Pelotas, RS (Brazil); Martins, L.A.S.; Sauter, W.K. [Universidade Federal de Pelotas, Instituto de Fisica e Matematica, Pelotas, RS (Brazil)

    2016-02-15

    The current uncertainty on the gluon density extracted from global parton analyses is large in the kinematical range of small values of the Bjorken-x variable and low values of the hard scale Q{sup 2}. An alternative way to reduce this uncertainty is the analysis of exclusive vector meson photoproduction in photon-hadron and hadron-hadron collisions. This process offers a unique opportunity to constrain the gluon density of the proton, since its cross section is proportional to the gluon density squared. In this paper we consider current parametrisations of the gluon distribution and estimate the exclusive vector meson photoproduction cross section at HERA and the LHC using the leading logarithmic formalism. We perform a fit of the normalisation of the γh cross section and the value of the hard scale for the process, and demonstrate that the current LHCb experimental data are better described by models that assume a slow increase of the gluon distribution at small x and low Q{sup 2}. (orig.)

  4. A locally convergent Jacobi iteration for the tensor singular value problem

    NARCIS (Netherlands)

    Shekhawat, Hanumant Singh; Weiland, Siep

    2018-01-01

    Multi-linear functionals, or tensors, are useful in the study and analysis of multi-dimensional signals and systems. Tensor approximation, which has various applications in signal processing and system theory, can be achieved by generalizing the notion of singular values and singular vectors of matrices to

  5. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
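
    The jagged diagonal (JAD) idea mentioned above can be sketched compactly: rows are permuted by decreasing nonzero count, and the k-th nonzeros of all rows form one long "diagonal" that a vector unit can stream through in a single sweep. This is a minimal scalar illustration with an invented matrix; the paper's block variant and shared-memory parallelization are not reproduced.

```python
# Sparse matrix-vector product in jagged diagonal (JAD) storage.

def to_jad(rows):
    """rows: one list of (col, value) pairs per matrix row."""
    perm = sorted(range(len(rows)), key=lambda r: -len(rows[r]))
    njd = len(rows[perm[0]]) if rows else 0   # number of jagged diagonals
    diags = []
    for k in range(njd):
        # k-th nonzero of every row that has one, in permuted row order.
        diags.append([(r, rows[r][k][0], rows[r][k][1])
                      for r in perm if len(rows[r]) > k])
    return diags

def jad_matvec(diags, x, n):
    y = [0.0] * n
    for diag in diags:            # each diagonal is one long, vectorizable sweep
        for r, c, v in diag:
            y[r] += v * x[c]
    return y

# 3x3 example:  [[2, 0, 1],
#                [0, 3, 0],
#                [4, 0, 5]]
rows = [[(0, 2.0), (2, 1.0)], [(1, 3.0)], [(0, 4.0), (2, 5.0)]]
print(jad_matvec(to_jad(rows), [1.0, 1.0, 1.0], 3))   # -> [3.0, 3.0, 9.0]
```

    The point of the permutation is that each inner loop runs over many rows with no dependence between iterations, which is exactly the shape a vector pipeline or a segmented-scan formulation needs.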

  6. Involutive distributions of operator-valued evolutionary vector fields and their affine geometry

    NARCIS (Netherlands)

    Kiselev, A.V.; van de Leur, J.W.

    2010-01-01

    We generalize the notion of a Lie algebroid over infinite jet bundle by replacing the variational anchor with an N-tuple of differential operators whose images in the Lie algebra of evolutionary vector fields of the jet space are subject to collective commutation closure. The linear space of such

  7. Off-diagonal helicity density matrix elements for vector mesons produced in polarized e+e- processes

    International Nuclear Information System (INIS)

    Anselmino, M.; Murgia, F.; Quintairos, P.

    1999-04-01

    Final state q q-bar interactions give origin to non-zero values of the off-diagonal element ρ1,-1 of the helicity density matrix of vector mesons produced in e+e- annihilations, as confirmed by recent OPAL data on φ, D* and K*'s. New predictions are given for ρ1,-1 of several mesons produced at large xE and small pT - i.e. collinear with the parent jet - in the annihilation of polarized e+ and e-; the results depend strongly on the elementary dynamics and allow further non-trivial tests of the standard model. (author)

  8. Very-short-term wind power probabilistic forecasts by sparse vector autoregression

    DEFF Research Database (Denmark)

    Dowell, Jethro; Pinson, Pierre

    2016-01-01

    A spatio-temporal method for producing very-short-term parametric probabilistic wind power forecasts at a large number of locations is presented. Smart grids containing tens, or hundreds, of wind generators require skilled very-short-term forecasts to operate effectively, and spatial information is highly desirable. In addition, probabilistic forecasts are widely regarded as necessary for optimal power system management as they quantify the uncertainty associated with point forecasts. Here we work within a parametric framework based on the logit-normal distribution and forecast its parameters. The location parameter for multiple wind farms is modelled as a vector-valued spatiotemporal process, and the scale parameter is tracked by modified exponential smoothing. A state-of-the-art technique for fitting sparse vector autoregressive models is employed to model the location parameter and demonstrates
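
    As a reference point for the scale-parameter tracking mentioned in this record, plain exponential smoothing looks as follows (the authors use a modified variant, which is not reproduced here; the series is invented).

```python
# Plain exponential smoothing: the tracked estimate s is a convex
# combination of the newest observation and the previous estimate,
#   s_t = alpha * x_t + (1 - alpha) * s_{t-1},
# so recent observations are weighted geometrically more heavily.

def exp_smooth(series, alpha=0.3):
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

# A spike enters the estimate gradually and decays geometrically.
print(exp_smooth([1.0, 1.0, 5.0, 1.0], alpha=0.5))   # -> [1.0, 1.0, 3.0, 2.0]
```

    A smaller alpha tracks a slowly varying scale parameter more smoothly at the cost of reacting later to genuine regime changes.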

  9. Apparent scale correlations in a random multifractal process

    DEFF Research Database (Denmark)

    Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin

    2008-01-01

    We discuss various properties of a homogeneous random multifractal process, which are related to the issue of scale correlations. By design, the process has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several puzzling empirical details...

  10. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator.

    Science.gov (United States)

    Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T

    2015-01-01

    Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance to optimise patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B-mode static ultrasound images of 187 ovarian masses with known histological diagnoses were included. Images were first pre-processed and enhanced, and Local Binary Pattern histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross-validation with randomised sampling. The process was repeated 15 times, and in each round 100 images were randomly selected. The SVM classified the original untreated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19). An SVM can thus classify static ultrasound images of ovarian masses into benign and malignant categories, and the accuracy improves if texture-related LBP features extracted from the images are considered.
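
    The Local Binary Pattern operator at the core of this feature extraction is simple to state: each of the 8 neighbours contributes one bit according to whether it is at least as bright as the centre pixel. Below is a minimal sketch on an invented 3 × 3 patch (bit ordering conventions vary; one common clockwise ordering is assumed here).

```python
# 8-neighbour local binary pattern (LBP) code for a single pixel.
# Each neighbour contributes one bit: 1 if it is >= the centre pixel.

def lbp_code(img, r, c):
    centre = img[r][c]
    # Clockwise starting from the top-left neighbour (assumed ordering).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# Neighbours 60, 90, 80, 70 are >= 50, setting bits 3..6.
print(lbp_code(img, 1, 1))   # -> 120
```

    A histogram of these codes over an image block is the texture feature the study feeds to its SVM.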

  11. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time- and time-frequency-domain methods. However, the extracted original features are still high-dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and Cao's method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.

  12. Problems with Cash and Other Non-Operating Assets Value in the Process of Valuing Company

    Directory of Open Access Journals (Sweden)

    Piotr Szczepankowski

    2007-12-01

    Full Text Available In economic practice, the process of valuing enterprises is based on potential earnings from a company's operating assets - operating fixed assets and operating working capital. Cash and other non-operating assets (mainly financial) are treated as unproductive, non-income assets. Eventually, in the process of pricing, their current accounting value is added to the income value of the enterprise, or cash is treated as a source for quickly covering the firm's debts, which of course indirectly improves the value of equity (through lower financial risk). Not taking into account the profitable influence of the value of cash and other non-operating assets can negatively affect the final value of the enterprise, reducing it. In the article, two alternative approaches to cash value (separate and inclusive) are presented. The main determinants of estimating the value of cash are also described, as well as potential threats to its valuation.

  13. Vectorization of the KENO V.a criticality safety code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Dodds, H.L.; Petrie, L.M.

    1991-01-01

    The development of the vector processor, which is used in the current generation of supercomputers and is beginning to be used in workstations, provides the potential for dramatic speed-ups for codes that are able to process data as vectors. Unfortunately, the stochastic nature of Monte Carlo codes prevents the old scalar versions of these codes from taking advantage of vector processors. New Monte Carlo algorithms that process all the histories undergoing the same event as a batch are required. Recently, new vectorized Monte Carlo codes have been developed that show significant speed-ups when compared with their scalar versions or equivalent codes. This paper discusses the vectorization of an already existing and widely used criticality safety code, KENO V.a. All the changes made to KENO V.a are transparent to the user, making it possible to upgrade from the standard scalar version of KENO V.a to the vectorized version without learning a new code.

  14. Vector method for strain estimation in phase-sensitive optical coherence elastography

    Science.gov (United States)

    Matveyev, A. L.; Matveev, L. A.; Sovetsky, A. A.; Gelikonov, G. V.; Moiseev, A. A.; Zaitsev, V. Y.

    2018-06-01

    A noise-tolerant approach to strain estimation in phase-sensitive optical coherence elastography, robust to decorrelation distortions, is discussed. The method is based on evaluation of interframe phase-variation gradient, but its main feature is that the phase is singled out at the very last step of the gradient estimation. All intermediate steps operate with complex-valued optical coherence tomography (OCT) signals represented as vectors in the complex plane (hence, we call this approach the ‘vector’ method). In comparison with such a popular method as least-square fitting of the phase-difference slope over a selected region (even in the improved variant with amplitude weighting for suppressing small-amplitude noisy pixels), the vector approach demonstrates superior tolerance to both additive noise in the receiving system and speckle-decorrelation caused by tissue straining. Another advantage of the vector approach is that it obviates the usual necessity of error-prone phase unwrapping. Here, special attention is paid to modifications of the vector method that make it especially suitable for processing deformations with significant lateral inhomogeneity, which often occur in real situations. The method’s advantages are demonstrated using both simulated and real OCT scans obtained during reshaping of a collagenous tissue sample irradiated by an IR laser beam producing complex spatially inhomogeneous deformations.
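The core of the vector idea can be sketched in a few lines, using made-up one-line "frames" rather than real OCT data: complex interframe products are accumulated as vectors in the complex plane, and the phase is singled out only at the very last step, so no per-pixel phase unwrapping is needed.

```python
import cmath

def vector_strain_phase(frame1, frame2):
    """Sketch of the 'vector' approach: keep OCT pixels as complex numbers,
    sum the interframe products (vector addition in the complex plane), and
    extract the phase only at the last step."""
    acc = sum(a.conjugate() * b for a, b in zip(frame1, frame2))
    return cmath.phase(acc)

# two toy A-scans differing by a uniform interframe phase shift of 0.3 rad
f1 = [cmath.exp(1j * 0.1 * k) for k in range(50)]
f2 = [z * cmath.exp(1j * 0.3) for z in f1]
dphi = vector_strain_phase(f1, f2)
```

Because noisy, small-amplitude pixels contribute short vectors to the sum, they are automatically down-weighted, which is one reason for the method's noise tolerance.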

  15. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
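The reduced-variance criterion referred to above is the Fano factor of the counting statistics; a small sketch with synthetic, approximately Poissonian counts (illustrative only, not the paper's fractional processes) shows how it is computed:

```python
import random
import statistics

def fano_factor(counts):
    """Reduced variance (Fano factor): Var(N)/E[N]. It equals 1 for a
    Poisson process; values below 1 signal sub-Poissonian (nonclassical)
    counting statistics."""
    return statistics.pvariance(counts) / statistics.mean(counts)

rng = random.Random(0)
# near-Poissonian counts: many rare Bernoulli trials per counting window
poisson_counts = [sum(1 for _ in range(1000) if rng.random() < 0.005)
                  for _ in range(2000)]
f = fano_factor(poisson_counts)
```

For these synthetic counts the Fano factor is close to 1; a fractional point process with nonclassical statistics would give a markedly different value.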

  16. Space Vector Pulse Width Modulation of a Multi-Level Diode ...

    African Journals Online (AJOL)

    Space Vector Pulse Width Modulation of a Multi-Level Diode Clamped ... of MATLAB /SIMULINK modeling of the space vector pulse-width modulation and the ... two adjacent active vectors in determining the switching process of the multilevel ...

  17. Statistical processing of experimental data

    OpenAIRE

    NAVRÁTIL, Pavel

    2012-01-01

    This thesis covers the theory of probability and statistical sets. It presents solved and unsolved problems on probability, random variables and their distributions, random vectors, statistical sets, and regression and correlation analysis. The unsolved problems are accompanied by solutions.

  18. Multithreading in vector processors

    Science.gov (United States)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    2018-01-16

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.

  19. Parallel and vector implementation of APROS simulator code

    International Nuclear Information System (INIS)

    Niemi, J.; Tommiska, J.

    1990-01-01

    In this paper the vector and parallel processing implementation of a general purpose simulator code is discussed. In this code the utilization of vector processing is straightforward. In addition to loop-level parallel processing, functional decomposition and domain decomposition have been considered. Results presented for a PWR-plant simulation illustrate the potential speed-up factors of the alternatives. It turns out that loop-level parallelism and domain decomposition are the most promising alternatives for employing parallel processing. (author)

  20. EMMA: An Extensible Mammalian Modular Assembly Toolkit for the Rapid Design and Production of Diverse Expression Vectors.

    Science.gov (United States)

    Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi

    2017-07-21

    Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step golden-gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports simple assembly of combinatorial libraries and hierarchical assembly for production of larger multigenetic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.

  1. Vector Directional Distance Rational Hybrid Filters for Color Image Restoration

    Directory of Open Access Journals (Sweden)

    L. Khriji

    2005-12-01

    A new class of nonlinear filters, called vector-directional distance rational hybrid filters (VDDRHF), for multispectral image processing is introduced and applied to color image-filtering problems. These filters are based on rational functions (RF). The VDDRHF filter is a two-stage filter, which exploits the features of the vector directional distance filter (VDDF), the center weighted vector directional distance filter (CWVDDF), and those of the rational operator. The filter output is the result of a vector rational function (VRF) operating on the output of three sub-functions. Two vector directional distance filters (VDDF) and one center weighted vector directional distance filter (CWVDDF) are used in the first stage due to their desirable properties, such as noise attenuation, chromaticity retention, and preservation of edges and details. Experimental results show that the new VDDRHF outperforms a number of widely known nonlinear filters for multispectral image processing, such as the vector median filter (VMF), the generalized vector directional filters (GVDF), and the distance directional filters (DDF), with respect to all criteria used.
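The vector median filter (VMF) used as a benchmark above is easy to sketch; this toy version processes a single window of RGB pixels and returns the pixel whose summed Euclidean distance to all other pixels in the window is minimal, which suppresses impulsive color noise without averaging channels independently.

```python
def vector_median(window):
    """Vector median filter (VMF): return the pixel in the window whose
    summed Euclidean distance to all other pixels is minimal."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(window, key=lambda v: sum(dist(v, w) for w in window))

# a 3x3 window of RGB pixels containing one impulsive (noisy) outlier
window = [(100, 100, 100)] * 8 + [(255, 0, 0)]
out = vector_median(window)
```

Unlike a per-channel scalar median, the VMF always outputs one of the input vectors, so no new (possibly unnatural) colors are created.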

  2. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Yuan; Qi

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows one to smoothly trade off prediction accuracy against memory size. The virtual vector machine summarizes the information con...

  3. Variable ordering structures in vector optimization

    CERN Document Server

    Eichfelder, Gabriele

    2014-01-01

    This book provides an introduction to vector optimization with variable ordering structures, i.e., to optimization problems with a vector-valued objective function where the elements in the objective space are compared based on a variable ordering structure: instead of a partial ordering defined by a convex cone, we see a whole family of convex cones, one attached to each element of the objective space. The book starts by presenting several applications that have recently sparked new interest in these optimization problems, and goes on to discuss fundamentals and important results on a wide ra

  4. Random walkers with extreme value memory: modelling the peak-end rule

    Science.gov (United States)

    Harris, Rosemary J.

    2015-05-01

    Motivated by the psychological literature on the ‘peak-end rule’ for remembered experience, we perform an analysis within a random walk framework of a discrete choice model where agents’ future choices depend on the peak memory of their past experiences. In particular, we use this approach to investigate whether increased noise/disruption always leads to more switching between decisions. Here extreme value theory illuminates different classes of dynamics indicating that the long-time behaviour is dependent on the scale used for reflection; this could have implications, for example, in questionnaire design.
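A deliberately simplified caricature of a peak-memory choice rule (hypothetical payoffs and noise levels, not the paper's random walk model) can be sketched as follows: the agent remembers only the peak experience of each option and always chooses the option with the larger remembered peak.

```python
import random

def peak_rule_choices(n_steps, noise, seed=0):
    """Toy discrete-choice model: the agent remembers only the *peak*
    experience of each option and picks the option whose remembered peak
    is larger (hypothetical payoffs, for illustration only)."""
    rng = random.Random(seed)
    true_mean = {"A": 1.0, "B": 0.0}           # option A is better on average
    peak = {}
    for opt in ("A", "B"):                     # one forced trial of each option
        peak[opt] = true_mean[opt] + rng.gauss(0, noise)
    choices = []
    for _ in range(n_steps):
        choice = max(peak, key=peak.get)       # choose by remembered peak
        choices.append(choice)
        peak[choice] = max(peak[choice],
                           true_mean[choice] + rng.gauss(0, noise))
    return choices

low_noise = peak_rule_choices(200, noise=0.1)
high_noise = peak_rule_choices(200, noise=5.0)
```

With high noise the initial forced trials can leave the inferior option with the larger remembered peak, illustrating how noise interacts with extreme-value memory; the full paper analyzes this interplay within a random walk framework.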

  5. Spatial birth-and-death processes in random environment

    OpenAIRE

    Fernandez, Roberto; Ferrari, Pablo A.; Guerberoff, Gustavo R.

    2004-01-01

    We consider birth-and-death processes of objects (animals) defined in ${\bf Z}^d$ having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorit...

  6. Network formation determined by the diffusion process of random walkers

    International Nuclear Information System (INIS)

    Ikeda, Nobutoshi

    2008-01-01

    We studied the diffusion process of random walkers in networks formed by their traces. This model considers the rise and fall of links determined by the frequency of transports of random walkers. In order to examine the relation between the formed network and the diffusion process, a situation in which multiple random walkers start from the same vertex is investigated. The difference in the diffusion rate of random walkers, which depends on the dimension of the initial lattice, is decisive for the time evolution of the networks. For example, complete subgraphs can be formed on a one-dimensional lattice, while a graph with a power-law vertex degree distribution is formed on a two-dimensional lattice. We derived some formulae for predicting network changes for the 1D case, such as the time evolution of the size of nearly complete subgraphs and conditions for their collapse. The networks formed on the 2D lattice are characterized by the existence of clusters of highly connected vertices and their lifetime. As the lifetime of such clusters tends to be small, the exponent of the power-law distribution changes from γ ≅ 1-2 to γ ≅ 3.

  7. Emerging Vector-Borne Diseases - Incidence through Vectors.

    Science.gov (United States)

    Savić, Sara; Vidić, Branka; Grgić, Zivoslav; Potkonjak, Aleksandar; Spasojevic, Ljubica

    2014-01-01

    Vector-borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are also an emerging threat for continental and developed countries. Nowadays, in intercontinental countries, there is a struggle with emerging diseases, which have found their way to appear through vectors. Vector-borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens, and susceptible human population exist at the same time, at the same place. Global climate change is predicted to lead to an increase in vector-borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, host and vectors, transmission season, etc. Reliable surveillance for diseases that are most likely to emerge is required. Canine vector-borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, ehrlichiosis, and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have a zoonotic potential with an effect to public health. It is expected from veterinarians in coordination with medical doctors to play a fundamental role at primarily prevention and then treatment of vector-borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a 4-year period, from 2009 to 2013, a total number of 551 dog samples were analyzed for vector-borne diseases (borreliosis, babesiosis, ehrlichiosis, anaplasmosis, dirofilariosis, and leishmaniasis) in routine laboratory work. The analysis was done by serological tests - ELISA for borreliosis, dirofilariosis, and leishmaniasis, modified Knott test for dirofilariosis, and blood smear for babesiosis, ehrlichiosis, and anaplasmosis. This number of samples represented 75% of total number of samples that were sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  8. Random covering of the circle: the configuration-space of the free deposition process

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-12-12

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
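The configurations described above can be explored numerically. The following sketch classifies one random realization of the free deposition process; it relies on the standard gap test (arcs of length s appended clockwise cover the circle iff no gap between consecutive points exceeds s), and the example parameters are illustrative only.

```python
import random

def rod_configuration(n, s, seed=0):
    """Throw n points uniformly on a circle of circumference 1, append a
    clockwise arc (rod) of length s to each, and classify the outcome."""
    rng = random.Random(seed)
    starts = sorted(rng.random() for _ in range(n))
    # circular gaps between consecutive points (they sum to 1)
    gaps = [(starts[(i + 1) % n] - starts[i]) % 1.0 for i in range(n)]
    hard_rod = all(g >= s for g in gaps)   # no rod overlaps its neighbour
    packing = max(gaps) <= s               # largest gap not exceeding rod length
    covering = packing                     # arcs cover the circle iff no gap > s
    parking = hard_rod and packing
    return {"hard_rod": hard_rod, "packing": packing,
            "covering": covering, "parking": parking}

cfg_sparse = rod_configuration(n=5, s=0.01)    # short rods: overlap unlikely
cfg_dense = rod_configuration(n=200, s=0.1)    # ns = 20: covering very likely
```

Repeating this over many seeds gives empirical estimates of the (exponentially rare) configuration probabilities whose large-deviation rates the paper computes analytically.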

  9. Facile fabrication of eco-friendly nano-mosquitocides: Biophysical characterization and effectiveness on neglected tropical mosquito vectors.

    Science.gov (United States)

    Govindarajan, Marimuthu; Hoti, S L; Benelli, Giovanni

    2016-12-01

    Mosquito (Diptera: Culicidae) vectors are solely responsible for transmitting important diseases such as malaria, dengue, chikungunya, Japanese encephalitis, lymphatic filariasis and Zika virus. Eco-friendly control tools of Culicidae vectors are a priority. In this study, we proposed a facile fabrication process of poly-disperse and stable silver nanoparticles (Ag NPs) using a cheap leaf extract of Ichnocarpus frutescens (Apocyanaceae). Bio-reduced Ag NPs were characterized by UV-vis spectrophotometry, Fourier transform infrared spectroscopy (FTIR), X-ray diffraction analysis (XRD), atomic force microscopy (AFM), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The acute toxicity of I. frutescens leaf extract and green-synthesized Ag NPs was evaluated against larvae of the malaria vector Anopheles subpictus, the dengue vector Aedes albopictus and the Japanese encephalitis vector Culex tritaeniorhynchus. Compared to the leaf aqueous extract, Ag NPs showed higher toxicity against A. subpictus, A. albopictus, and C. tritaeniorhynchus, with LC50 values of 14.22, 15.84 and 17.26 μg/mL, respectively. Ag NPs were found safer to non-target mosquito predators Anisops bouvieri, Diplonychus indicus and Gambusia affinis, with LC50 values ranging from 636.61 to 2098.61 μg/mL. Overall, this research first sheds light on the mosquitocidal potential of I. frutescens, a potential bio-resource for rapid, cheap and effective synthesis of poly-disperse and highly stable silver nanocrystals. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Dynamic defense and network randomization for computer systems

    Science.gov (United States)

    Chavez, Adrian R.; Stout, William M. S.; Hamlet, Jason R.; Lee, Erik James; Martin, Mitchell Tyler

    2018-05-29

    The various technologies presented herein relate to determining a network attack is taking place, and further to adjust one or more network parameters such that the network becomes dynamically configured. A plurality of machine learning algorithms are configured to recognize an active attack pattern. Notification of the attack can be generated, and knowledge gained from the detected attack pattern can be utilized to improve the knowledge of the algorithms to detect a subsequent attack vector(s). Further, network settings and application communications can be dynamically randomized, wherein artificial diversity converts control systems into moving targets that help mitigate the early reconnaissance stages of an attack. An attack(s) based upon a known static address(es) of a critical infrastructure network device(s) can be mitigated by the dynamic randomization. Network parameters that can be randomized include IP addresses, application port numbers, paths data packets navigate through the network, application randomization, etc.

  11. Determining the efficacy of guppies and pyriproxyfen (Sumilarv® 2MR) combined with community engagement on dengue vectors in Cambodia: study protocol for a randomized controlled trial.

    Science.gov (United States)

    Hustedt, John; Doum, Dyna; Keo, Vanney; Ly, Sokha; Sam, BunLeng; Chan, Vibol; Alexander, Neal; Bradley, John; Prasetyo, Didot Budi; Rachmat, Agus; Muhammad, Shafique; Lopes, Sergio; Leang, Rithea; Hii, Jeffrey

    2017-08-04

    Evidence on the effectiveness of low-cost, sustainable, biological vector-control tools for the Aedes mosquitoes is limited. Therefore, the purpose of this trial is to estimate the impact of guppy fish (guppies), in combination with the use of the larvicide pyriproxyfen (Sumilarv® 2MR), and Communication for Behavioral Impact (COMBI) activities to reduce entomological indices in Cambodia. In this cluster randomized controlled, superiority trial, 30 clusters comprising one or more villages each (with approximately 170 households) will be allocated, in a 1:1:1 ratio, to receive either (1) three interventions (guppies, Sumilarv® 2MR, and COMBI activities), (2) two interventions (guppies and COMBI activities), or (3) control (standard vector control). Households will be invited to participate, and entomology surveys among 40 randomly selected households per cluster will be carried out quarterly. The primary outcome will be the population density of adult female Aedes mosquitoes (i.e., number per house) trapped using adult resting collections. Secondary outcome measures will include the House Index, Container Index, Breteau Index, Pupae Per House, Pupae Per Person, mosquito infection rate, guppy fish coverage, Sumilarv® 2MR coverage, and percentage of respondents with knowledge about Aedes mosquitoes causing dengue. In the primary analysis, adult female Aedes density and mosquito infection rates will be aggregated over follow-up time points to give a single rate per cluster. This will be analyzed by negative binomial regression, yielding density ratios. This trial is expected to provide robust estimates of the intervention effect. A rigorous evaluation of these vector-control interventions is vital to developing an evidence-based dengue control strategy and to help direct government resources. Current Controlled Trials, ID: ISRCTN85307778. Registered on 25 October 2015.

  12. Random Matrices for Information Processing – A Democratic Vision

    DEFF Research Database (Denmark)

    Cakmak, Burak

    The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...

  13. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  14. Prediction of retention indices for frequently reported compounds of plant essential oils using multiple linear regression, partial least squares, and support vector machine.

    Science.gov (United States)

    Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng

    2013-08-01

    Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure stability and predictive ability. All three methods yielded acceptable models, with the best results obtained by the support vector machine based on a small number of informative descriptors; the squared cross-validation correlation coefficients were 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of two variable selection approaches, random-frog and genetic algorithm, are compared. The importance of the variables was found to be consistent when estimated from correlation coefficients in multivariate linear regression equations and selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
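Of the three modelling methods, multivariate linear regression is the simplest to sketch. The toy example below (synthetic descriptor values, not real retention data) fits an ordinary least-squares model by solving the normal equations, the same structure used in quantitative structure-retention relationship modelling.

```python
def fit_mlr(X, y):
    """Ordinary least squares for a small multiple linear regression:
    solves the normal equations (X'X) b = X'y by Gaussian elimination.
    Returns [intercept, coef_1, coef_2, ...]."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                           # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):                 # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return coef

# toy descriptors -> retention index, exactly linear so OLS recovers it
X = [(1, 2), (2, 1), (3, 3), (4, 5), (5, 2)]
y = [10 + 2 * a + 3 * c for a, c in X]
coef = fit_mlr(X, y)
```

Variable selection methods such as random-frog would operate on top of a fit like this, scoring subsets of descriptor columns by cross-validated predictive ability.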

  15. Vector Triggering Random Decrement for High Identification Accuracy

    DEFF Research Database (Denmark)

    Ibrahim, S. R.; Asmussen, J. C.; Brincker, Rune

    Using the Random Decrement (RD) technique to obtain free response estimates and combining this with time domain modal identification methods to obtain the poles and the mode shapes is acknowledged as a fast and accurate way of analysing measured responses of structures subject to ambient loads. W...

  16. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, the limit of macrotasking may be possible, with each test flight, and split test flight, being a separate task.

  17. Ensemble singular vectors and their use as additive inflation in EnKF

    Directory of Open Access Journals (Sweden)

    Shu-Chih Yang

    2015-07-01

    Given an ensemble of forecasts, it is possible to determine the leading ensemble singular vector (ESV), that is, the linear combination of the forecasts that, given the choice of the perturbation norm and forecast interval, will maximise the growth of the perturbations. Because the ESV indicates the directions of the fastest growing forecast errors, we explore the potential of applying the leading ESVs in the ensemble Kalman filter (EnKF) for correcting fast-growing errors. The ESVs are derived based on a quasi-geostrophic multi-level channel model, and data assimilation experiments are carried out under the framework of the local ensemble transform Kalman filter. We confirm that even during the early spin-up starting with random initial conditions, the final ESVs of the first analysis with a 12-h window are strongly related to the background errors. Since initial ensemble singular vectors (IESVs) grow much faster than Lyapunov vectors (LVs), and the final ensemble singular vectors (FESVs) are close to convergence to leading LVs, perturbations based on leading IESVs grow faster than those based on FESVs, and are therefore preferable as additive inflation. The IESVs are applied in the EnKF framework for constructing flow-dependent additive perturbations to inflate the analysis ensemble. Compared with using random perturbations as additive inflation, a positive impact from using ESVs is found especially in areas with large growing errors. When an EnKF is 'cold-started' from random perturbations and a poor initial condition, results indicate that using the ESVs as additive inflation has the advantage of correcting large errors so that the spin-up of the EnKF can be accelerated.
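The leading-ESV computation reduces to an eigenproblem on the ensemble perturbations. The sketch below (toy 4-member ensemble, Euclidean norm assumed, power iteration instead of a full SVD) finds the combination of ensemble members carrying the largest perturbation:

```python
def leading_esv(forecasts):
    """Leading ensemble singular vector: weights over ensemble members whose
    linear combination of perturbations (deviations from the ensemble mean)
    has maximal norm. Computed by power iteration on the Gram matrix."""
    m = len(forecasts)
    mean = [sum(col) / m for col in zip(*forecasts)]
    X = [[x - mu for x, mu in zip(f, mean)] for f in forecasts]  # perturbations
    # Gram matrix G[i][j] = <X_i, X_j>; its leading eigenvector gives the ESV
    G = [[sum(a * b for a, b in zip(Xi, Xj)) for Xj in X] for Xi in X]
    w = [1.0] + [0.0] * (m - 1)   # note: the all-ones vector is in G's null space
    for _ in range(200):
        w = [sum(G[i][j] * w[j] for j in range(m)) for i in range(m)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]
    return w

# toy 4-member ensemble over 3 grid points; member 3 carries the big perturbation
ens = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1], [5.0, 5.0, 5.0]]
w = leading_esv(ens)
```

In a real application the norm, forecast interval, and any localisation would be chosen to match the assimilation setup; this sketch fixes the simplest choices.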

  18. Improvement of product design process by knowledge value analysis

    OpenAIRE

    XU, Yang; BERNARD, Alain; PERRY, Nicolas; LAROCHE, Florent

    2013-01-01

    Nowadays, design activities remain the core issue for global product development. As knowledge is more and more integrated, effective analysis of knowledge value becomes very useful for the improvement of product design processes. This paper aims at proposing a framework of knowledge value analysis in the context of product design process. By theoretical analysis and case study, the paper illustrates how knowledge value can be calculated and how the results can help the improvement of product...

  19. Conservative rigid body dynamics by convected base vectors with implicit constraints

    DEFF Research Database (Denmark)

    Krenk, Steen; Nielsen, Martin Bjerre

    2014-01-01

    The motion of a rigid body is described using convected base vectors. Orthogonality and unit length of the base vectors are imposed by constraining the equivalent Green strain components, and the kinetic energy is represented corresponding to rigid body motion. The equations of motion are obtained via Hamilton's equations, yielding a set of differential equations without additional algebraic constraints on the base vectors. A discretized form of the equations of motion is obtained by starting from a finite time increment of the Hamiltonian, and retracing the steps of the continuous formulation in discrete form in terms of increments and mean values over each integration time increment. In this discrete form the Lagrange multipliers are given in terms of a representative value within the integration time interval, and the equations of motion are recast into a conservative mean-value and finite difference format.

  20. Vector-Tensor and Vector-Vector Decay Amplitude Analysis of B0→φK*0

    International Nuclear Information System (INIS)

    Aubert, B.; Bona, M.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P.; Poireau, V.; Tisserand, V.; Zghiche, A.; Grauges, E.; Palano, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.

    2007-01-01

    We perform an amplitude analysis of the decays B⁰ → φK₂*(1430)⁰, φK*(892)⁰, and φ(Kπ)⁰ (S-wave) with a sample of about 384×10⁶ BB̄ pairs recorded with the BABAR detector. The fractions of longitudinal polarization f_L of the vector-tensor and vector-vector decay modes are measured to be 0.853 +0.061/−0.069 ± 0.036 and 0.506 ± 0.040 ± 0.015, respectively. Overall, twelve parameters are measured for the vector-vector decay and seven parameters for the vector-tensor decay, including the branching fractions and parameters sensitive to CP violation.

  1. Numerical limitations in application of vector autoregressive modeling and Granger causality to analysis of EEG time series

    Science.gov (United States)

    Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.

    2007-11-01

    In this chapter a potential problem with applying Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases when the stability condition is violated, the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
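The stability condition referred to above is easy to check once the VAR coefficients are estimated: every eigenvalue of the coefficient matrix (or, for higher orders, of the companion matrix) must lie strictly inside the unit circle. A minimal sketch for a bivariate VAR(1), with illustrative coefficient matrices:

```python
import cmath

def var1_stable(A):
    """Stability check for a bivariate VAR(1) model x_t = A x_{t-1} + e_t:
    the process is covariance-stationary iff every eigenvalue of A lies
    strictly inside the unit circle. For a 2x2 matrix the eigenvalues
    follow from the characteristic quadratic via trace and determinant."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)      # handles complex eigenvalues
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return max(abs(lam1), abs(lam2)) < 1.0

stable = var1_stable([[0.5, 0.1], [0.0, 0.4]])      # eigenvalues 0.5, 0.4
explosive = var1_stable([[1.1, 0.0], [0.0, 0.3]])   # eigenvalue 1.1: unstable
```

An eigenvalue on the unit circle gives the random-walk-like behavior mentioned in the abstract; one outside it gives an explosive process, invalidating Granger-causality inference built on the fitted model.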

  2. Choice by value encoding and value construction : processes of loss aversion

    NARCIS (Netherlands)

    Willemsen, M.C.; Boeckenholt, U.; Johnson, E.J.

    2011-01-01

    Loss aversion and reference dependence are 2 keystones of behavioral theories of choice, but little is known about their underlying cognitive processes. We suggest an additional account for loss aversion that supplements the current account of the value encoding of attributes as gains or losses.

  3. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    This work aims at advancing ultrasonic vector flow estimation and bringing it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane, using the transverse oscillation method combined with a 1024 channel 2-D matrix array, is presented. The proposed method is validated both through phantom measurements and simulations; the high channel count, however, hampers the task of real-time processing. In a second study, some of the issues with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated, both through simulations and via experimental setups in various flow conditions, whether this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...

  4. Neutral currents and electromagnetic renormalization of the vector part of neutrino weak interaction

    International Nuclear Information System (INIS)

    Folomeshkin, V.N.

    1976-01-01

    The nature and properties of neutral currents in neutrino processes at high energies are theoretically investigated. Electromagnetic renormalization of diagonal ((ν_e e)(ν_e e) and (ν_μ μ)(ν_μ μ)) and nondiagonal ((ν_e μ)(ν_e μ)) interactions is discussed in terms of the universal four-fermion interaction model. It is shown that electromagnetic renormalization of the neutrino vector interaction causes an effective appearance of vector neutral currents with photon isotopic structure. The value of the interaction constant is unambiguously defined by the ratio of the total cross-section for electron-positron annihilation into muon pairs. It is pointed out that the interaction (renormalization) constants for neutral currents are always smaller than the interaction constants for charged currents

  5. Instantaneous local wave vector estimation from multi-spacecraft measurements using few spatial points

    Directory of Open Access Journals (Sweden)

    T. D. Carozzi

    2004-07-01

    Full Text Available We introduce a technique to determine instantaneous local properties of waves based on discrete-time sampled, real-valued measurements from 4 or more spatial points. The technique is a generalisation to the spatial domain of the notion of instantaneous frequency used in signal processing. The quantities derived by our technique are closely related to those used in geometrical optics, namely the local wave vector and instantaneous phase velocity. Thus, this experimental technique complements ray-tracing. We provide example applications of the technique to electric field and potential data from the EFW instrument on Cluster. Cluster is the first space mission for which direct determination of the full 3-dimensional local wave vector is possible, as described here.
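The time-domain notion being generalized here, instantaneous frequency, can be illustrated in one dimension with the analytic signal; this sketch is only a 1-D analogue of the multi-point spatial technique described in the record:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f0 = 50.0                            # true frequency, Hz
x = np.cos(2 * np.pi * f0 * t)       # real-valued measurement

analytic = hilbert(x)                          # x + i * H[x]
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

# Away from the record edges the estimate recovers f0
print(round(float(np.median(inst_freq[100:-100])), 1))  # 50.0
```

The spatial technique in the record replaces the time derivative of phase with spatial phase differences across 4 or more measurement points, yielding a local wave vector instead of a scalar frequency.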

  6. Determination of key parameters of vector multifractal vector fields

    Science.gov (United States)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2017-12-01

    For too long, multifractal analyses and simulations have been restricted to scalar-valued fields (Schertzer and Tchiguirinskaia, 2017a,b). For instance, wind velocity multifractality has mostly been analysed in terms of scalar structure functions and the scalar energy flux. This restriction has had the unfortunate consequence that multifractals have not been applicable to their full extent in geophysics, even though geophysics inspired them. Indeed, a key question in geophysics is the complexity of the interactions between various fields or their components. Nevertheless, sophisticated methods have been developed to determine the key parameters of scalar-valued fields. In this communication, we first present the vector extensions of the universal multifractal analysis techniques to multifractals whose generator belongs to a Lévy-Clifford algebra (Schertzer and Tchiguirinskaia, 2015). We point out further extensions, noting the increased complexity; for instance, the (scalar) index of multifractality becomes a matrix. Schertzer, D. and Tchiguirinskaia, I. (2015) 'Multifractal vector fields and stochastic Clifford algebra', Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p. 123127. doi: 10.1063/1.4937364. Schertzer, D. and Tchiguirinskaia, I. (2017a) 'An Introduction to Multifractals and Scale Symmetry Groups', in Ghanbarian, B. and Hunt, A. (eds) Fractals: Concepts and Applications in Geosciences. CRC Press, p. (in press). Schertzer, D. and Tchiguirinskaia, I. (2017b) 'Pandora Box of Multifractals: Barely Open?', in Tsonis, A. A. (ed.) 30 Years of Nonlinear Dynamics in Geophysics. Berlin: Springer, p. (in press).

  7. Perbandingan Simple Logistic Classifier dengan Support Vector Machine dalam Memprediksi Kemenangan Atlet [Comparison of Simple Logistic Classifier and Support Vector Machine in Predicting Athletes' Wins]

    Directory of Open Access Journals (Sweden)

    Ednawati Rainarli

    2017-10-01

    Full Text Available A coach must be able to select which athlete has a good prospect of winning a game. Many aspects influence whether an athlete wins, so it is not easy for a coach to decide. This research compares Simple Logistic Classifier (SLC) and Support Vector Machine (SVM) applied to predicting an athlete's victory based on health and physical condition records. The data were obtained from 28 sports. The accuracies of SLC and SVM are 80% and 88%, while their processing times are 1.6 seconds and 0.2 seconds, respectively. The results show that SVM is superior to SLC in both processing speed and accuracy. The 24 features used in the classification process were also tested. Based on this test, the feature selection process can decrease accuracy, which suggests that all features used in this research influence the prediction of an athlete's victory.
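Since the athlete dataset is not available, a comparison of this kind can be sketched with scikit-learn on synthetic data, using LogisticRegression as a stand-in for the Simple Logistic Classifier:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 24 health/physical features per athlete
X, y = make_classification(n_samples=300, n_features=24,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accs = {}
for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("svm", SVC(kernel="rbf"))]:
    # Fit on the training split, score accuracy on the held-out split
    accs[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
print(accs)
```

The accuracy and timing figures reported in the abstract are specific to the paper's data; this sketch only reproduces the experimental protocol.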

  8. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  9. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in "Defining a vector" to his final comments on "What then is a vector?" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p...

  10. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2005-01-01

    This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections

  11. Parallel/vector algorithms for the spherical SN transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.; Mattis, R.E.

    1990-01-01

    This paper discusses vector and parallel processing of a 1-D curvilinear (i.e., spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (in wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the potential for the development of faster vector and parallel algorithms for multidimensional curvilinear geometries. (author)

  12. Acoustic communication in insect disease vectors

    Directory of Open Access Journals (Sweden)

    Felipe de Mello Vigoder

    2013-01-01

    Full Text Available Acoustic signalling has been extensively studied in insect species, which has led to a better understanding of sexual communication, sexual selection and modes of speciation. The significance of acoustic signals for a blood-sucking insect was first reported in the XIX century by Christopher Johnston, studying the hearing organs of mosquitoes, but has received relatively little attention in other disease vectors until recently. Acoustic signals are often associated with mating behaviour and sexual selection and changes in signalling can lead to rapid evolutionary divergence and may ultimately contribute to the process of speciation. Songs can also have implications for the success of novel methods of disease control such as determining the mating competitiveness of modified insects used for mass-release control programs. Species-specific sound “signatures” may help identify incipient species within species complexes that may be of epidemiological significance, e.g. of higher vectorial capacity, thereby enabling the application of more focussed control measures to optimise the reduction of pathogen transmission. Although the study of acoustic communication in insect vectors has been relatively limited, this review of research demonstrates their value as models for understanding both the functional and evolutionary significance of acoustic communication in insects.

  13. Single vector leptoquark production in e+e- and γe colliders

    International Nuclear Information System (INIS)

    Aliev, T.M.; Iltan, E.; Pak, N.K.

    1996-01-01

    We consider single vector leptoquark (LQ) production at e⁺e⁻ and γe colliders for two values of the center-of-mass energy, √s = 500 GeV and √s = 1000 GeV, in a model-independent framework. We find that the cross sections for single gauge and nongauge vector LQ production are almost equal. The discovery limit for single vector LQ production is obtained for both cases. It is shown that in e⁺e⁻ collisions single vector LQ production is more favorable than vector LQ pair production if the Yukawa coupling constant is κ ∼ 1. copyright 1996 The American Physical Society

  14. Integration profile and safety of an adenovirus hybrid-vector utilizing hyperactive sleeping beauty transposase for somatic integration.

    Directory of Open Access Journals (Sweden)

    Wenli Zhang

    Full Text Available We recently developed adenovirus/transposase hybrid-vectors utilizing the previously described hyperactive Sleeping Beauty (SB transposase HSB5 for somatic integration and we could show stabilized transgene expression in mice and a canine model for hemophilia B. However, the safety profile of these hybrid-vectors with respect to vector dose and genotoxicity remains to be investigated. Herein, we evaluated this hybrid-vector system in C57Bl/6 mice with escalating vector dose settings. We found that in all mice which received the hyperactive SB transposase, transgene expression levels were stabilized in a dose-dependent manner and that the highest vector dose was accompanied by fatalities in mice. To analyze potential genotoxic side-effects due to somatic integration into host chromosomes, we performed a genome-wide integration site analysis using linker-mediated PCR (LM-PCR and linear amplification-mediated PCR (LAM-PCR. Analysis of genomic DNA samples obtained from HSB5 treated female and male mice revealed a total of 1327 unique transposition events. Overall the chromosomal distribution pattern was close-to-random and we observed a random integration profile with respect to integration into gene and non-gene areas. Notably, when using the LM-PCR protocol, 27 extra-chromosomal integration events were identified, most likely caused by transposon excision and subsequent transposition into the delivered adenoviral vector genome. In total, this study provides a careful evaluation of the safety profile of adenovirus/Sleeping Beauty transposase hybrid-vectors. The obtained information will be useful when designing future preclinical studies utilizing hybrid-vectors in small and large animal models.

  15. Increasing the computational efficiency of digital cross correlation by a vectorization method

    Science.gov (United States)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with the performance of looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high-speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
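The looped-versus-vectorized contrast can be reproduced in NumPy as a stand-in for MATLAB; the two implementations below compute the same full-lag cross-correlation, and the measured speedup will of course vary by platform:

```python
import numpy as np

def xcorr_loop(a, b):
    """Naive looped full-lag cross-correlation of real sequences."""
    br = b[::-1]                     # correlation = convolution with reversed b
    n, m = len(a), len(br)
    out = np.zeros(n + m - 1)
    for k in range(n + m - 1):
        for i in range(n):
            j = k - i
            if 0 <= j < m:
                out[k] += a[i] * br[j]
    return out

def xcorr_vec(a, b):
    """Vectorized equivalent using NumPy's built-in correlation."""
    return np.correlate(a, b, mode="full")

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(32)
print(np.allclose(xcorr_loop(a, b), xcorr_vec(a, b)))  # True
```

Timing both with `timeit` on longer signals shows the same qualitative gap the paper reports for MATLAB: the vectorized form dispatches the inner loops to optimized compiled code.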

  16. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    Science.gov (United States)

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictive power. This paper proposes a method, called SVMs-DS, that employs a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process, handling a high number of input features with a small sampling data set. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by Dempster's rules; the decision threshold to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
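The combination step of SVMs-DS rests on Dempster's rule. A generic sketch over a two-element frame {good, bad}, with illustrative masses rather than ones derived from the paper's SVM models:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts mapping frozenset -> mass.

    Mass assigned to empty intersections (conflict) is discarded
    and the remainder renormalized, per Dempster's rule.
    """
    combined = {}
    conflict = 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

GOOD, BAD = frozenset({"good"}), frozenset({"bad"})
THETA = GOOD | BAD  # the full frame, i.e. ignorance

# Illustrative BPAs, e.g. built from two classifiers' confidence scores
m1 = {GOOD: 0.7, BAD: 0.2, THETA: 0.1}
m2 = {GOOD: 0.6, BAD: 0.3, THETA: 0.1}
m12 = dempster_combine(m1, m2)
print({tuple(sorted(s)): round(w, 3) for s, w in m12.items()})
```

Because both sources lean toward "good", the combined mass on "good" exceeds either input's, which is the evidence-pooling behavior the paper exploits.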

  17. Quantum nonlinear lattices and coherent state vectors

    DEFF Research Database (Denmark)

    Ellinas, Demosthenes; Johansson, M.; Christiansen, Peter Leth

    1999-01-01

    ... (FP) model. Based on the respective dynamical symmetries of the models, a method is put forward which, by use of the associated boson and spin coherent state vectors (CSV) and a factorization ansatz for the solution of the Schrodinger equation, leads to quasiclassical Hamiltonian equations of motion ... for the state vectors invokes the study of the Riemannian and symplectic geometry of the CSV manifolds as generalized phase spaces. Next, we investigate analytically and numerically the behavior of mean values and uncertainties of some physically interesting observables as well as the modifications ... state vectors, and accounts for the quantum correlations of the lattice sites that develop during the time evolution of the systems. (C) 1999 Elsevier Science B.V. All rights reserved.

  18. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena, realized also as vector variables (independent vector variables) and/or scalar variables, that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving for the coefficients of the independent vector variables (explanatory variables) also as vectors; hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
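The record's core device, encoding 2-D vectors as complex numbers and solving least squares for complex-valued coefficients, can be sketched with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Independent vector variable encoded as complex numbers (x + iy)
z = rng.standard_normal(50) + 1j * rng.standard_normal(50)

# True vector coefficient: scaling by 2 combined with a 30-degree rotation
beta_true = 2.0 * np.exp(1j * np.pi / 6)
intercept_true = 0.5 - 0.25j

# Dependent vector variable (noise-free for clarity)
w = intercept_true + beta_true * z

# Complex least squares with design matrix [1, z]
X = np.column_stack([np.ones_like(z), z])
coef, *_ = np.linalg.lstsq(X, w, rcond=None)

print(np.allclose(coef, [intercept_true, beta_true]))  # True
```

A complex coefficient multiplying a complex observation is exactly a rotation-plus-scaling of the 2-D vector, which is why the fitted coefficients are themselves vectors rather than scalars.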

  19. Meta-analysis of the effects of insect vector saliva on host immune responses and infection of vector-transmitted pathogens: a focus on leishmaniasis.

    Directory of Open Access Journals (Sweden)

    Brittany Ockenfels

    2014-10-01

    Full Text Available A meta-analysis of the effects of vector saliva on the immune response and progression of vector-transmitted disease, specifically with regard to pathology, infection level, and host cytokine levels, was conducted. Infection in the absence or presence of saliva in naïve mice was compared. In addition, infection in mice pre-exposed to uninfected vector saliva was compared to infection in unexposed mice. To control for differences in vector and pathogen species, mouse strain, and experimental design, a random effects model was used to compare the ratio of the natural log of the experimental to the control means of the studies. Saliva was demonstrated to enhance pathology, infection level, and the production of Th2 cytokines (IL-4 and IL-10) in naïve mice. This effect was observed across vector/pathogen pairings, whether natural or unnatural, and with single salivary proteins used as a proxy for whole saliva. Saliva pre-exposure was determined to result in less severe leishmaniasis pathology when compared with unexposed mice infected either in the presence or absence of sand fly saliva. The results of further analyses were not significant, but demonstrated trends toward protection and IFN-γ elevation for pre-exposed mice.
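The effect size used here, the natural log of the ratio of experimental to control means, can be pooled with a random-effects model; a compact DerSimonian-Laird sketch with invented study summaries (not data from the meta-analysis):

```python
import math

# Invented per-study summaries: (experimental mean, control mean,
# sampling variance of the log-ratio)
studies = [(4.0, 2.0, 0.05), (3.0, 2.5, 0.10), (5.0, 2.0, 0.08)]

y = [math.log(me / mc) for me, mc, _ in studies]  # log response ratios
v = [var for *_, var in studies]

# DerSimonian-Laird estimate of the between-study variance tau^2
w_fixed = [1.0 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w_fixed, y)) / sum(w_fixed)
Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w_fixed, y))
c = sum(w_fixed) - sum(wi ** 2 for wi in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooled estimate: weights shrink toward equality as tau^2 grows
w_re = [1.0 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
print(round(pooled, 3))
```

A pooled log-ratio above zero corresponds to the "saliva enhances pathology/infection" direction reported in the abstract.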

  20. Meta-analysis of the effects of insect vector saliva on host immune responses and infection of vector-transmitted pathogens: a focus on leishmaniasis.

    Science.gov (United States)

    Ockenfels, Brittany; Michael, Edwin; McDowell, Mary Ann

    2014-10-01

    A meta-analysis of the effects of vector saliva on the immune response and progression of vector-transmitted disease, specifically with regard to pathology, infection level, and host cytokine levels was conducted. Infection in the absence or presence of saliva in naïve mice was compared. In addition, infection in mice pre-exposed to uninfected vector saliva was compared to infection in unexposed mice. To control for differences in vector and pathogen species, mouse strain, and experimental design, a random effects model was used to compare the ratio of the natural log of the experimental to the control means of the studies. Saliva was demonstrated to enhance pathology, infection level, and the production of Th2 cytokines (IL-4 and IL-10) in naïve mice. This effect was observed across vector/pathogen pairings, whether natural or unnatural, and with single salivary proteins used as a proxy for whole saliva. Saliva pre-exposure was determined to result in less severe leishmaniasis pathology when compared with unexposed mice infected either in the presence or absence of sand fly saliva. The results of further analyses were not significant, but demonstrated trends toward protection and IFN-γ elevation for pre-exposed mice.

  1. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...
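xorgens belongs to the xorshift family of generators. A minimal sketch of Marsaglia's xorshift64 (not the exact xorgens algorithm, which uses a larger state and further refinements) illustrates the state-transition idea that makes the family cheap to run per GPU thread:

```python
MASK64 = (1 << 64) - 1

def xorshift64(state):
    """One step of Marsaglia's xorshift64; returns (output, new_state).

    Each step XORs the state with shifted copies of itself; the map is
    a bijection on nonzero 64-bit states, giving period 2**64 - 1.
    """
    x = state
    x ^= (x << 13) & MASK64
    x ^= x >> 7
    x ^= (x << 17) & MASK64
    return x, x

state = 0x9E3779B97F4A7C15  # any nonzero seed
outputs = []
for _ in range(5):
    value, state = xorshift64(state)
    outputs.append(value)

# Outputs stay in the 64-bit range and never hit zero
print(all(0 < v <= MASK64 for v in outputs))  # True
```

Because the update touches only a few machine words and has no shared state, one independent stream per thread maps naturally onto GPU architectures, which is the property the paper tunes for.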

  2. Integrating Effectiveness, Transparency and Fairness into a Value Elicitation Process

    International Nuclear Information System (INIS)

    Fortier, Michael; Sheng, Grant

    2001-01-01

    As part of the evaluation of Canada's proposed nuclear fuel waste disposal concept, the Federal Environmental Assessment and Review Panel (FEARP) undertook an extensive, nation-wide public hearing process. The hearing process itself was contentious and has been criticized on numerous grounds. It is our contention that the fundamental weakness of the FEARP process was that it was designed as an information-based forum, as opposed to a value-based forum. Our observations and analyses of these hearings indicate that the FEARP envisioned a different purpose and a different outcome of this process than the public in general. As a result, public acceptability for the Concept or even the assessment process itself was not garnered due to a failure in the process to identify, address and incorporate values. To address this, we proposed a seven-step value elicitation process specifically designed to assess public acceptability of the disposal concept. An unfortunate consequence of the flawed public consultation process employed by the FEARP is that it is unclear exactly what it is the public finds unacceptable. Both from discussions and observations, it is difficult to ascertain whether the unacceptability lies with the Concept itself and/or the process by which the Concept was to be assessed. As a result, there is uncertainty as to what questions should be asked and how the 'unacceptability' should be addressed. In other words, does Canada need a new concept? Does Canada need to develop a mechanism for assessing the public acceptability of the Concept? Or both? The inability of the current process to answer such fundamental questions demonstrates the importance of developing an effective public acceptability and consultation process. We submit that, to create an acceptable Public Participation mechanism, it is necessary to found the construction of such a mechanism on the principles of effectiveness, transparency and fairness.
Moreover, we believe that the larger decision

  3. Integrating Effectiveness, Transparency and Fairness into a Value Elicitation Process

    Energy Technology Data Exchange (ETDEWEB)

    Fortier, Michael; Sheng, Grant [York Univ., Toronto, ON (Canada). Faculty of Environmental Studies]; Collins, Alison [York Centre for Applied Sustainability, Toronto, ON (Canada)]

    2001-07-01

    As part of the evaluation of Canada's proposed nuclear fuel waste disposal concept, the Federal Environmental Assessment and Review Panel (FEARP) undertook an extensive, nation-wide public hearing process. The hearing process itself was contentious and has been criticized on numerous grounds. It is our contention that the fundamental weakness of the FEARP process was that it was designed as an information-based forum, as opposed to a value-based forum. Our observations and analyses of these hearings indicate that the FEARP envisioned a different purpose and a different outcome of this process than the public in general. As a result, public acceptability for the Concept or even the assessment process itself was not garnered due to a failure in the process to identify, address and incorporate values. To address this, we proposed a seven-step value elicitation process specifically designed to assess public acceptability of the disposal concept. An unfortunate consequence of the flawed public consultation process employed by the FEARP is that it is unclear exactly what it is the public finds unacceptable. Both from discussions and observations, it is difficult to ascertain whether the unacceptability lies with the Concept itself and/or the process by which the Concept was to be assessed. As a result, there is uncertainty as to what questions should be asked and how the 'unacceptability' should be addressed. In other words, does Canada need a new concept? Does Canada need to develop a mechanism for assessing the public acceptability of the Concept? Or both? The inability of the current process to answer such fundamental questions demonstrates the importance of developing an effective public acceptability and consultation process. We submit that, to create an acceptable Public Participation mechanism, it is necessary to found the construction of such a mechanism on the principles of effectiveness, transparency and fairness. Moreover, we believe that

  4. MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS

    Directory of Open Access Journals (Sweden)

    A. A. Zolotin

    2015-07-01

    Full Text Available Posteriori inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the task of describing local posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the use of the tensor product of matrices, the Kronecker degree and the Hadamard product. Matrix equations for calculating posterior probability vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Equations of the same type have already been discussed within the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in knowledge patterns on the ideals of conjuncts. During the synthesis and development of matrix-vector equations on quanta proposition probability vectors, a number of earlier results concerning normalizing factors in posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence (deterministic, stochastic and inaccurate) combined with scalar and interval estimates of the probability of truth of propositional formulas in the knowledge patterns. Linear programming problems are formed; their solution gives the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This sort of description of posteriori inference makes it possible to extend the set of knowledge pattern types that can be used in local and global posteriori inference, as well as to simplify complex software implementation by using existing third-party libraries that effectively support the representation and processing of matrices and vectors when

  5. Vector manifestation and matter formed in relativistic heavy-ion processes

    International Nuclear Information System (INIS)

    Brown, Gerald E.; Holt, Jeremy W.; Lee, Chang-Hwan; Rho, Mannque

    2007-01-01

    Recent developments in our description of RHIC and related heavy-ion phenomena in terms of hidden local symmetry theories are reviewed, with a focus on the novel nearly massless states in the vicinity (both below and above) of the chiral restoration temperature T_c. We present complementary and intuitive ways to understand both Harada-Yamawaki's vector manifestation structure and Brown-Rho scaling, which are closely related, in terms of the 'melting' of soft glues observed in lattice calculations, and join the massless modes that arise in the vector manifestation (in the chiral limit) just below T_c to tightly bound massless states above T_c. This phenomenon may be interpreted in terms of the Beg-Shei theorem. It is suggested that hidden local symmetry theories arise naturally in holographic dual QCD from string theory, and a clear understanding of what really happens near the critical point could come from a deeper understanding of the dual bulk theory. Other matters discussed are the relation between Brown-Rho scaling and Landau Fermi-liquid fixed point parameters at the equilibrium density, its implications for 'low-mass dileptons' produced in heavy-ion collisions, the reconstruction of vector mesons in peripheral collisions, the pion velocity in the vicinity of the chiral transition point, kaon condensation viewed from the VM fixed point, nuclear physics with Brown-Rho scaling, and the generic feature of dropping masses at the RGE fixed points in generalized hidden local symmetry theories

  6. An empirical test of pseudo random number generators by means of an exponential decaying process

    International Nuclear Information System (INIS)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A.; Mora F, L.E.

    2007-01-01

    Empirical tests for pseudo random number generators based on the use of processes or physical models have been successfully used and are considered as complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
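The flavor of such a test can be sketched by drawing exponential decay times via inverse-transform sampling with Python's built-in Mersenne Twister and checking the empirical survival fraction against the analytic law; the record's actual methodology is more elaborate:

```python
import math
import random

random.seed(12345)
lam = 0.5          # decay constant
n = 100_000        # number of simulated decays

# Inverse-transform sampling: t = -ln(U) / lambda, with U in (0, 1]
times = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

# The fraction of decays surviving past t should track exp(-lam * t);
# a systematic deviation would flag a defective generator.
t_check = 2.0
surviving = sum(t > t_check for t in times) / n
expected = math.exp(-lam * t_check)
print(abs(surviving - expected) < 0.01)
```

A full test would repeat this over many time points and quantify the deviations with, say, a chi-square statistic, but the comparison above is the essential ingredient.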

  7. Thrust Vectoring of a Continuous Rotating Detonation Engine by Changing the Local Injection Pressure

    International Nuclear Information System (INIS)

    Liu Shi-Jie; Lin Zhi-Yong; Sun Ming-Bo; Liu Wei-Dong

    2011-01-01

    The thrust vectoring ability of a continuous rotating detonation engine is numerically investigated, which is realized via increasing local injection stagnation pressure of half of the simulation domain compared to the other half. Under the homogeneous injection condition, both the flow-field structure and the detonation wave propagation process are analyzed. Due to the same injection condition along the inlet boundary, the outlines of fresh gas zones at different moments are similar to each other. The main flow-field features under thrust vectoring cases are similar to that under the baseline condition. However, due to the heterogeneous injection system, both the height of the fresh gas zone and the pressure value of the fresh gas in the high injection pressure zone are larger than that in the low injection pressure zone. Thus the average pressure in half of the engine is larger than that in the other half and the thrust vectoring adjustment is realized. (fundamental areas of phenomenology(including applications))

  8. Probabilistic Extraction Of Vectors In PIV

    Science.gov (United States)

    Humphreys, William M., Jr.

    1994-01-01

    Probabilistic technique for extraction of velocity vectors in particle-image velocimetry (PIV) implemented with much less computation. Double-exposure photograph of particles in flow illuminated by sheet of light provides data on velocity field of flow. Photograph converted into video image then digitized and processed by computer into velocity-field data. Velocity vectors in interrogation region chosen from magnitude and angle histograms constructed from centroid map of region.

  9. Generation and monitoring of a discrete stable random process

    CERN Document Server

    Hopcraft, K I; Matthews, J O

    2002-01-01

    A discrete stochastic process with a stationary power-law distribution is obtained from a death-multiple immigration population model. Emigrations from the population form a random series of events which are monitored by a counting process with finite dynamic range and response time. It is shown that the power-law behaviour of the population is manifested in the intermittent behaviour of the series of events. (letter to the editor)

  10. Scaling behaviour of randomly alternating surface growth processes

    CERN Document Server

    Raychaudhuri, S

    2002-01-01

    The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depe...

  11. Kochen-Specker vectors

    International Nuclear Information System (INIS)

    Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

    2005-01-01

    We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension, as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space, H_n, n ≥ 3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found.

  12. Organizational Development: Values, Process, and Technology.

    Science.gov (United States)

    Margulies, Newton; Raia, Anthony P.

    The current state-of-the-art of organizational development is the focus of this book. The five parts into which the book is divided are as follows: Part One--Introduction (Organizational Development in Perspective--the nature, values, process, and technology of organizational development); Part Two--The Components of Organizational Developments…

  13. The Place of Values in Counselling Process | O.S. | Nigerian Journal ...

    African Journals Online (AJOL)

    This study examined the place of value in counselling process. Counselling process is defined as the effortful steps taken to effect value oriented professional redirection of defective behaviour attributes in clients. In categorising the values, five typologies were identified. These are the values subsumed in nature as it ...

  14. Holographic vector superconductor in Gauss–Bonnet gravity

    Directory of Open Access Journals (Sweden)

    Jun-Wang Lu

    2016-02-01

    In the probe limit, we numerically study holographic p-wave superconductor phase transitions in the higher curvature theory. Concretely, we study the influence of the Gauss–Bonnet parameter α on the Maxwell complex vector (MCV) model in the five-dimensional Gauss–Bonnet–AdS black hole and soliton backgrounds, respectively. In both backgrounds, increasing the Gauss–Bonnet parameter α and the dimension of the vector operator Δ inhibits the vector condensate. In the black hole background, the condensate quickly saturates a stable value at lower temperature. Moreover, both the stable value of the condensate and the ratio ωg/Tc increase with α. In the soliton background, the location of the second pole of the imaginary part increases with α, which implies that the energy of the quasiparticle excitation increases with the improving higher curvature correction. In addition, the influences of the Gauss–Bonnet correction on the MCV model are similar to those on the SU(2) p-wave model, which confirms that the MCV model is, to some extent, a generalization of the SU(2) Yang–Mills model even without an applied magnetic field.

  15. Bioreactor production of recombinant herpes simplex virus vectors.

    Science.gov (United States)

    Knop, David R; Harrell, Heather

    2007-01-01

    Serotypical application of herpes simplex virus (HSV) vectors to gene therapy (type 1) and prophylactic vaccines (types 1 and 2) has garnered substantial clinical interest recently. HSV vectors and amplicons have also been employed as helper virus constructs for manufacture of the dependovirus adeno-associated virus (AAV). Large quantities of infectious HSV stocks are requisite for these therapeutic applications, requiring a scalable vector manufacturing and processing platform comprised of unit operations which accommodate the fragility of HSV. In this study, production of a replication deficient rHSV-1 vector bearing the rep and cap genes of AAV-2 (denoted rHSV-rep2/cap2) was investigated. Adaptation of rHSV production from T225 flasks to a packed bed, fed-batch bioreactor permitted an 1100-fold increment in total vector production without a decrease in specific vector yield (pfu/cell). The fed-batch bioreactor system afforded a rHSV-rep2/cap2 vector recovery of 2.8 x 10(12) pfu. The recovered vector was concentrated by tangential flow filtration (TFF), permitting vector stocks to be formulated at greater than 1.5 x 10(9) pfu/mL.

  16. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  17. Growth rate for the expected value of a generalized random Fibonacci sequence

    International Nuclear Information System (INIS)

    Janvresse, Elise; De la Rue, Thierry; Rittaud, BenoIt

    2009-01-01

    We study the behaviour of generalized random Fibonacci sequences defined by the relation g_n = |λg_{n-1} ± g_{n-2}|, where the ± sign is given by tossing an unbalanced coin, giving probability p to the + sign. We prove that the expected value of g_n grows exponentially fast for any 0 < p ≤ 1 when λ ≥ 2, and for any p > (2 - λ)/4 when λ is of the form 2cos(π/k) for some fixed integer k ≥ 3. In both cases, we give an algebraic expression for the growth rate.
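The growth of such sequences can be probed numerically. Below is a Monte Carlo sketch that estimates the exponential growth rate (1/n)·log g_n, with periodic renormalization to avoid floating-point overflow; the parameter values are illustrative and the estimator itself is not from the paper:

```python
import math
import random

def random_fibonacci_growth(lam, p, n, rng):
    """Estimate the growth rate of g_k = |lam*g_{k-1} ± g_{k-2}|, where '+'
    is chosen with probability p. The magnitude is tracked in log space via
    renormalization, which is valid because the recursion is homogeneous."""
    g_prev, g = 1.0, 1.0
    log_scale = 0.0
    for _ in range(n):
        sign = 1.0 if rng.random() < p else -1.0
        g_prev, g = g, abs(lam * g + sign * g_prev)
        if g > 1e100:              # rescale both terms to avoid overflow
            log_scale += math.log(g)
            g_prev /= g
            g = 1.0
    return (log_scale + math.log(max(g, 1e-300))) / n

rng = random.Random(7)
# With p = 1 the recursion is deterministic: g_k = 2*g_{k-1} + g_{k-2}.
rate = random_fibonacci_growth(2.0, 1.0, 10000, rng)
```

With p = 1 and λ = 2 the estimate should approach log(1 + √2) ≈ 0.8814, the growth rate of the Pell-type recursion; for 0 < p < 1 repeated runs give a Monte Carlo estimate of the almost-sure growth rate rather than the expected-value growth rate studied in the paper.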

  18. The impact of processing delay on the exposure index value

    Science.gov (United States)

    Butler, M. L.; Brennan, P. C.; Last, J.; Rainford, L.

    2010-04-01

    Digital radiography poses the risk of unnoticed increases in patient dose. Manufacturers responded to this by offering an exposure index (EI) value to clinicians. Use of the EI value in clinical practice is encouraged by the American College of Radiology and the American Association of Physicists in Medicine. This study assesses the impact of processing delay on the EI value. An anthropomorphic phantom was used to simulate three radiographic examinations: skull, pelvis and chest. For each examination, the phantom was placed in the optimal position and exposures were chosen in accordance with international guidelines. A Carestream (previously Kodak) computed radiography system was used. The imaging plate was exposed, and processing was delayed in various increments from 30 seconds to 24 hours, representing common delays in clinical practice. The EI value was recorded for each exposure. The EI value decreased considerably with increasing processing delay: it decreased by 100 within a 25-minute delay for the chest, and within 20 minutes for the skull and pelvis. Within 1 hour, the EI value had fallen by 180, 160 and 100 for the chest, skull and pelvis respectively. After 24 hours, the value had decreased by 370, 350 and 340 for the chest, skull and pelvis respectively, representing, to the clinician, more than a halving of the apparent exposure to the detector in Carestream systems. The assessment of images using EI values should be approached with caution in clinical practice when delays in processing occur. The use of EI values as a feedback mechanism is questioned.

  19. The Need to Assess Public Values in a Site Selection Process

    International Nuclear Information System (INIS)

    Sheng, Grant; Fortier, Michael

    2001-01-01

    Siting a nuclear fuel waste disposal facility is highly problematic for both technical and nontechnical reasons. The majority of countries using nuclear energy and many in the scientific community favour burying the spent fuel deep in a stable geological formation. It is our contention that site selection of a disposal facility must consider social, political, spatial and scientific perspectives in a comprehensive and integrated fashion in order to achieve a successful process. Moreover, we submit that people's values must be explicitly recognized and be taken into account through a formalized process during deliberations on the disposal concept, the process of evaluation of the concept, and the site selection process. The purpose of this paper is: (1) to identify the importance of recognizing people's values in the process of determining 'public acceptability', (2) to outline a possible framework by which public acceptability can be gauged through a formalized process of value elicitation, and (3) to introduce a novel method by which a web-based geographic information systems (GIS) application can be used as a tool for value elicitation. In order to assess effectively the public acceptability of Canada's nuclear waste disposal concept, we submit that such a process must examine the underlying values that are held by the public. Moreover, the evaluation process of Canada's concept demonstrates that an acceptable process is a pre-condition for an acceptable result, although such a process does not necessarily guarantee an acceptable result. However, the consequences of a flawed process can be very significant, as shown by Canada's experience. This paper also provides a brief overview of a value elicitation process that, in our opinion, could be used to assess the public acceptability of the Concept. We also describe how a web-based GIS application could be used to infer some of the underlying values held by the public

  20. Order out of Randomness: Self-Organization Processes in Astrophysics

    Science.gov (United States)

    Aschwanden, Markus J.; Scholkmann, Felix; Béthune, William; Schmutz, Werner; Abramenko, Valentina; Cheung, Mark C. M.; Müller, Daniel; Benz, Arnold; Chernov, Guennadi; Kritsuk, Alexei G.; Scargle, Jeffrey D.; Melatos, Andrew; Wagoner, Robert V.; Trimble, Virginia; Green, William H.

    2018-03-01

    Self-organization is a property of dissipative nonlinear processes that are governed by a global driving force and a local positive feedback mechanism, which creates regular geometric and/or temporal patterns and decreases the entropy locally, in contrast to random processes. Here we investigate for the first time a comprehensive set of 17 self-organization processes that operate in planetary physics, solar physics, stellar physics, galactic physics, and cosmology. Self-organizing systems create spontaneous "order out of randomness" during the evolution from an initially disordered system to an ordered quasi-stationary system, mostly by quasi-periodic limit-cycle dynamics, but also by harmonic (mechanical or gyromagnetic) resonances. The global driving force can be due to gravity, electromagnetic forces, mechanical forces (e.g., rotation or differential rotation), thermal pressure, or acceleration of nonthermal particles, while the positive feedback mechanism is often an instability, such as the magneto-rotational (Balbus-Hawley) instability, the convective (Rayleigh-Bénard) instability, turbulence, vortex attraction, magnetic reconnection, plasma condensation, or a loss-cone instability. Physical models of astrophysical self-organization processes require hydrodynamic, magneto-hydrodynamic (MHD), plasma, or N-body simulations. Analytical formulations of self-organizing systems generally involve coupled differential equations with limit-cycle solutions of the Lotka-Volterra or Hopf-bifurcation type.
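The coupled-equation dynamics mentioned above can be illustrated with the classic Lotka-Volterra system; the sketch below integrates it with a small RK4 stepper (the parameter values are illustrative and not tied to any specific astrophysical model):

```python
def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt, steps):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y
    with a fixed-step 4th-order Runge-Kutta scheme."""
    def f(x, y):
        return alpha * x - beta * x * y, delta * x * y - gamma * y

    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra(10.0, 5.0, 1.1, 0.4, 0.1, 0.4, 0.01, 5000)
```

Along the resulting trajectory the quantity δx − γ ln x + βy − α ln y is conserved, so the orbit traces a closed periodic curve around the fixed point (γ/δ, α/β), the kind of cyclic quasi-stationary behaviour the abstract describes.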

  1. Statistics of the derivatives of complex signal derived from Riesz transform and its application to pseudo-Stokes vector correlation for speckle displacement measurement

    DEFF Research Database (Denmark)

    Zhang, Shun; Yang, Yi; Hanson, Steen Grüner

    2015-01-01

    To demonstrate the superiority of the proposed PSVC technique, we study the statistical properties of the spatial derivatives of the complex signal representation generated from the Riesz transform. Under the assumption of a Gaussian random process, a theoretical analysis of the pseudo-Stokes vector correlation is provided. Based on these results, we show mathematically that PSVC has a performance advantage over the conventional intensity-based correlation technique.

  2. Quantum randomness and unpredictability

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Gregg [Quantum Communication and Measurement Laboratory, Department of Electrical and Computer Engineering and Division of Natural Science and Mathematics, Boston University, Boston, MA (United States)

    2017-06-15

    Quantum mechanics is a physical theory supplying probabilities corresponding to expectation values for measurement outcomes. Indeed, its formalism can be constructed with measurement as a fundamental process, as was done by Schwinger, provided that individual measurement outcomes occur in a random way. The randomness appearing in quantum mechanics, as with other forms of randomness, has often been considered equivalent to a form of indeterminism. Here, it is argued that quantum randomness should instead be understood as a form of unpredictability because, amongst other things, indeterminism is not a necessary condition for randomness. For concreteness, an explication of the randomness of quantum mechanics as the unpredictability of quantum measurement outcomes is provided. Finally, it is shown how this view can be combined with the recently introduced view that the very appearance of individual quantum measurement outcomes can be grounded in the Plenitude principle of Leibniz, a principle variants of which have been utilized in physics by Dirac and Gell-Mann in relation to fundamental processes. This move provides further support to Schwinger's "symbolic" derivation of quantum mechanics from measurement. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Researches on Key Algorithms in Analogue Seismogram Records Vectorization

    Directory of Open Access Journals (Sweden)

    Maofa WANG

    2014-09-01

    Historical paper seismograms carry very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is an important problem to be resolved. In our study, a new tracing algorithm for simulated seismogram curves, based on visual field features, is presented. We also describe the technological process for vectorizing simulated seismograms, and an analog seismic record vectorization system has been implemented independently. Using it, we can vectorize analog seismic records precisely and quickly (with professionals participating interactively).

  4. Choice by Value Encoding and Value Construction: Processes of Loss Aversion

    Science.gov (United States)

    Willemsen, Martijn C.; Bockenholt, Ulf; Johnson, Eric J.

    2011-01-01

    Loss aversion and reference dependence are 2 keystones of behavioral theories of choice, but little is known about their underlying cognitive processes. We suggest an additional account for loss aversion that supplements the current account of the value encoding of attributes as gains or losses relative to a reference point, introducing a value…

  5. What value, detection limits

    International Nuclear Information System (INIS)

    Currie, L.A.

    1986-01-01

    Specific approaches and applications of LLDs to nuclear and "nuclear-related" measurements are presented in connection with work undertaken for the U.S. Nuclear Regulatory Commission and the International Atomic Energy Agency. In this work, special attention was given to assumptions and potential error sources, as well as to different types of analysis. For the former, the authors considered random and systematic error associated with the blank and the calibration and sample preparation processes, as well as issues relating to the nature of the random error distributions. Analysis types considered included continuous monitoring, "simple counting" involving scalar quantities, and spectrum fitting involving data vectors. The investigation of data matrices and multivariate analysis is also described. The most important conclusions derived from this study are: that there is a significant lack of communication and compatibility resulting from diverse terminology and conceptual bases, including no-basis "ad hoc" definitions; that the distinction between detection decisions and detection limits is frequently lost sight of; and that quite erroneous LOD estimates follow from inadequate consideration of the actual variability of the blank, and of systematic error associated with the blank, the calibration-recovery factor, matrix effects, and "black box" data reduction models.

  6. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
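The permutation importance (PI) idea can be sketched independently of the full STLF pipeline: permute one feature column at a time and measure how much the model's error grows. The sketch below uses NumPy and a least-squares linear model in place of the paper's random forest, purely to keep the example dependency-light; the feature counts and data are synthetic:

```python
import numpy as np

def permutation_importance(w, X, y, rng):
    """Permutation importance for a fitted linear model with weights w:
    the increase in MSE when each feature column of X is shuffled."""
    base = float(np.mean((X @ w - y) ** 2))
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = X[rng.permutation(X.shape[0]), j]  # break feature-target link
        importances.append(float(np.mean((Xp @ w - y) ** 2)) - base)
    return np.array(importances)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Feature 0 is strongly predictive, feature 1 weakly, feature 2 not at all.
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pi = permutation_importance(w, X, y, rng)
```

A sequential backward search like the paper's would then repeatedly drop the feature with the lowest PI, retrain, and keep the subset with the best validation error.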

  7. Randomized and quantum algorithms for solving initial-value problems in ordinary differential equations of order k

    Directory of Open Access Journals (Sweden)

    Maciej Goćwin

    2008-01-01

    The complexity of initial-value problems is well studied for systems of equations of first order. In this paper, we study the ε-complexity of initial-value problems for scalar equations of higher order. We consider two models of computation, the randomized model and the quantum model. We construct almost optimal algorithms adjusted to scalar equations of higher order, without passing to systems of first-order equations. The analysis of these algorithms allows us to establish upper complexity bounds. We also show (almost) matching lower complexity bounds. The ε-complexity in the randomized and quantum settings depends on the regularity of the right-hand side function, but is independent of the order of the equation. Comparing the obtained bounds with results known in the deterministic case, we see that randomized algorithms give us a speed-up by 1/2, and quantum algorithms by 1, in the exponent. Hence, the speed-up does not depend on the order of the equation, and is the same as for systems of equations of first order. We also include results of some numerical experiments which confirm the theoretical results.

  8. Random migration processes between two stochastic epidemic centers.

    Science.gov (United States)

    Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B

    2016-04-01

    We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. Copyright © 2016 Elsevier Inc. All rights reserved.
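A discrete-time sketch of the migration component, assuming each individual independently moves between two centers A and B with fixed per-step probabilities (the rates and population sizes below are illustrative, not the paper's Markov-chain parameters):

```python
import random

def simulate_migration(n_a, n_b, rate_ab, rate_ba, steps, rng):
    """Simulate random migration between two centers A and B: each
    individual moves with the given per-step probability; the total
    population is conserved."""
    history = [(n_a, n_b)]
    for _ in range(steps):
        moved_ab = sum(1 for _ in range(n_a) if rng.random() < rate_ab)
        moved_ba = sum(1 for _ in range(n_b) if rng.random() < rate_ba)
        n_a += moved_ba - moved_ab
        n_b += moved_ab - moved_ba
        history.append((n_a, n_b))
    return history

rng = random.Random(42)
# Start with everyone in A; migration A->B is twice as likely as B->A.
hist = simulate_migration(1000, 0, 0.02, 0.01, 500, rng)
```

In the long run the occupation of A fluctuates around n·rate_ba/(rate_ab + rate_ba), here about a third of the total population; tracking resident and visitor individuals separately, as the paper stresses, would add a second pair of counters per center.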

  9. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    Directory of Open Access Journals (Sweden)

    Feng-Que Pei

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictability. This paper proposes a method employing a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process while handling a large number of input features with a small sampling data set; the method is called SVMs-DS. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, the basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by Dempster's rules; the decision threshold to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
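The evidence-combination step rests on Dempster's rule. A minimal sketch with focal elements represented as frozensets (the two-class frame and the BPA numbers below are invented for illustration; the paper builds its BPAs from three SVM outputs):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    (BPAs) given as {frozenset focal element: mass} dictionaries."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

good, bad = frozenset({"good"}), frozenset({"bad"})
theta = good | bad                    # frame of discernment (full ignorance)
m1 = {good: 0.6, theta: 0.4}          # evidence from one classifier
m2 = {good: 0.7, bad: 0.1, theta: 0.2}
m = dempster_combine(m1, m2)
```

Combining the two BPAs above shifts most of the mass onto {'good'} (about 0.87 after normalization), showing how agreeing classifiers reinforce each other while conflicting mass is renormalized away.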

  10. Evaluation of larvicidal activity of Acalypha alnifolia Klein ex Willd. (Euphorbiaceae) leaf extract against the malarial vector, Anopheles stephensi, dengue vector, Aedes aegypti and Bancroftian filariasis vector, Culex quinquefasciatus (Diptera: Culicidae).

    Science.gov (United States)

    Kovendan, Kalimuthu; Murugan, Kadarkarai; Vincent, Savariar

    2012-02-01

    The leaf extract of Acalypha alnifolia with different solvents (hexane, chloroform, ethyl acetate, acetone and methanol) was tested for larvicidal activity against three important mosquitoes: the malarial vector, Anopheles stephensi, the dengue vector, Aedes aegypti, and the Bancroftian filariasis vector, Culex quinquefasciatus. The medicinal plants were collected from the area around Kallar Hills near the Western Ghats, Coimbatore, India. The A. alnifolia plants were washed with tap water and shade dried at room temperature. The dried leaves were powdered mechanically using a commercial electric stainless-steel blender. Then 800 g of the leaf powder was extracted with 2.5 litres of each organic solvent (hexane, chloroform, ethyl acetate, acetone, methanol) for 8 h using a Soxhlet apparatus, and filtered. The crude plant extracts were evaporated to dryness in a rotary vacuum evaporator. The yields of the extracts were hexane (8.64 g), chloroform (10.74 g), ethyl acetate (9.14 g), acetone (10.02 g), and methanol (11.43 g). One gram of each plant residue was dissolved separately in 100 ml of acetone (stock solution), from which different concentrations, i.e., 50, 150, 250, 350 and 450 ppm, were prepared. The hexane, chloroform, ethyl acetate and acetone extracts produced moderate mortality; however, the highest larval mortality was observed with the methanolic extract in all three mosquito vectors. Larval mortality was observed after 24 h exposure. No mortality was observed in the control. The early fourth-instar larvae of A. stephensi had values of LC(50) = 197.37, 178.75, 164.34, 149.90 and 125.73 ppm and LC(90) = 477.60, 459.21, 435.07, 416.20 and 395.50 ppm, respectively. A. aegypti had values of LC(50) = 202.15, 182.58, 160.35, 146.07 and 128.55 ppm and LC(90) = 476.57, 460.83, 440.78, 415.38 and 381.67 ppm, respectively. C. quinquefasciatus had values of LC(50) = 198.79, 172.48, 151.06, 140.69 and 127.98 ppm and LC(90) = 458.73, 430

  11. Large-scale adenovirus and poxvirus-vectored vaccine manufacturing to enable clinical trials.

    Science.gov (United States)

    Kallel, Héla; Kamen, Amine A

    2015-05-01

    Efforts to make vaccines against infectious diseases and immunotherapies for cancer have evolved to utilize a variety of heterologous expression systems such as viral vectors. These vectors are often attenuated or engineered to safely deliver genes encoding antigens of different pathogens. Adenovirus and poxvirus vectors are among the viral vectors that are most frequently used to develop prophylactic vaccines against infectious diseases as well as therapeutic cancer vaccines. This mini-review describes the trends and processes in large-scale production of adenovirus and poxvirus vectors to meet the needs of clinical applications. We briefly describe the general principles for the production and purification of adenovirus and poxvirus viral vectors. Currently, adenovirus and poxvirus vector manufacturing methods rely on well-established cell culture technologies. Several improvements have been evaluated to increase the yield and to reduce the overall manufacturing cost, such as cultivation at high cell densities and continuous downstream processing. Additionally, advancements in vector characterization will greatly facilitate the development of novel vectored vaccine candidates. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Entropy-Based Video Steganalysis of Motion Vectors

    Directory of Open Access Journals (Sweden)

    Elaheh Sadat Sadat

    2018-04-01

    In this paper, a new method is proposed for motion vector steganalysis using the entropy value and its combination with features of the optimized motion vector. In this method, the entropy of blocks is calculated to determine their texture and the precision of their motion vectors. Then, using fuzzy clustering, the blocks are clustered into those with high and low texture, where the membership function of each block in the high-texture class indicates the texture of that block. These membership functions are used to weight the effective features that are extracted by reconstructing the motion estimation equations. The results indicate that using the entropy and the irregularity of each block increases the precision of the final classification of videos into cover and stego classes.
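The texture measure at the heart of the method is per-block entropy. A small sketch, assuming 8×8 grayscale blocks (the fuzzy clustering and motion vector features of the full pipeline are omitted):

```python
import math

def block_entropy(block):
    """Shannon entropy (in bits) of the pixel values in a block, used as a
    texture measure: a flat block has entropy 0, a textured block a high one."""
    counts = {}
    for row in block:
        for v in row:
            counts[v] = counts.get(v, 0) + 1
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

flat = [[128] * 8 for _ in range(8)]                              # uniform block
textured = [[(3 * i + 5 * j) % 256 for j in range(8)] for i in range(8)]
```

Blocks whose entropy is near zero are flat, so their motion vectors are less precise; high-entropy blocks are textured and yield more reliable motion estimates, which is what the fuzzy membership weighting exploits.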

  13. Per-field crop classification in irrigated agricultural regions in middle Asia using random forest and support vector machine ensemble

    Science.gov (United States)

    Löw, Fabian; Schorcht, Gunther; Michel, Ulrich; Dech, Stefan; Conrad, Christopher

    2012-10-01

    Accurate crop identification and crop area estimation are important for studies on irrigated agricultural systems, yield and water demand modeling, and agrarian policy development. In this study a novel combination of Random Forest (RF) and Support Vector Machine (SVM) classifiers is presented that (i) enhances crop classification accuracy and (ii) provides spatial information on map uncertainty. The methodology was implemented over four distinct irrigated sites in Middle Asia using RapidEye time series data. The RF feature importance statistics were used as a feature-selection strategy for the SVM to assess possible negative effects on classification accuracy caused by an oversized feature space. The results of the individual RF and SVM classifications were combined with rules based on posterior classification probability and estimates of classification probability entropy. SVM classification performance was increased by feature selection through RF. Further experimental results indicate that the hybrid classifier improves overall classification accuracy, as well as user's and producer's accuracy, in comparison to the single classifiers.

  14. Unsupervised learning of binary vectors: A Gaussian scenario

    International Nuclear Information System (INIS)

    Copelli, Mauro; Van den Broeck, Christian

    2000-01-01

    We study a model of unsupervised learning where the real-valued data vectors are isotropically distributed, except for a single symmetry-breaking binary direction B ∈ {-1,+1}^N, onto which the projections have a Gaussian distribution. We show that a candidate vector J undergoing Gibbs learning in this discrete space approaches the perfect match J = B exponentially. In addition to the second-order "retarded learning" phase transition for unbiased distributions, we show that first-order transitions can also occur. Extending the known result that the center of mass of the Gibbs ensemble has Bayes-optimal performance, we show that taking the sign of the components of this vector (clipping) leads to the vector with optimal performance in the binary space. These upper bounds are shown generally not to be saturated with the technique of transforming the components of a special continuous vector, except in asymptotic limits and in a special linear case. Simulations are presented which are in excellent agreement with the theoretical results. (c) 2000 The American Physical Society

  15. High values of disorder-generated multifractals and logarithmically correlated processes

    International Nuclear Information System (INIS)

    Fyodorov, Yan V.; Giraud, Olivier

    2015-01-01

    In the introductory section of the article we give a brief account of recent insights into statistics of high and extreme values of disorder-generated multifractals following a recent work by the first author with P. Le Doussal and A. Rosso (FLR) employing a close relation between multifractality and logarithmically correlated random fields. We then substantiate some aspects of the FLR approach analytically for multifractal eigenvectors in the Ruijsenaars–Schneider ensemble (RSE) of random matrices introduced by E. Bogomolny and the second author by providing an ab initio calculation that reveals hidden logarithmic correlations at the background of the disorder-generated multifractality. In the rest we investigate numerically a few representative models of that class, including the study of the highest component of multifractal eigenvectors in the Ruijsenaars–Schneider ensemble

  16. Comparison of random forests and support vector machine for real-time radar-derived rainfall forecasting

    Science.gov (United States)

    Yu, Pao-Shan; Yang, Tao-Chang; Chen, Szu-Yin; Kuo, Chen-Min; Tseng, Hung-Wei

    2017-09-01

    This study aims to compare two machine learning techniques, random forests (RF) and support vector machine (SVM), for real-time radar-derived rainfall forecasting. The real-time radar-derived rainfall forecasting models use the present grid-based radar-derived rainfall as the output variable and use antecedent grid-based radar-derived rainfall, grid position (longitude and latitude) and elevation as the input variables to forecast 1- to 3-h ahead rainfalls for all grids in a catchment. Grid-based radar-derived rainfalls of six typhoon events during 2012-2015 in three reservoir catchments of Taiwan are collected for model training and verifying. Two kinds of forecasting models are constructed and compared, which are single-mode forecasting model (SMFM) and multiple-mode forecasting model (MMFM) based on RF and SVM. The SMFM uses the same model for 1- to 3-h ahead rainfall forecasting; the MMFM uses three different models for 1- to 3-h ahead forecasting. According to forecasting performances, it reveals that the SMFMs give better performances than MMFMs and both SVM-based and RF-based SMFMs show satisfactory performances for 1-h ahead forecasting. However, for 2- and 3-h ahead forecasting, it is found that the RF-based SMFM underestimates the observed radar-derived rainfalls in most cases and the SVM-based SMFM can give better performances than RF-based SMFM.
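The single-mode vs multiple-mode distinction can be sketched as follows, under the assumption (mine, not stated in the abstract) that the SMFM encodes the lead time as an extra input feature while the MMFM trains one model per lead time; data, model choices and feature names are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((300, 5))   # antecedent rainfall, lon, lat, elevation, ... (toy)
# Toy "observed" rainfall for 1-, 2-, 3-h lead times.
y = {h: X[:, 0] * h + rng.normal(0, 0.1, 300) for h in (1, 2, 3)}

# MMFM: a separate model per lead time (RF-based here).
mmfm = {h: RandomForestRegressor(random_state=0).fit(X, y[h]) for h in (1, 2, 3)}

# SMFM: one model for all lead times, with the lead time appended
# as an input feature (SVM-based here).
X_s = np.vstack([np.column_stack([X, np.full(300, h)]) for h in (1, 2, 3)])
y_s = np.concatenate([y[h] for h in (1, 2, 3)])
smfm = SVR().fit(X_s, y_s)
```

Comparing the two setups per lead time, as the study does, then reduces to scoring each model's predictions against held-out radar-derived rainfall.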

  17. Production of polarized vector mesons off nuclei

    International Nuclear Information System (INIS)

    Kopeliovich, B. Z.; Nemchik, J.; Schmidt, Ivan

    2007-01-01

    Using the light-cone QCD dipole formalism we investigate manifestations of color transparency (CT) and coherence length (CL) effects in electroproduction of longitudinally (L) and transversally (T) polarized vector mesons. Motivated by forthcoming data from the HERMES experiment we predict both the A and Q 2 dependence of the L/T ratios for ρ 0 mesons produced coherently and incoherently off nuclei. For an incoherent reaction the CT and CL effects add up and result in a monotonic A dependence of the L/T ratio at different values of Q 2 . In contrast, for a coherent process the contraction of the CL with Q 2 causes an effect opposite to that of CT and we expect quite a nontrivial A dependence

  18. Timing of the Crab pulsar III. The slowing down and the nature of the random process

    International Nuclear Information System (INIS)

    Groth, E.J.

    1975-01-01

    The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨ε²⟩ = 0.53 (+0.24, -0.12) × 10^-22 Hz² s^-1, where R is the mean rate of steps and ⟨ε²⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated
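A minimal numerical illustration (not from the paper; rate and step variance are arbitrary) of timing noise with the same second moments as a random walk in frequency: frequency steps of random amplitude arrive at a Poisson rate R, and the phase residual is the integral of the frequency residual.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 100_000          # time step (s), number of samples
R, eps2 = 1e-3, 1.0           # step rate (1/s), second moment of step amplitude

# Approximation: at most a few steps per dt, each with Gaussian amplitude.
steps = rng.poisson(R * dt, n) * rng.normal(0.0, np.sqrt(eps2), n)
freq = np.cumsum(steps)       # random walk in frequency
phase = np.cumsum(freq) * dt  # phase residual = integral of frequency residual
```

For such a process the ensemble variance of the phase residual grows as t³, which is the signature used to distinguish frequency noise from a random walk in phase (variance growing as t).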

  19. Continuous state branching processes in random environment: The Brownian case

    OpenAIRE

    Palau, Sandra; Pardo, Juan Carlos

    2015-01-01

    We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...

  20. Line Width Recovery after Vectorization of Engineering Drawings

    Directory of Open Access Journals (Sweden)

    Gramblička Matúš

    2016-12-01

    Full Text Available Vectorization is the conversion of a raster image representation into a vector representation. Contemporary commercial vectorization software applications do not provide sufficiently high-quality outputs for images such as mechanical engineering drawings. Line width preservation is one of the problems. There are applications which need to know the line width after vectorization, because this line attribute carries important semantic information for subsequent 3D model generation. This article describes an algorithm that is able to recover the line width of individual lines in vectorized engineering drawings. Two approaches are proposed: one examines the line width at three points, whereas the second uses a variable number of points depending on the line length. The algorithm is tested on real mechanical engineering drawings.

  1. On Discrete Killing Vector Fields and Patterns on Surfaces

    KAUST Repository

    Ben-Chen, Mirela

    2010-09-21

    Symmetry is one of the most important properties of a shape, unifying form and function. It encodes semantic information on one hand, and affects the shape's aesthetic value on the other. Symmetry comes in many flavors, amongst the most interesting being intrinsic symmetry, which is defined only in terms of the intrinsic geometry of the shape. Continuous intrinsic symmetries can be represented using infinitesimal rigid transformations, which are given as tangent vector fields on the surface - known as Killing Vector Fields. As exact symmetries are quite rare, especially when considering noisy sampled surfaces, we propose a method for relaxing the exact symmetry constraint to allow for approximate symmetries and approximate Killing Vector Fields, and show how to discretize these concepts for generating such vector fields on a triangulated mesh. We discuss the properties of approximate Killing Vector Fields, and propose an application to utilize them for texture and geometry synthesis. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.

  2. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ1). We consider random coefficient models where some of the covariates do not vary within any single individual (we denote the between covariates by the vector χ0). The regression coefficients βk can only be estimated in the subspace Xk of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
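The projection step described above can be written compactly; a sketch under assumed notation (X0 collects the between covariates, X1 the within covariates; neither symbol choice is from the abstract):

```latex
% Replace the within covariates X_1 by their component orthogonal to the
% between covariates X_0 before assigning the random part of \beta.
% The bracketed matrix is the orthogonal projector onto span(X_0)^\perp.
\tilde{X}_1 = \bigl( I - X_0 (X_0^{\top} X_0)^{-1} X_0^{\top} \bigr)\, X_1
```

Working with the projected covariates makes the proposed covariance structure invariant under the linear coordinate transformations that break the simpler reduced model.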

  3. Technology and developments for the Random Positioning Machine, RPM

    NARCIS (Netherlands)

    Borst, A.G.; van Loon, J.J.W.A.

    2009-01-01

    A Random Positioning Machine (RPM) is a laboratory instrument to provide continuous random change in orientation relative to the gravity vector of an accommodated (biological) experiment. The use of the RPM can generate effects comparable to the effects of true microgravity when the changes in

  4. Magnetic field vector and electron density diagnostics from linear polarization measurements in 14 solar prominences

    Science.gov (United States)

    Bommier, V.

    1986-01-01

    The Hanle effect is the modification of the linear polarization parameters of a spectral line due to the effect of the magnetic field. It has been successfully applied to the magnetic field vector diagnostic in solar prominences. The magnetic field vector is determined by comparing the measured polarization to the polarization computed, taking into account all the polarizing and depolarizing processes in line formation and the depolarizing effect of the magnetic field. The method was applied to simultaneous polarization measurements in the Helium D3 line and in the hydrogen beta line in 14 prominences. Four polarization parameters are measured, which lead to the determination of the three coordinates of the magnetic field vector and the electron density, owing to the sensitivity of the hydrogen beta line to the non-negligible effect of depolarizing collisions with electrons and protons of the medium. A mean electron density of 1.3 × 10^10 cm^-3 is derived for the 14 prominences.

  5. Genetic manipulation of endosymbionts to control vector and vector borne diseases

    Directory of Open Access Journals (Sweden)

    Jay Prakash Gupta

    Full Text Available Vector borne diseases (VBD) are on the rise because of the failure of existing methods of control of vectors and vector borne diseases, and because of climate change. The steep rise of VBDs is due to several factors, such as selection of insecticide-resistant vector populations, drug-resistant parasite populations and the lack of effective vaccines against the VBDs. Environmental pollution, public health hazards and insecticide-resistant vector populations indicate that insecticides are no longer a sustainable control method for vectors and vector-borne diseases. Amongst the various alternative control strategies, a symbiont-based approach utilizing endosymbionts of arthropod vectors could be explored to control vectors and vector borne diseases. The endosymbiont population of arthropod vectors could be exploited in different ways, viz., as a chemotherapeutic target or a vaccine target for the control of vectors. Expression of molecules with antiparasitic activity by genetically transformed symbiotic bacteria of disease-transmitting arthropods may serve as a powerful approach to control certain arthropod-borne diseases. Genetic transformation of symbiotic bacteria of the arthropod vector to alter the vector's ability to transmit pathogens is an alternative means of blocking the transmission of VBDs. In the Indian scenario, where dengue, chikungunya, malaria and filariosis are prevalent, a paratransgenic approach could be used effectively. [Vet World 2012; 5(9): 571-576]

  6. The Need to Assess Public Values in a Site Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Grant; Fortier, Michael [York Univ., Toronto, ON (Canada). Faculty of Environmental Studies; Collins, Alison [York Centre for Applied Sustainability, Toronto, ON (Canada)

    2001-07-01

    Siting a nuclear fuel waste disposal facility is highly problematic for both technical and nontechnical reasons. The majority of countries using nuclear energy and many in the scientific community favour burying the spent fuel deep in a stable geological formation. It is our contention that site selection of a disposal facility must consider social, political, spatial and scientific perspectives in a comprehensive and integrated fashion in order to achieve a successful process. Moreover, we submit that people's values must be explicitly recognized and be taken into account through a formalized process during deliberations on the disposal concept, the process of evaluation of the concept, and the site selection process. The purpose of this paper is: (1) to identify the importance of recognizing people's values in the process of determining 'public acceptability', (2) to outline a possible framework by which public acceptability can be gauged through a formalized process of value elicitation, and (3) to introduce a novel method by which a web-based geographic information systems (GIS) application can be used as a tool for value elicitation. In order to assess effectively the public acceptability of Canada's nuclear waste disposal concept, we submit that such a process must examine the underlying values that are held by the public. Moreover, the evaluation process of Canada's concept demonstrates that an acceptable process is a pre-condition for an acceptable result, although such a process does not necessarily guarantee an acceptable result. However, the consequences of a flawed process can be very significant, as shown by Canada's experience. This paper also provides a brief overview of a value elicitation process that, in our opinion, could be used to assess the public acceptability of the Concept. We also describe how a web-based GIS application could be used to infer some of the underlying values held by the public.

  7. Modified montmorillonite as vector for gene delivery.

    Science.gov (United States)

    Lin, Feng-Huei; Chen, Chia-Hao; Cheng, Winston T K; Kuo, Tzang-Fu

    2006-06-01

    Currently, gene delivery systems can be divided into two classes: viral and non-viral vectors. In general, viral vectors have a higher efficiency of gene delivery. However, they may sometimes provoke mutagenesis and carcinogenesis once reactivated in the human body. Many non-viral vectors have been developed in attempts to solve the problems encountered with viral vectors. Unfortunately, most non-viral vectors show relatively low transfection rates. The aim of this study is to develop a non-viral vector for a gene delivery system. Montmorillonite (MMT) is a clay mineral that consists of hydrated aluminum with Si-O tetrahedrons on the bottom of the layer and Al-O(OH)2 octahedrons on the top. The inter-layer space is about 12 Å, which is not enough room to accommodate DNA for gene delivery. In this study, the cationic hexadecyltrimethylammonium (HDTMA) was intercalated into the interlayer of MMT as a layer expander to enlarge the layer space for DNA accommodation. The optimal condition for the preparation of DNA-HDTMA-MMT is as follows: 1 mg of 1.5CEC HDTMA-MMT prepared at a pH value of 10.7 with a soaking time of 2 h. The DNA molecules can be protected from nuclease degradation, as proven by electrophoresis analysis. DNA was successfully transfected into the nucleus of human dermal fibroblasts and expressed the enhanced green fluorescent protein (EGFP) gene with green fluorescence emission. HDTMA-MMT has great potential as a vector for gene delivery in the future.

  8. Perceptions Towards Non-Value-Adding Activities During The Construction Process

    Directory of Open Access Journals (Sweden)

    Ismail Haryati

    2016-01-01

    Full Text Available Non-value-adding activities are pure waste during the construction process. However, most construction practitioners do not realise that many of the activities performed during the construction process add no value to their project. A total of 375 questionnaires were distributed to the Developer, Jabatan Kerja Raya, Consultants and Contractors. The study found that awareness by construction participants in Malaysia of the need to take action against non-value-adding activities during the construction process is relatively low. Analysis using a Pareto chart found that defects and waiting time are the two categories of non-value-adding activities that need to be prioritised by the industry. It was also found that non-value-adding activities occur most frequently during structural and architectural work. This paper also reviews the causes of non-value-adding activities and discusses their effects on the time, cost, quality and productivity of construction projects. This paper is also important in giving clarity and a broader understanding of this form of waste, other than material waste.

  9. Emerging vector borne diseases – incidence through vectors

    Directory of Open Access Journals (Sweden)

    Sara eSavic

    2014-12-01

    Full Text Available Vector borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays, these countries struggle with emerging diseases that have found their way in through vectors. Vector borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens and a susceptible human population exist at the same time, in the same place. Global climate change is predicted to lead to an increase in vector borne infectious diseases and disease outbreaks; it could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for the diseases most likely to emerge is required. Canine vector borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, ehrlichiosis and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some have zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role first in prevention and then in treatment of vector borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a four-year period, from 2009-2013, a total of 551 dog samples were analysed for vector borne diseases (borreliosis, babesiosis, ehrlichiosis, anaplasmosis, dirofilariosis and leishmaniasis) in routine laboratory work. The analyses were done by serological tests (ELISA for borreliosis, dirofilariosis and leishmaniasis), the modified Knott test for dirofilariosis, and blood smears for babesiosis, ehrlichiosis and anaplasmosis. This number of samples represented 75% of the total number of samples sent for analysis of different diseases in dogs.
    Annually, on average, more than half of the samples

  10. MULTITEMPORAL CROP TYPE CLASSIFICATION USING CONDITIONAL RANDOM FIELDS AND RAPIDEYE DATA

    Directory of Open Access Journals (Sweden)

    T. Hoberg

    2012-09-01

    Full Text Available The task of crop type classification with multitemporal imagery is nowadays often done by applying classifiers originally developed for single images, such as support vector machines (SVM). These approaches do not model temporal dependencies in an explicit way. Existing approaches that make use of temporal dependencies are in most cases quite simple and rule-based. Approaches that integrate temporal dependencies into statistical models are very rare and at an early stage of development. Here our approach, CRFmulti, based on conditional random fields (CRF), makes a contribution. Conditional random fields consider context knowledge among neighboring primitives in the same way as Markov random fields (MRF) do. Furthermore, conditional random fields handle the feature vectors of the neighboring primitives and not only their class labels. In addition to taking spatial context into account, we present an approach for multitemporal data processing in which a temporal association potential is integrated into the common CRF approach to model temporal dependencies. The classification works at pixel level using spectral image features, with all available single images taken separately. For our experiments, a high-resolution RapidEye satellite data set from 2010 is used, consisting of 4 images acquired over the whole vegetation period from April to October. Six crop type categories are distinguished, namely grassland, corn, winter crop, rapeseed, root crops and other crops. To evaluate the potential of the new conditional random field approach, the classification result is compared to a manual reference at pixel and object level. Additionally, an SVM approach is applied under the same conditions to serve as a benchmark.
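A toy sketch (my own, not the authors' CRFmulti code) of how a temporal association potential can couple per-epoch class scores for one pixel: unary scores from each image are combined with a transition matrix that rewards temporally plausible crop-class sequences, decoded here with a Viterbi-style pass.

```python
import numpy as np

classes = ["grassland", "corn", "winter crop", "rapeseed", "root crops", "other"]
K, T = len(classes), 4                # 6 classes, 4 image epochs

rng = np.random.default_rng(0)
unary = rng.random((T, K))            # per-epoch class scores for one pixel (toy)
# Temporal association: favour keeping the same crop label across epochs.
transition = np.full((K, K), 0.1) + 0.9 * np.eye(K)

# Viterbi decoding of the best label sequence over the time series.
score = np.log(unary[0])
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    cand = score[:, None] + np.log(transition) + np.log(unary[t])[None, :]
    back[t] = cand.argmax(axis=0)     # best previous label for each current label
    score = cand.max(axis=0)
labels = [int(score.argmax())]
for t in range(T - 1, 0, -1):         # backtrack from the last epoch
    labels.append(int(back[t][labels[-1]]))
labels.reverse()
```

A full CRF additionally couples neighboring pixels through spatial potentials and learns the potentials from training data; this fragment only shows the temporal coupling idea.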

  11. Random eigenvalue problems revisited

    Indian Academy of Sciences (India)

    statistical distributions; linear stochastic systems. 1. ... dimensional multivariate Gaussian random vector with mean μ ∈ R^m and covariance ... 5, the proposed analytical methods are applied to a three degree-of-freedom system and the ... The joint pdf of ω1 and ω3 is however close to a bivariate Gaussian density function.

  12. Theta vectors and quantum theta functions

    International Nuclear Information System (INIS)

    Chang-Young, Ee; Kim, Hoil

    2005-01-01

    In this paper, we clarify the relation between Manin's quantum theta function and Schwarz's theta vector. We do this in comparison with the relation between the kq representation, which is equivalent to the classical theta function, and the corresponding coordinate space wavefunction. We first explain the equivalence relation between the classical theta function and the kq representation in which the translation operators of the phase space are commuting. When the translation operators of the phase space are not commuting, then the kq representation is no longer meaningful. We explain why Manin's quantum theta function, obtained via algebra (quantum torus) valued inner product of the theta vector, is a natural choice for the quantum version of the classical theta function. We then show that this approach holds for a more general theta vector containing an extra linear term in the exponent obtained from a holomorphic connection of constant curvature than the simple Gaussian one used in Manin's construction

  13. The ecological foundations of transmission potential and vector-borne disease in urban landscapes.

    Science.gov (United States)

    LaDeau, Shannon L; Allan, Brian F; Leisnham, Paul T; Levy, Michael Z

    2015-07-01

    Urban transmission of arthropod-vectored disease has increased in recent decades. Understanding and managing transmission potential in urban landscapes requires integration of sociological and ecological processes that regulate vector population dynamics, feeding behavior, and vector-pathogen interactions in these unique ecosystems. Vectorial capacity is a key metric for generating predictive understanding about transmission potential in systems with obligate vector transmission. This review evaluates how urban conditions, specifically habitat suitability and local temperature regimes, and the heterogeneity of urban landscapes can influence the biologically relevant parameters that define vectorial capacity: vector density, survivorship, biting rate, extrinsic incubation period, and vector competence. Urban landscapes represent unique mosaics of habitat. Incidence of vector-borne disease in urban host populations is rarely, if ever, evenly distributed across an urban area. The persistence and quality of vector habitat can vary significantly across socio-economic boundaries to influence vector species composition and abundance, often generating socio-economically distinct gradients of transmission potential across neighborhoods. Urban regions often experience unique temperature regimes, broadly termed urban heat islands (UHI). Arthropod vectors are ectothermic organisms and their growth, survival, and behavior are highly sensitive to environmental temperatures. Vector response to UHI conditions is dependent on regional temperature profiles relative to the vector's thermal performance range. In temperate climates UHI can facilitate increased vector development rates while having a countervailing influence on survival and feeding behavior. Understanding how UHI conditions alter thermal and moisture constraints across the vector life cycle to influence transmission processes is an important direction for both empirical and modeling research. There remain

  14. MHD thrust vectoring of a rocket engine

    Science.gov (United States)

    Labaune, Julien; Packan, Denis; Tholin, Fabien; Chemartin, Laurent; Stillace, Thierry; Masson, Frederic

    2016-09-01

    In this work, the possibility of using MagnetoHydroDynamics (MHD) to vectorize the thrust of a solid propellant rocket engine exhaust is investigated. Using a magnetic field for vectoring offers a mass gain and a reusability advantage compared to standard gimbaled, elastomer-joint systems. Analytical and numerical models were used to evaluate the flow deviation with a 1 Tesla magnetic field inside the nozzle. The fluid flow in the resistive MHD approximation is calculated using the KRONOS code from ONERA, coupling the hypersonic CFD platform CEDRE and the electrical code SATURNE from EDF. A critical parameter of these simulations is the electrical conductivity, which was evaluated using a set of equilibrium calculations with 25 species. Two models were used: local thermodynamic equilibrium and frozen flow. In both cases, chlorine captures a large fraction of free electrons, limiting the electrical conductivity to a value inadequate for thrust vectoring applications. However, when using chlorine-free propellants with 1% in mass of alkali, an MHD thrust vectoring of several degrees was obtained.

  15. Support Vector Machine Classification of Drunk Driving Behaviour.

    Science.gov (United States)

    Chen, Huiqin; Chen, Lei

    2017-01-23

    Alcohol is the root cause of numerous traffic accidents due to its pharmacological action on the human central nervous system. This study conducted a detection process to distinguish drunk driving from normal driving under simulated driving conditions. The classification was performed by a support vector machine (SVM) classifier trained to distinguish between these two classes by integrating both driving performance and physiological measurements. In addition, principal component analysis was conducted to rank the weights of the features. The standard deviation of R-R intervals (SDNN), the root mean square value of the difference of the adjacent R-R interval series (RMSSD), low frequency (LF), high frequency (HF), the ratio of the low and high frequencies (LF/HF), and average blink duration were the highest weighted features in the study. The results show that SVM classification can successfully distinguish drunk driving from normal driving with an accuracy of 70%. The driving performance data and the physiological measurements reported by this paper combined with air-alcohol concentration could be integrated using the support vector regression classification method to establish a better early warning model, thereby improving vehicle safety.
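A hedged sketch of the pipeline described above, using scikit-learn on synthetic data (feature names follow the abstract, but all values, the kernel choice and the two-component PCA are illustrative assumptions): PCA ranks feature weights, and an SVM separates the two driving states.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

features = ["SDNN", "RMSSD", "LF", "HF", "LF_HF", "blink_duration"]
rng = np.random.default_rng(0)
X = rng.random((200, len(features)))   # synthetic stand-in measurements
y = rng.integers(0, 2, 200)            # 0 = normal, 1 = drunk (synthetic labels)

# Rank feature weights via the loadings of the leading principal component.
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
ranking = np.argsort(np.abs(pca.components_[0]))[::-1]

# SVM classifier on standardized features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

Standardization matters here because heart-rate-variability features (SDNN, RMSSD) and spectral power features (LF, HF) live on very different scales.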

  16. Support Vector Machine Classification of Drunk Driving Behaviour

    Directory of Open Access Journals (Sweden)

    Huiqin Chen

    2017-01-01

    Full Text Available Alcohol is the root cause of numerous traffic accidents due to its pharmacological action on the human central nervous system. This study conducted a detection process to distinguish drunk driving from normal driving under simulated driving conditions. The classification was performed by a support vector machine (SVM) classifier trained to distinguish between these two classes by integrating both driving performance and physiological measurements. In addition, principal component analysis was conducted to rank the weights of the features. The standard deviation of R–R intervals (SDNN), the root mean square value of the difference of the adjacent R–R interval series (RMSSD), low frequency (LF), high frequency (HF), the ratio of the low and high frequencies (LF/HF), and average blink duration were the highest weighted features in the study. The results show that SVM classification can successfully distinguish drunk driving from normal driving with an accuracy of 70%. The driving performance data and the physiological measurements reported by this paper combined with air-alcohol concentration could be integrated using the support vector regression classification method to establish a better early warning model, thereby improving vehicle safety.

  17. Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery

    Science.gov (United States)

    Piragnolo, Marco; Masiero, Andrea; Pirotti, Francesco

    2017-04-01

    Since recent years surveying with unmanned aerial vehicles (UAV) is getting a great amount of attention due to decreasing costs, higher precision and flexibility of usage. UAVs have been applied for geomorphological investigations, forestry, precision agriculture, cultural heritage assessment and for archaeological purposes. It can be used for land use and land cover classification (LULC). In literature, there are two main types of approaches for classification of remote sensing imagery: pixel-based and object-based. On one hand, pixel-based approach mostly uses training areas to define classes and respective spectral signatures. On the other hand, object-based classification considers pixels, scale, spatial information and texture information for creating homogeneous objects. Machine learning methods have been applied successfully for classification, and their use is increasing due to the availability of faster computing capabilities. The methods learn and train the model from previous computation. Two machine learning methods which have given good results in previous investigations are Random Forest (RF) and Support Vector Machine (SVM). The goal of this work is to compare RF and SVM methods for classifying LULC using images collected with a fixed wing UAV. The processing chain regarding classification uses packages in R, an open source scripting language for data analysis, which provides all necessary algorithms. The imagery was acquired and processed in November 2015 with cameras providing information over the red, blue, green and near infrared wavelength reflectivity over a testing area in the campus of Agripolis, in Italy. Images were elaborated and ortho-rectified through Agisoft Photoscan. The ortho-rectified image is the full data set, and the test set is derived from partial sub-setting of the full data set. Different tests have been carried out, using a percentage from 2 % to 20 % of the total. 
Ten training sets and ten validation sets are obtained from
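
    The RF-vs-SVM comparison described in this record can be sketched as below, using Python with scikit-learn rather than the R packages used in the study. The 4-band pixel statistics, class count and 10% training fraction are illustrative assumptions, not values from the paper.

```python
# Sketch: compare Random Forest and SVM on synthetic 4-band (R, G, B, NIR)
# pixel data, training on a small fraction of the labeled pixels as in the
# study's 2-20% tests. All data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, n_pixels = 5, 5000
# Fake per-class spectral signatures over 4 bands.
centers = rng.uniform(0.0, 1.0, size=(n_classes, 4))
labels = rng.integers(0, n_classes, size=n_pixels)
X = centers[labels] + rng.normal(0.0, 0.08, size=(n_pixels, 4))

# Train on a small fraction (here 10%) of the data.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, train_size=0.10, random_state=0, stratify=labels)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

acc_rf = accuracy_score(y_te, rf.predict(X_te))
acc_svm = accuracy_score(y_te, svm.predict(X_te))
print(f"RF accuracy:  {acc_rf:.3f}")
print(f"SVM accuracy: {acc_svm:.3f}")
```

    In the study itself, ten training/validation splits per fraction would be generated and the accuracies averaged.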

  18. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    Science.gov (United States)

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value; and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Soft sensor development and optimization of the commercial petrochemical plant integrating support vector regression and genetic algorithm

    Directory of Open Access Journals (Sweden)

    S.K. Lahiri

    2009-09-01

    Full Text Available Soft sensors have been widely used in industrial process control to improve product quality and assure safety in production. The core of a soft sensor is the construction of a soft sensing model. This paper introduces support vector regression (SVR), a powerful machine learning method based on statistical learning theory (SLT), into soft sensor modeling and proposes a new soft sensing modeling method based on SVR. It presents an artificial-intelligence-based hybrid soft sensor modeling and optimization strategy, namely support vector regression-genetic algorithm (SVR-GA), for modeling and optimizing the mono ethylene glycol (MEG) quality variable in a commercial glycol plant. In the SVR-GA approach, a support vector regression model is constructed to correlate the process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using a genetic algorithm with a view to maximizing process performance. The SVR-GA is a new strategy for soft sensor modeling and optimization. Its major advantage is that modeling and optimization can be conducted exclusively from historic process data, without detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.). Using the SVR-GA strategy, a number of sets of optimized operating conditions were found. The optimized solutions, when verified in an actual plant, resulted in a significant improvement in quality.
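
    The SVR-GA idea can be sketched in a few lines: fit an SVR surrogate mapping operating variables to a quality variable, then search the surrogate's inputs with a simple genetic algorithm. The plant response function and all GA settings below are illustrative assumptions, not the glycol-plant model from the paper.

```python
# Minimal SVR-GA sketch: SVR surrogate + truncation-selection GA maximizing
# the surrogate's predicted quality over two operating variables in [0, 1].
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def quality(X):                       # hypothetical plant response
    return -((X[:, 0] - 0.6) ** 2) - ((X[:, 1] - 0.3) ** 2)

X_hist = rng.uniform(0, 1, size=(200, 2))          # historical operating data
y_hist = quality(X_hist) + rng.normal(0, 0.01, 200)
model = SVR(kernel="rbf", C=100.0, gamma=10.0, epsilon=0.001).fit(X_hist, y_hist)

pop = rng.uniform(0, 1, size=(40, 2))              # GA population
for _ in range(60):
    fitness = model.predict(pop)
    parents = pop[np.argsort(fitness)[::-1][:20]]  # truncation selection
    mates = parents[rng.permutation(20)]
    children = 0.5 * (parents + mates)             # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmax(model.predict(pop))]
print("optimized operating point:", best)
```

    The GA should settle near the surrogate's optimum (here, around the assumed optimum (0.6, 0.3)); a real application would validate the proposed set-point on the plant.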

  20. Vector-vector production in photon-photon interactions

    International Nuclear Information System (INIS)

    Ronan, M.T.

    1988-01-01

    Measurements of exclusive untagged ρ⁰ρ⁰, ρφ, K*K̄*, and ρω production and tagged ρ⁰ρ⁰ production in photon-photon interactions by the TPC/Two-Gamma experiment are reviewed. Comparisons to the results of other experiments and to models of vector-vector production are made. Fits to the data following a four-quark model prescription for vector meson pair production are also presented. 10 refs., 9 figs.

  1. SAM: Support Vector Machine Based Active Queue Management

    International Nuclear Information System (INIS)

    Shah, M.S.

    2014-01-01

    Recent years have seen an increasing interest in the design of AQM (Active Queue Management) controllers. The purpose of these controllers is to manage network congestion under varying loads, link delays and bandwidth. In this paper, a new AQM controller is proposed which is trained using an SVM (Support Vector Machine) with the RBF (Radial Basis Function) kernel. The proposed controller is called the support vector based AQM (SAM) controller. Its performance has been compared with three conventional AQM controllers, namely Random Early Detection, Blue and the Proportional Plus Integral controller. Preliminary simulation studies show that the performance of the proposed controller is comparable to the conventional controllers; however, it is more efficient in controlling the queue size. (author)
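
    As a toy illustration of the idea (not the SAM controller itself), an RBF-kernel SVM can be trained to reproduce a drop/keep decision from queue-state features; the congestion rule and features below are assumptions made purely for the sketch.

```python
# Hypothetical sketch: learn a drop/keep AQM decision with an RBF-kernel SVM.
# The labeling rule (queue > 60 packets AND rate > 5 Mbps) is an assumption.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
queue = rng.uniform(0, 100, 1000)                # queue occupancy (packets)
rate = rng.uniform(0, 10, 1000)                  # arrival rate (Mbps)
X = np.column_stack([queue, rate])
drop = ((queue > 60) & (rate > 5)).astype(int)   # assumed congestion rule

# Scale features so the RBF kernel treats both dimensions comparably.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, drop)
pred = clf.predict([[90.0, 8.0], [10.0, 2.0]])   # congested vs. lightly loaded
print(pred)
```

    In an actual AQM controller the decision would be recomputed per packet or per sampling interval from live queue statistics.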

  2. VALUE STREAM MAPPING AND ITS SIGNIFICANCE IN THE PRODUCTION PROCESS

    Directory of Open Access Journals (Sweden)

    Daniela Onofrejova

    2015-09-01

    Full Text Available Monitoring flows (material, information, personnel, energy, financial, etc.) in the production process is an inevitable step when searching for improvements. There are radical improvements, known as innovations, and continuous improvement, established by KAIZEN principles and their useful methods. Both approaches focus on processes that add value, and minimise or eliminate those without added value. The main aim of this paper is to analyse the value stream mapping approach and its benefit to the practical world.

  3. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values, arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems, are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also be applied during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study demonstrates the performance of the algorithms, and suggestions are made for choosing the algorithm most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
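
    An alternating SVD-based algorithm of the kind the review favours can be sketched as follows; the rank, iteration count and synthetic data are assumptions for illustration only.

```python
# Alternating-SVD sketch for PCA with missing values: initialize missing
# entries with column means, then repeatedly fit a rank-k SVD and overwrite
# only the missing entries with the low-rank reconstruction.
import numpy as np

def pca_impute(X, rank=2, n_iter=50):
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        X_hat = mu + (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[missing] = X_hat[missing]          # keep observed entries fixed
    return X

rng = np.random.default_rng(3)
# Rank-2 synthetic "process" data with 10% of entries removed.
true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))
obs = true.copy()
mask = rng.random(true.shape) < 0.10
obs[mask] = np.nan

filled = pca_impute(obs, rank=2)
err = np.abs(filled[mask] - true[mask]).mean()
print(f"mean absolute imputation error: {err:.4f}")
```

    On noiseless low-rank data the iteration recovers the missing entries almost exactly; noisy industrial data would leave a residual error.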

  4. When L1 of a vector measure is an AL-space

    OpenAIRE

    Curbera Costello, Guillermo

    1994-01-01

    We consider the space of real functions which are integrable with respect to a countably additive vector measure with values in a Banach space. In a previous paper we showed that this space can be any order continuous Banach lattice with weak order unit. We study a priori conditions on the vector measure which guarantee that the resulting L1 is order isomorphic to an AL-space. We prove that for separable measures with no atoms there exists a c0-valued measure that generates the same spac...

  5. Sharp transition between thermal and quantum tunneling regimes in magnetization relaxation processes

    Science.gov (United States)

    Tejada, J.; Zhang, X. X.; Barbara, B.

    1993-03-01

    In this paper we describe experiments involving measurements of the dependence on time of the thermoremanence magnetization of 2-dimensional random magnets. The low temperature values for the magnetic viscosity agree well with both current theories of tunneling of the magnetization vector (Chudnovsky et al.) and the work of Grabert et al. who predicted that the transition from classical to quantum regime is rather sharp for undamped systems.

  6. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Science.gov (United States)

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied both to select and to combine the candidate RVFLs. When selecting the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system. When combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, yielding a more compact ensemble. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy while reducing the complexity of the ensemble system.
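
    A single RVFL base learner of the kind described above can be sketched in a few lines: random, fixed hidden weights, with output weights solved by minimum-norm least squares (the same initialization the paper uses for the ensemble weights). The architecture details (tanh activation, hidden-layer size, the toy task) are assumptions.

```python
# Bare-bones RVFL base learner (no direct input-to-output links, matching the
# variant used in the record): random fixed hidden layer + pinv output fit.
import numpy as np

class RVFL:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.seed = n_hidden, seed

    def _hidden(self, X):
        # Regenerate the same random weights from the stored seed so that
        # fit() and predict() see identical hidden features.
        rng = np.random.default_rng(self.seed)
        W = rng.normal(size=(X.shape[1], self.n_hidden))
        b = rng.normal(size=self.n_hidden)
        return np.tanh(X @ W + b)

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y    # minimum-norm least squares
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0])                      # toy function-approximation task
model = RVFL(n_hidden=100).fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

    An ensemble in the paper's spirit would train many such RVFLs with different seeds and let ARPSO select and weight them.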

  7. Biased motion vector interpolation for reduced video artifacts.

    NARCIS (Netherlands)

    2011-01-01

    In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a

  8. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD; a generalization of K-means clustering to QSVD). It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in color image analysis and processing.

  9. Rare Hadronic B Decays to Vector, Axial-Vector and Tensors

    International Nuclear Information System (INIS)

    Gao, Y.Y.

    2011-01-01

    The authors review BABAR measurements of several rare B decays, including the vector-axial-vector decays B± → φK₁±(1270), B± → φK₁±(1400) and B± → b₁∓ρ±; the vector-vector decays B± → φK*±(1410), B⁰ → K*⁰K̄*⁰, B⁰ → K*⁰K*⁰ and B⁰ → K*⁺K*⁻; the vector-tensor decays B± → φK₂*(1430)± and φK₂(1770)±/(1820)±; and the vector-scalar decays B± → φK₀*(1430)±. Understanding the observed polarization pattern requires amplitude contributions from an uncertain source.

  10. Assessing invasion process through pathway and vector analysis: case of saltcedar (Tamarix spp.)

    Directory of Open Access Journals (Sweden)

    Evangelina Natale

    2012-12-01

    Full Text Available Biological invasions are one of the most pervasive environmental threats to native ecosystems worldwide. The spontaneous spread of saltcedar is a particular threat to biodiversity conservation in arid and semiarid environments. In Argentina, three species belonging to this genus have been recognized as invaders. The aim of the present study was to identify the main dispersal vectors and pathways in order to refine risk analysis and increase our ability to predict new areas at risk of Tamarix establishment. We surveyed and categorized 223 populations: 39% as invasive, 26% as established, 21% as contained and 14% as detected in nature. Dispersion of saltcedar was found to be associated with watercourses and human-driven disturbances; in addition, roads were found to be relevant for the introduction of propagules into new environments. Considering the potential impact of saltcedar invasion, and that it is an easily wind-dispersed invasive, it is necessary to implement strategies to monitor dispersal pathways and take action to eliminate invasion foci, particularly in vulnerable and high conservation value areas.

  11. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Kawasaki, Nobuo [and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These results are reported in three parts: vectorization, parallelization and porting. This report describes the vectorization part, covering the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU. The parallelization part describes the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the Cylindrical Direct Numerical Simulation code CYLDNS and the molecular dynamics code DGR for simulating radiation damage in diamond crystals. The porting part describes the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II, as well as a survey for porting the command-driven interactive data analysis plotting program IPLOT. (author)

  12. Value conditioning modulates visual working memory processes.

    Science.gov (United States)

    Thomas, Paul M J; FitzGibbon, Lily; Raymond, Jane E

    2016-01-01

    Learning allows the value of motivationally salient events to become associated with stimuli that predict those events. Here, we asked whether value associations could facilitate visual working memory (WM), and whether such effects would be valence dependent. Our experiment was specifically designed to isolate value-based effects on WM from value-based effects on selective attention that might be expected to bias encoding. In a simple associative learning task, participants learned to associate the color of tinted faces with gaining or losing money, or with neither. Tinted faces then served as memoranda in a face-identity WM task for which the previously learned color associations were irrelevant and no monetary outcomes were forthcoming. Memory was best for faces with gain-associated tints, poorest for faces with loss-associated tints, and intermediate for faces with no-outcome-associated tints. Value associated with one item in the WM array did not modulate memory for other items in the array. Eye movements when studying faces did not depend on the valence of previously learned color associations, arguing against value-based biases being due to differential encoding. This valence-sensitive value-conditioning effect on WM appears to result from modulation of WM maintenance processes. (c) 2015 APA, all rights reserved.

  13. Creating customer value by streamlining business processes.

    Science.gov (United States)

    Vantrappen, H

    1992-02-01

    Much of the strategic preoccupation of senior managers in the 1990s is focusing on the creation of customer value. Companies are seeking competitive advantage by streamlining the three processes through which they interact with their customers: product creation, order handling and service assurance. 'Micro-strategy' is a term which has been coined for the trade-offs and decisions on where and how to streamline these three processes. The article discusses micro-strategies applied by successful companies.

  14. Traffic and random processes an introduction

    CERN Document Server

    Mauro, Raffaele

    2015-01-01

    This book deals in a basic and systematic manner with the fundamentals of random function theory, and at the same time looks at aspects related to arrival, vehicle headway and operational speed processes. The work serves as a useful practical and educational tool, aiming to provide stimulus and motivation to investigate issues of such strong applicative interest. It has a clearly discursive and concise structure, in which numerical examples are given to clarify the applications of the suggested theoretical models. Some statistical characterizations are fully developed in order to illustrate the peculiarities of specific modeling approaches; finally, a useful bibliography is provided for in-depth thematic analysis.

  15. Neuron-specific RNA interference using lentiviral vectors

    DEFF Research Database (Denmark)

    Nielsen, Troels Tolstrup; Marion, Ingrid van; Hasholt, Lis

    2009-01-01

    BACKGROUND: Viral vectors have been used in several different settings for the delivery of small hairpin (sh) RNAs. However, most vectors have utilized ubiquitously-expressing polymerase (pol) III promoters to drive expression of the hairpin as a result of the strict requirement for precise...... transcriptional initiation and termination. Recently, pol II promoters have been used to construct vectors for RNA interference (RNAi). By embedding the shRNA into a micro RNA-context (miRNA) the endogenous miRNA processing machinery is exploited to achieve the mature synthetic miRNA (smiRNA), thereby expanding...... the possible promoter choices and eventually allowing cell type specific down-regulation of target genes. METHODS: In the present study, we constructed lentiviral vectors expressing smiRNAs under the control of pol II promoters to knockdown gene expression in cell culture and in the brain. RESULTS: We...

  16. Baryogenesis of the universe in cSMCS model plus iso-doublet vector quark

    Energy Technology Data Exchange (ETDEWEB)

    Darvishi, Neda [Faculty of Physics, University of Warsaw,Pasteura 5, 02-093 Warsaw (Poland)

    2016-11-10

    CP violation in the SM is insufficient to explain the baryon asymmetry of the universe, and therefore an additional source of CP violation is needed. Here, an extension of the SM by a neutral complex scalar singlet with a nonzero vacuum expectation value (cSMCS), plus a heavy vector quark pair, is considered. This model offers spontaneous CP violation and a proper description of baryogenesis; it leads to a first-order electroweak phase transition strong enough to suppress the baryon-violating sphaleron process.

  17. Comparison of ANN (MLP), ANFIS, SVM, and RF models for the online classification of heating value of burning municipal solid waste in circulating fluidized bed incinerators.

    Science.gov (United States)

    You, Haihui; Ma, Zengyi; Tang, Yijun; Wang, Yuelan; Yan, Jianhua; Ni, Mingjiang; Cen, Kefa; Huang, Qunxing

    2017-10-01

    The heating values, particularly the lower heating values, of burning municipal solid waste (MSW) are critically important parameters in operating circulating fluidized bed incineration systems. However, the heating values change widely and frequently, and there is no reliable real-time instrument to measure them during incineration. A rapid, cost-effective, and comparative methodology is proposed to evaluate the heating values of burning MSW online, based on prior knowledge, expert experience, and data-mining techniques. First, the input variables of the model were selected by analyzing the operational mechanism of circulating fluidized bed incinerators, and the corresponding heating value was classified into one of nine fuzzy expressions according to expert advice. Prediction models were then developed using four different nonlinear models: a multilayer perceptron neural network, a support vector machine, an adaptive neuro-fuzzy inference system, and a random forest; a series of optimization schemes was implemented for each model to improve its performance. Finally, a comprehensive comparison study evaluated the performance of the models. Results indicate that the adaptive neuro-fuzzy inference system model outperforms the other three, with the random forest model second-best and the multilayer perceptron model worst. A model with sufficient accuracy would contribute to the control of circulating fluidized bed incinerator operation and provide reliable heating value signals for an automatic combustion control system. Copyright © 2017 Elsevier Ltd. All rights reserved.
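
    The model-comparison step can be sketched with scikit-learn for three of the four model types (MLP, SVM, RF; ANFIS has no scikit-learn counterpart and is omitted here). The synthetic boiler signals and the nine-class labeling rule below are stand-in assumptions, not the incinerator data set.

```python
# Sketch: compare MLP, SVM and RF classifiers mapping operating variables to
# one of nine (fuzzy) heating-value classes, via 3-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(900, 6))                    # 6 assumed operating variables
# Assumed rule: class driven by two dominant signals, split into 9 quantile bins.
score = 1.5 * X[:, 0] - X[:, 1]
edges = np.quantile(score, np.linspace(0, 1, 10)[1:-1])
y = np.digitize(score, edges)                    # classes 0..8

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
results = {name: cross_val_score(m, X, y, cv=3).mean() for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

    The paper's ranking (ANFIS > RF > SVM > MLP) comes from the real incinerator data; on synthetic data the ordering will differ.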

  18. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  19. Derivatives, forms and vector fields on the κ-deformed Euclidean space

    International Nuclear Information System (INIS)

    Dimitrijevic, Marija; Moeller, Lutz; Tsouchnika, Efrossini

    2004-01-01

    The model of κ-deformed space is an interesting example of a noncommutative space, since it allows a deformed symmetry. In this paper, we present new results concerning different sets of derivatives on the coordinate algebra of κ-deformed Euclidean space. We introduce a differential calculus with two interesting sets of one-forms and higher-order forms. The transformation law of vector fields is constructed in accordance with the transformation behaviour of derivatives. The crucial property of the different derivatives, forms and vector fields is that in an n-dimensional spacetime there are always n of them. This is the key difference with respect to conventional approaches, in which the differential calculus is (n + 1)-dimensional. This work shows that derivative-valued quantities such as derivative-valued vector fields appear in a generic way on noncommutative spaces

  20. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    Science.gov (United States)

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    The chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos, generated by optical heterodyning of two ECLs, as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance, but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
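
    The quantize-and-keep-LSBs step can be sketched as below; Gaussian noise stands in for the white-chaos waveform (an assumption), and the ±4σ ADC full scale is likewise illustrative. Each 8-bit sample yields 4 bits, mirroring the 80 GHz × 4 LSB = 320 Gbps arithmetic.

```python
# Sketch: quantize a symmetric "white chaos" stand-in with an 8-bit ADC and
# keep the 4 least significant bits of each sample as the random output.
import numpy as np

rng = np.random.default_rng(6)
samples = rng.normal(0.0, 1.0, 100_000)          # stand-in for the chaos signal

# 8-bit ADC over a +/- 4-sigma full scale (assumed).
codes = np.clip(np.round((samples + 4.0) / 8.0 * 255), 0, 255).astype(np.uint8)
lsb4 = codes & 0x0F                              # keep 4 LSBs per sample
bits = ((lsb4[:, None] >> np.arange(4)) & 1).ravel()

print(f"{bits.size} bits, ones fraction = {bits.mean():.4f}")
```

    Because the amplitude distribution is symmetric, the LSB stream is unbiased to first order; the real system additionally relies on the flat chaos spectrum for sample-to-sample independence.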

  1. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
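
    One way to realize "compare responses to consecutive pulses" is a von Neumann-style pairing per detector: emit a bit only when the two responses differ. This pairing rule and the per-pulse click probability below are assumptions for illustration; the paper's exact comparison rule may differ.

```python
# Illustrative sketch: per-detector comparison of click/no-click responses to
# two successive optical pulses; differing responses yield one unbiased bit.
import numpy as np

rng = np.random.default_rng(7)
p_click = 0.3                        # assumed per-pulse detection probability
n_detectors, n_pulse_pairs = 1024, 500   # e.g. a 32 x 32 APD array

# clicks[d, t, 0/1]: detector d's response to the two pulses of pair t.
clicks = rng.random((n_detectors, n_pulse_pairs, 2)) < p_click
differ = clicks[..., 0] != clicks[..., 1]
bits = clicks[..., 0][differ].astype(int)  # click-then-miss -> 1, miss-then-click -> 0

efficiency = differ.mean()           # fraction of pulse pairs that yield a bit
print(f"{bits.size} bits from {n_detectors * n_pulse_pairs} pulse pairs "
      f"(efficiency {efficiency:.3f}), ones fraction = {bits.mean():.3f}")
```

    With this rule the bits are unbiased regardless of the click probability, and the yield is 2p(1−p) per pulse pair (0.42 for the assumed p = 0.3; the paper reports 32.8% for its scheme).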

  2. Vector fields on nonorientable surfaces

    Directory of Open Access Journals (Sweden)

    Ilie Barza

    2003-01-01

    X, and the space of vector fields on X are proved by using a symmetrisation process. An example related to the normal derivative on the border of the Möbius strip supports the nontriviality of the concepts introduced in this paper.

  3. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    Science.gov (United States)

    2016-12-01

    ARL-TR-7894, December 2016, US Army Research Laboratory. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer, by Richard Saucier (Survivability/Lethality Analysis Directorate). Approved for public release; distribution is unlimited. From the introduction: this report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional...

  4. Prediction of soil CO2 flux in sugarcane management systems using the Random Forest approach

    Directory of Open Access Journals (Sweden)

    Rose Luiza Moraes Tavares

    Full Text Available ABSTRACT: The Random Forest algorithm is a data mining technique used for classifying attributes in order of importance in explaining the variation in a target attribute, such as soil CO2 flux. This study aimed to identify variables that predict soil CO2 flux in sugarcane management systems using the machine-learning algorithm Random Forest. Two different sugarcane management areas in the state of São Paulo, Brazil, were selected: burned and green. In each area, we assembled a sampling grid of 81 georeferenced points to assess soil CO2 flux with an automated portable soil gas chamber using infrared spectroscopy, during the dry season of 2011 and the rainy season of 2012. In addition, we sampled the soil to evaluate physical, chemical, and microbiological attributes. For data interpretation, we used the Random Forest algorithm, based on a combination of predicted decision trees (machine-learning algorithms in which every tree depends on the values of a random vector sampled independently, with the same distribution, for all trees in the forest). The results indicated that soil clay content was the most important attribute in explaining the CO2 flux in the areas studied during the evaluated period. The Random Forest algorithm produced a model with a good fit (R² = 0.80) between predicted and observed values.
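
    The Random Forest step, regressing CO2 flux on soil attributes and ranking attribute importance, can be sketched with scikit-learn; the synthetic data below encode an assumed clay-dominated response purely for illustration.

```python
# Sketch: Random Forest regression of soil CO2 flux on soil attributes, then
# rank the attributes by impurity-based feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n = 81                                       # one sample per grid point
clay = rng.uniform(10, 60, n)                # clay content (%)
moisture = rng.uniform(5, 40, n)             # soil moisture (%)
ph = rng.uniform(4.5, 7.5, n)                # soil pH
X = np.column_stack([clay, moisture, ph])
# Assumed response: clay dominates, moisture contributes weakly.
co2 = 0.08 * clay + 0.02 * moisture + rng.normal(0, 0.3, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, co2)
for name, imp in zip(["clay", "moisture", "pH"], rf.feature_importances_):
    print(f"{name}: importance = {imp:.3f}")
```

    With the assumed response, clay receives the largest importance score, mirroring the study's finding that clay content best explained CO2 flux.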

  5. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    Science.gov (United States)

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether sample size, besides sampling method, also had an impact on prognostic value, the SRS method was additionally tested with a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, evidently because small and large nuclei were (unconsciously) not included. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
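
    Systematic random sampling as used here can be sketched as "every k-th nucleus from a random start"; the lognormal area distribution and sample sizes below are illustrative assumptions.

```python
# Sketch of systematic random sampling (SRS) of nuclei: take every k-th
# nucleus from an ordered list starting at a random offset, then compute the
# mean and standard deviation of nuclear area (MNA, SDNA).
import numpy as np

def srs_sample(areas, n_sample, rng):
    k = len(areas) // n_sample               # sampling interval
    start = rng.integers(0, k)                # random start within one interval
    idx = start + k * np.arange(n_sample)
    return np.asarray(areas)[idx]

rng = np.random.default_rng(9)
# Hypothetical nuclear areas (um^2) for one tumour, lognormal by assumption.
areas = rng.lognormal(mean=4.0, sigma=0.4, size=800)

sample = srs_sample(areas, n_sample=50, rng=rng)
mna, sdna = sample.mean(), sample.std(ddof=1)
print(f"MNA = {mna:.1f} um^2, SDNA = {sdna:.1f} um^2")
```

    Unlike 'at convenience' selection, every nucleus has the same inclusion probability, so extreme small and large nuclei are represented and the SDNA is not systematically underestimated.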

  6. An introduction to branching measure-valued processes

    CERN Document Server

    Dynkin, Eugene B

    1994-01-01

    For about half a century, two classes of stochastic processes, Gaussian processes and processes with independent increments, have played an important role in the development of stochastic analysis and its applications. During the last decade, a third class, branching measure-valued (BMV) processes, has also been the subject of much research. A common feature of all three classes is that their finite-dimensional distributions are infinitely divisible, allowing the use of the powerful analytic tool of Laplace (or Fourier) transforms. All three classes, in an infinite-dimensional setting, provide means for the study of physical systems with infinitely many degrees of freedom. This is the first monograph devoted to the theory of BMV processes. Dynkin first constructs a large class of BMV processes, called superprocesses, by passing to the limit from branching particle systems. Then he proves that, under certain restrictions, a general BMV process is a superprocess. A special chapter is devoted to the connections between ...

  7. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 1

    International Nuclear Information System (INIS)

    Ernenwein, R.; Rohmer, M.M.; Benard, M.

    1990-01-01

    We present a program system for ab initio molecular orbital calculations on vector and parallel computers. The present article is devoted to the computation of one- and two-electron integrals over contracted Gaussian basis sets involving s-, p-, d- and f-type functions. The McMurchie and Davidson (MMD) algorithm has been implemented and parallelized by distributing the calculation of the 55 relevant classes of integrals over a limited number of logical tasks. All sections of the MMD algorithm have been efficiently vectorized, leading to a scalar/vector ratio of 5.8. Different algorithms are proposed and compared for an optimal vectorization of the contraction of the 'intermediate integrals' generated by the MMD formalism. Advantage is taken of dynamic storage allocation for tuning the length of the vector loops (i.e. the size of the vectorization buffer) as a function of (i) the total memory available for the job, (ii) the number of logical tasks defined by the user (≤13), and (iii) the storage requested by each specific class of integrals. Test calculations carried out on a CRAY-2 computer show that the average number of finite integrals computed over an (s, p, d, f) CGTO basis set is about 1,180,000 per second per processor. The combination of vectorization and parallelism on this 4-processor machine reduces the CPU time by a factor larger than 20 with respect to the scalar and sequential performance. (orig.)

  8. Quantum correlations and dynamics from classical random fields valued in complex Hilbert spaces

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2010-01-01

    One of the crucial differences between mathematical models of classical and quantum mechanics (QM) is the use of the tensor product of the state spaces of subsystems as the state space of the corresponding composite system. (To describe an ensemble of classical composite systems, one uses random variables taking values in the Cartesian product of the state spaces of subsystems.) We show that, nevertheless, it is possible to establish a natural correspondence between the classical and the quantum probabilistic descriptions of composite systems. Quantum averages for composite systems (including entangled) can be represented as averages with respect to classical random fields. It is essentially what Albert Einstein dreamed of. QM is represented as classical statistical mechanics with infinite-dimensional phase space. While the mathematical construction is completely rigorous, its physical interpretation is a complicated problem. We present the basic physical interpretation of prequantum classical statistical field theory in Sec. II. However, this is only the first step toward real physical theory.
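The correspondence described above can be checked in a finite-dimensional toy case: if φ is a complex Gaussian random field (here, a random vector) whose covariance equals a density matrix ρ, then the classical average of the quadratic form φ†Aφ reproduces the quantum average Tr(Aρ). A minimal numerical sketch, where the dimension, sample count, and Cholesky construction are illustrative choices, not Khrennikov's construction itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A density matrix rho (positive, unit trace) and a Hermitian observable A.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = M @ M.conj().T
rho /= np.trace(rho).real
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (H + H.conj().T) / 2

# Classical random field: complex Gaussian vectors phi with covariance
# E[phi phi^dagger] = rho, built via the Cholesky factor L (rho = L L^dagger).
L = np.linalg.cholesky(rho)
z = (rng.normal(size=(50_000, n)) + 1j * rng.normal(size=(50_000, n))) / np.sqrt(2)
phi = z @ L.T  # each row is L z_s, so E[phi phi^dagger] = L L^dagger = rho

# Classical average of phi^dagger A phi versus the quantum average Tr(A rho).
classical_avg = np.mean(np.einsum('si,ij,sj->s', phi.conj(), A, phi)).real
quantum_avg = np.trace(A @ rho).real
```

The two averages agree up to Monte Carlo error, which is the finite-dimensional shadow of representing quantum averages as averages over classical random fields.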

  9. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx||_2 subject to ||Ax - b||_2 = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
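The TRSVD building block, truncating a rank-(k + q) randomized SVD back to rank k, can be sketched as follows. This is a generic randomized-SVD sketch, not the authors' MTRSVD code; the oversampling value and the test matrix are illustrative:

```python
import numpy as np

def trsvd(A, k, q=5, seed=0):
    """Rank-k truncated randomized SVD: sketch A with k+q Gaussian test
    vectors, orthonormalize, project, then truncate the small SVD to rank k."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(A.shape[1], k + q))   # q is the oversampling parameter
    Q, _ = np.linalg.qr(A @ G)                 # orthonormal basis for the sketch
    U_b, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_b
    return U[:, :k], s[:k], Vt[:k, :]          # rank-k truncation of the RSVD

# A 100 x 80 matrix of exact rank 5: TRSVD should recover it almost exactly.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
U, s, Vt = trsvd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For a matrix of exact rank k, the Gaussian sketch captures the range almost surely, so the relative reconstruction error is at machine-precision level; on genuinely ill-posed problems the accuracy is governed by the bounds discussed in the abstract.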

  10. Measurement of Charmless B to Vector-Vector decays at BaBar

    International Nuclear Information System (INIS)

    Olaiya, Emmanuel

    2011-01-01

    The authors present results of B → vector-vector (VV) and B → vector-axial-vector (VA) decays: B⁰ → φX (X = φ, ρ⁺ or ρ⁰), B⁺ → φK^(*)⁺, B⁰ → K*K̄*, B⁰ → ρ⁺b₁⁻ and B⁺ → K*⁰a₁⁺. The largest dataset used for these results is based on 465 × 10⁶ Υ(4S) → B B̄ decays, collected with the BABAR detector at the PEP-II B-meson factory located at the Stanford Linear Accelerator Center (SLAC). Using larger datasets, the BABAR experiment has provided more precise B → VV measurements, further supporting the smaller-than-expected longitudinal polarization fraction of B → φK*. Additional B-meson decays to vector-vector and vector-axial-vector final states have also been studied with a view to shedding light on the polarization anomaly. Taking into account the available errors, we find no disagreement between theory and experiment for these additional decays.

  11. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  12. Vector optical fields with bipolar symmetry of linear polarization.

    Science.gov (United States)

    Pan, Yue; Li, Yongnan; Li, Si-Min; Ren, Zhi-Cheng; Si, Yu; Tu, Chenghou; Wang, Hui-Tian

    2013-09-15

    We focus on a new kind of vector optical field with bipolar symmetry of linear polarization, instead of cylindrical or elliptical symmetry, enriching the family of vector optical fields. We design theoretically and generate experimentally the demanded vector optical fields and then explore some novel tightly focusing properties. The geometric configurations of the states of polarization provide additional degrees of freedom for engineering the field distribution at the focus for specific applications such as lithography, optical trapping, and material processing.

  13. Vectorization of DOT3.5 code

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Ishiguro, Misako; Tsutsui, Tsuneo

    1990-07-01

    In this report, we describe the vectorization of the two-dimensional Sn-method radiation transport code DOT3.5. The vectorized codes include not only the original NEA version developed at ORNL but also the versions improved by JAERI: the DOT3.5 FNS version for fusion neutronics analyses, the DOT3.5 FER version for fusion reactor design, and the ESPRIT module of the RADHEAT-V4 code system for radiation shielding and radiation transport analyses. In DOT3.5, input/output processing time accounts for a large part of the elapsed time when a large number of energy groups and/or spatial mesh points are used in the calculated problem. Therefore, an improvement has been made to speed up input/output processing in the DOT3.5 FNS version and the DOT-DD (Double Differential cross section) code. The total speedup ratio of the vectorized version relative to the original scalar one is 1.7∼1.9 for the DOT3.5 NEA version, 2.2∼2.3 for the DOT3.5 FNS version, 1.7 for the DOT3.5 FER version, and 3.1∼4.4 for RADHEAT-V4, respectively. The elapsed times for the improved DOT3.5 FNS version and DOT-DD are reduced to 50∼65% of that of the original version by the input/output speedup. In this report, we describe a summary of the codes, the techniques used for vectorization and input/output speedup, verification of the computed results, and the speedup effect. (author)

  14. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time
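With randomly sampled evolution times there is no uniform grid for an FFT, so the transform is evaluated as an explicit sum over the measured points. A minimal one-dimensional sketch of this idea; the spectral width, resonance frequency, and sample count are illustrative, not the paper's acquisition parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
sw = 1000.0          # spectral width, Hz (hypothetical)
f0 = 237.0           # true resonance frequency, Hz (hypothetical)
n = 128              # number of randomly sampled evolution times

# Random (instead of uniform) sampling of the evolution time dimension.
t = rng.uniform(0.0, n / sw, size=n)
signal = np.exp(2j * np.pi * f0 * t)

# Direct Fourier transform on a frequency grid: the exponential sum is
# computed explicitly over the irregular time points.
freqs = np.arange(0.0, sw, 1.0)
spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ signal)

peak = freqs[np.argmax(spectrum)]
```

At the true frequency all phases align and the sum reaches its maximum, so the peak lands on f0; off-resonance contributions average out like a random walk, which is the noise-like sampling artifact random distributions trade against resolution.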

  15. Introducing two Random Forest based methods for cloud detection in remote sensing images

    Science.gov (United States)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite image processing to retrieve atmospheric and lithospheric parameters. Currently, some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), which incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including the Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF), for highly accurate cloud detection on remote sensing images. FLFRF first fuses the visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice and background, or thick cloud, thin cloud and background. DLFRF considers the visible, IR and thermal features (both spectral and textural) separately and inserts each set of features into the RF model. Then, it holds the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods. The
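The decision-level fusion step of DLFRF, combining per-modality classifiers by majority vote, can be sketched generically. The per-pixel labels below are made up for illustration and are not the paper's classifier outputs:

```python
from collections import Counter

def decision_level_fusion(*label_lists):
    """Fuse per-pixel labels from independent classifiers by majority vote
    (a DLFRF-style combination step; ties fall to the first-seen label)."""
    fused = []
    for votes in zip(*label_lists):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Hypothetical per-pixel labels from the visible, IR and thermal classifiers.
visible = ['cloud', 'snow', 'cloud', 'background']
ir      = ['cloud', 'snow', 'snow',  'cloud']
thermal = ['snow',  'snow', 'cloud', 'background']

labels = decision_level_fusion(visible, ir, thermal)
```

Each pixel receives the label that wins the vote across the three modalities; in DLFRF the votes would come from three separately trained RF models rather than fixed label lists.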

  16. On the extreme value statistics of normal random matrices and 2D Coulomb gases: Universality and finite N corrections

    Science.gov (United States)

    Ebrahimi, R.; Zohren, S.

    2018-03-01

    In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalized to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which are important for practical applications given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
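The object of study is easy to sample numerically: for a complex Ginibre matrix the eigenvalue of largest modulus sits near the circular-law edge at radius √N, with slowly decaying finite-N fluctuations around it. A minimal sampling sketch; the matrix size is illustrative and the Gumbel rescaling of the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Complex Ginibre matrix: i.i.d. complex Gaussian entries of unit variance.
G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
eigs = np.linalg.eigvals(G)

# By the circular law the spectrum fills a disk of radius ~ sqrt(N); the
# eigenvalue of largest modulus fluctuates near that edge, and it is these
# fluctuations whose limiting (Gumbel) law and finite-N corrections the
# paper computes.
max_mod = np.max(np.abs(eigs))
```

Repeating this over many draws and rescaling max_mod yields the empirical extreme value distribution whose slow convergence to the Gumbel limit motivates the finite-N corrections.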

  17. Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature

    International Nuclear Information System (INIS)

    Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.

    2009-01-01

    We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.

  18. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. Thus, the analysis of such time series data seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. 
Supplementary data are available at
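Step (i), clustering the time-condition concatenated vectors, can be sketched with a tiny k-means on synthetic data. This is a generic illustration of the concatenation idea, not the TimesVector implementation; the deterministic centroid initialization and the toy expression profiles are simplifying assumptions:

```python
import numpy as np

def concat_and_cluster(data, k=2, iters=20):
    """Flatten each gene's time x condition matrix into one vector, then
    run k-means over those vectors.  Deterministic init: the first and
    last genes serve as initial centroids (a simplification)."""
    genes = data.reshape(data.shape[0], -1)          # gene x (time*condition)
    centroids = genes[[0, -1]].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(genes[:, None, :] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centroids[c] = genes[assign == c].mean(axis=0)
    return assign

# Synthetic data: 6 genes x 5 time points x 2 conditions; genes 0-2 rise
# over time in both conditions, genes 3-5 fall.
t = np.linspace(0, 1, 5)
rng = np.random.default_rng(0)
up = np.stack([np.stack([t, t], axis=1)] * 3) + rng.normal(0, 0.05, (3, 5, 2))
down = np.stack([np.stack([1 - t, 1 - t], axis=1)] * 3) + rng.normal(0, 0.05, (3, 5, 2))
data = np.concatenate([up, down])

assign = concat_and_cluster(data)
```

The rising and falling genes separate into the two clusters; TimesVector's steps (ii) and (iii) would then inspect each cluster for condition-wise similar versus distinct patterns and rescue genes from unclassified clusters.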

  19. A design of a computer complex including vector processors

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1982-12-01

    We, members of the Computing Center of the Japan Atomic Energy Research Institute, have been engaged for the past six years in research on the adaptability of vector processing to large-scale nuclear codes. The research has been done in collaboration with researchers and engineers of JAERI and a computer manufacturer. In this research, forty large-scale nuclear codes were investigated from the viewpoint of vectorization. Among them, twenty-six codes were actually vectorized and executed. As a result of the investigation, it is now estimated that about seventy percent of nuclear codes and seventy percent of the total CPU time at JAERI are highly vectorizable. Based on the data obtained by the investigation, (1) currently vectorizable CPU time, (2) the necessary number of vector processors, (3) the manpower necessary for vectorization of nuclear codes, (4) the computing speed, memory size, number of parallel I/O paths, and size and speed of the I/O buffer of a vector processor suitable for our applications, and (5) the necessary software and operational policy for the use of vector processors are discussed; finally, (6) a computer complex including vector processors is presented in this report. (author)

  20. Efficient tests for equivalence of hidden Markov processes and quantum random walks

    NARCIS (Netherlands)

    U. Faigle; A. Schönhuth (Alexander)

    2011-01-01

    While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine the equivalence of two HMP parametrizations

  1. Renormalization of the axial-vector current in QCD

    International Nuclear Information System (INIS)

    Chiu, C.B.; Pasupathy, J.; Wilson, S.L.

    1985-01-01

    Following the method of Ioffe and Smilga, the propagation of the baryon current in an external constant axial-vector field is considered. The close similarity of the operator-product expansion with and without an external field is shown to arise from the chiral invariance of gauge interactions in perturbation theory. Several sum rules corresponding to various invariants, both for the nucleon and the hyperons, are derived. The analysis of the sum rules is carried out by two independent methods, one called the ratio method and the other called the continuum method, paying special attention to the nondiagonal transitions induced by the external field between the ground state and excited states. Up to operators of dimension six, two new external-field-induced vacuum expectation values enter the calculations. Previous work determining these expectation values from PCAC (partial conservation of the axial-vector current) is utilized. Our determination from the sum rules of the nucleon axial-vector renormalization constant G_A, as well as the Cabibbo coupling constants in the SU(3)-symmetric limit (m_s = 0), is in reasonable accord with the experimental values. Uncertainties in the analysis are pointed out. The case of broken flavor SU(3) symmetry is also considered. While in the ratio method the results are stable under variation of the fiducial interval of the Borel mass parameter over which the left-hand side and the right-hand side of the sum rules are matched, in the continuum method the results are less stable. Another set of sum rules determines the value of the linear combination 7F - 5D to be ≈ 0, or D/(F + D) ≈ 7/12.

  2. Increased certification of semi-device independent random numbers using many inputs and more post-processing

    International Nuclear Information System (INIS)

    Mironowicz, Piotr; Tavakoli, Armin; Hameedi, Alley; Marques, Breno; Bourennane, Mohamed; Pawłowski, Marcin

    2016-01-01

    Quantum communication with systems of dimension larger than two provides advantages in information processing tasks. Examples include higher rates of key distribution and random number generation. The main disadvantage of using such multi-dimensional quantum systems is the increased complexity of the experimental setup. Here, we analyze a not-so-obvious problem: the relation between randomness certification and the computational requirements of post-processing the experimental data. In particular, we consider semi-device-independent randomness certification from an experiment using a four-dimensional quantum system to violate the classical bound of a random access code. Using state-of-the-art techniques, a smaller quantum violation requires more computational power to demonstrate randomness, which at some point becomes impossible with today's computers, although the randomness is (probably) still there. We show that by dedicating more input settings of the experiment to randomness certification, and then applying more computational post-processing to the experimental data which correspond to a quantum violation, one may increase the amount of certified randomness. Furthermore, we introduce a method that significantly lowers the computational complexity of randomness certification. Our results show how more randomness can be generated without altering the hardware and indicate a path for future semi-device-independent protocols to follow. (paper)

  3. The value of random biopsies, omentectomy, and hysterectomy in operations for borderline ovarian tumors

    DEFF Research Database (Denmark)

    Kristensen, Gitte Schultz; Schledermann, Doris; Mogensen, Ole

    2014-01-01

    OBJECTIVE: Borderline ovarian tumors (BOTs) are treated surgically like malignant ovarian tumors, with hysterectomy, salpingectomy, omentectomy, and multiple random peritoneal biopsies in addition to removal of the ovaries. It is, however, unknown how often removal of macroscopically normal-appearing tissues leads to the finding of microscopic disease. To evaluate the value of random biopsies, omentectomy, and hysterectomy in operations for BOT, the macroscopic and microscopic findings in a cohort of these patients were reviewed retrospectively. MATERIALS: Women treated for BOT at Odense University... (....7%) in International Federation of Gynecology and Obstetrics stage I, 9 (12%) in stage II, and 7 (9.3%) in stage III. The histologic subtypes were serous (68%), mucinous (30.7%), and Brenner type (1.3%). Macroscopically radical surgery was performed in 62 patients (82.7%), and 46 (61.3%) received complete staging...

  4. Value Stream Mapping: Foam Collection and Processing.

    Energy Technology Data Exchange (ETDEWEB)

    Sorensen, Christian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    The effort to collect and process foam for the purpose of recycling performed by the Material Sustainability and Pollution Prevention (MSP2) team at Sandia National Laboratories is an incredible one, but in order to make it run more efficiently it needed some tweaking. This project started in June of 2015. We used the Value Stream Mapping process to allow us to look at the current state of the foam collection and processing operation. We then thought of all the possible ways the process could be improved. Soon after that we discussed which of the "dreams" were feasible. And finally, we assigned action items to members of the team so as to ensure that the improvements actually occur. These improvements will then, due to varying factors, continue to occur over the next couple years.

  5. Recent advances in genetic modification of adenovirus vectors for cancer treatment.

    Science.gov (United States)

    Yamamoto, Yuki; Nagasato, Masaki; Yoshida, Teruhiko; Aoki, Kazunori

    2017-05-01

    Adenoviruses are widely used to deliver genes to a variety of cell types and have been used in a number of clinical trials for gene therapy and oncolytic virotherapy. However, several concerns must be addressed for the clinical use of adenovirus vectors. Selective delivery of a therapeutic gene by adenovirus vectors to target cancer is precluded by the widespread distribution of the primary cellular receptors. The systemic administration of adenoviruses results in hepatic tropism independent of the primary receptors. Adenoviruses induce strong innate and acquired immunity in vivo. Furthermore, several modifications to these vectors are necessary to enhance their oncolytic activity and ensure patient safety. As such, the adenovirus genome has been engineered to overcome these problems. The first part of the present review outlines recent progress in the genetic modification of adenovirus vectors for cancer treatment. In addition, several groups have recently developed cancer-targeting adenovirus vectors by using libraries that display random peptides on a fiber knob. Pancreatic cancer-targeting sequences have been isolated, and these oncolytic vectors have been shown by our group to be associated with a higher gene transduction efficiency and more potent oncolytic activity in cell lines, murine models, and surgical specimens of pancreatic cancer. In the second part of this review, we explain that combining cancer-targeting strategies can be a promising approach to increase the clinical usefulness of oncolytic adenovirus vectors. © 2017 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.

  6. A Randomized Longitudinal Factorial Design to Assess Malaria Vector Control and Disease Management Interventions in Rural Tanzania

    Directory of Open Access Journals (Sweden)

    Randall A. Kramer

    2014-05-01

    Full Text Available The optimization of malaria control strategies is complicated by constraints posed by local health systems, infrastructure, limited resources, and the complex interactions between infection, disease, and treatment. The purpose of this paper is to describe the protocol of a randomized factorial study designed to address this research gap. This project will evaluate two malaria control interventions in Mvomero District, Tanzania: (1) a disease management strategy involving early detection and treatment by community health workers using rapid diagnostic technology; and (2) vector control through community-supported larviciding. Six study villages were assigned to each of four groups (control, early detection and treatment, larviciding, and early detection and treatment plus larviciding). The primary endpoint of interest was change in malaria infection prevalence across the intervention groups measured during annual longitudinal cross-sectional surveys. Recurring entomological surveying, household surveying, and focus group discussions will provide additional valuable insights. At baseline, 962 households across all 24 villages participated in a household survey; 2,884 members from 720 of these households participated in subsequent malariometric surveying. The study design will allow us to estimate the effect sizes of different intervention mixtures. Careful documentation of our study protocol may also serve other researchers designing field-based intervention trials.

  7. A randomized longitudinal factorial design to assess malaria vector control and disease management interventions in rural Tanzania.

    Science.gov (United States)

    Kramer, Randall A; Mboera, Leonard E G; Senkoro, Kesheni; Lesser, Adriane; Shayo, Elizabeth H; Paul, Christopher J; Miranda, Marie Lynn

    2014-05-16

    The optimization of malaria control strategies is complicated by constraints posed by local health systems, infrastructure, limited resources, and the complex interactions between infection, disease, and treatment. The purpose of this paper is to describe the protocol of a randomized factorial study designed to address this research gap. This project will evaluate two malaria control interventions in Mvomero District, Tanzania: (1) a disease management strategy involving early detection and treatment by community health workers using rapid diagnostic technology; and (2) vector control through community-supported larviciding. Six study villages were assigned to each of four groups (control, early detection and treatment, larviciding, and early detection and treatment plus larviciding). The primary endpoint of interest was change in malaria infection prevalence across the intervention groups measured during annual longitudinal cross-sectional surveys. Recurring entomological surveying, household surveying, and focus group discussions will provide additional valuable insights. At baseline, 962 households across all 24 villages participated in a household survey; 2,884 members from 720 of these households participated in subsequent malariometric surveying. The study design will allow us to estimate the effect sizes of different intervention mixtures. Careful documentation of our study protocol may also serve other researchers designing field-based intervention trials.
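The effect-size estimation that a 2 × 2 factorial design supports can be sketched with hypothetical arm-level prevalences. The numbers below are invented for illustration and are not trial results:

```python
# Hypothetical malaria prevalence per arm of the 2x2 factorial design:
# early detection/treatment ('edt') and larviciding ('larv'), alone,
# combined ('both'), or neither ('control').
prev = {'control': 0.30, 'edt': 0.20, 'larv': 0.22, 'both': 0.12}

# Main effect of each intervention: mean prevalence over arms that include
# it minus mean over arms that do not -- the estimate a factorial design
# is built to provide.
edt_effect = ((prev['edt'] + prev['both']) - (prev['control'] + prev['larv'])) / 2
larv_effect = ((prev['larv'] + prev['both']) - (prev['control'] + prev['edt'])) / 2

# Interaction: does combining the interventions deviate from additivity?
interaction = prev['both'] - prev['edt'] - prev['larv'] + prev['control']
```

With these made-up numbers the early detection and treatment arm lowers prevalence by 10 percentage points and larviciding by 8, with no interaction; the trial's actual analysis would estimate the same contrasts from the longitudinal survey data.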

  8. Vectorization of nuclear codes on FACOM 230-75 APU computer

    International Nuclear Information System (INIS)

    Harada, Hiroo; Higuchi, Kenji; Ishiguro, Misako; Tsutsui, Tsuneo; Fujii, Minoru

    1983-02-01

    To provide for the future usage of supercomputers, we have investigated the vector processing efficiency of the nuclear codes in use at JAERI. The investigation was performed using the FACOM 230-75 APU computer. The codes are CITATION (3D neutron diffusion), SAP5 (structural analysis), CASCMARL (irradiation damage simulation), FEM-BABEL (3D neutron diffusion by FEM), GMSCOPE (microscope simulation), and DWBA (cross-section calculation for molecular collisions). A new type of cell density calculation for the particle-in-cell method was also investigated. For each code we obtained a significant speedup, ranging from 1.8 (CASCMARL) to 7.5 (GMSCOPE). In this report we describe the dynamic profile analysis of the codes' running time, the numerical algorithms used, the program restructuring for vectorization, numerical experiments on the iterative processes, the vectorized ratios, the speedup ratios on the FACOM 230-75 APU computer, and some views on vectorization. (author)

  9. Modeling a ground-coupled heat pump system by a support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-08-15

    This paper reports a modeling study of ground-coupled heat pump (GCHP) system performance (COP) using a support vector machine (SVM) method. A GCHP system is a multi-variable system that is hard to model by conventional methods. The SVM, by contrast, has a superior generalization capability, and this capability is independent of the dimensionality of the input data. In this study, an SVM-based method was adopted for efficient modeling of the GCHP system. The linear-kernel (Lin-kernel) SVM method was quite efficient for modeling purposes and did not require prior knowledge of the system. The performance of the proposed methodology was evaluated using several statistical validation parameters. For the proposed Lin-kernel SVM method, the root-mean-squared (RMS) value is 0.002722, the coefficient of multiple determination (R^2) is 0.999999, the coefficient of variation (cov) is 0.077295, and the mean error function (MEF) value is 0.507437. The optimum parameters of the SVM method were determined using a greedy search algorithm, which proved effective for this purpose. The simulation results show that the SVM is a good method for predicting the COP of a GCHP system. Computation of the SVM model is faster than that of other machine learning techniques (artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS)) because there are fewer free parameters and only the support vectors (a fraction of all data) are used in the generalization process. (author)
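
    A minimal sketch of linear-kernel SVM regression for COP prediction, using scikit-learn's SVR on synthetic data. The feature names (ground temperature, inlet temperature, flow rate) and the roughly linear synthetic relationship below are illustrative assumptions, not the paper's actual measurements or model:

```python
# Sketch: linear-kernel support vector regression of COP on synthetic inputs.
# Features and the generating relationship are invented for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# Hypothetical inputs: ground temp (°C), inlet temp (°C), flow rate (kg/s).
X = rng.uniform([5.0, 25.0, 0.1], [15.0, 45.0, 0.5], size=(n, 3))
# Hypothetical near-linear COP response plus small measurement noise.
cop = 3.0 + 0.1 * X[:, 0] - 0.02 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0.0, 0.01, n)

model = SVR(kernel="linear", C=10.0, epsilon=0.01)
model.fit(X[:150], cop[:150])                      # train on first 150 samples
rmse = np.sqrt(np.mean((model.predict(X[150:]) - cop[150:]) ** 2))
```

With a linear kernel only the support vectors enter the prediction, which is the source of the speed advantage the abstract mentions over ANN/ANFIS.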

  10. Canon, Value, and Cultural Heritage: New Processes of Assigning Value in the Postdigital Realm

    Directory of Open Access Journals (Sweden)

    Nuria Rodríguez-Ortega

    2018-05-01

    Full Text Available The modes through which the new conditions of postdigital society are leading to a redefinition of the processes of assigning value, and of the values themselves, that have hitherto prevailed in the comprehension of cultural heritage are diverse and broad. Within this framework of critical inquiry, this paper discusses the mechanisms of canon formation in the context of the web as the new laboratory of cultural production. It is argued that the main dynamics observed can be elucidated as a triad: hypercanonization, social decanonization, and transcanonization. These three processes operate simultaneously, interlaced, and unfold in dialectical tension between the rise of the new (practices, actors, values, ideas) and the maintenance of the old (structures that already exist). This paper delves into the ways in which such interlaced dynamics and tensions might reshape the principles by which canonicity develops, and poses open questions about the challenges facing us, to be addressed in further studies and approaches to the problem.

  11. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Table of contents (as listed): Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments, Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables, The Law of Large Numbers; Conditional Expectation; Generating Functions, Branching Processes, Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples, Probability Distributions of Markov Chains; The First Step Analysis, Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case, Simulation; Distribution F...

  12. An Effective NoSQL-Based Vector Map Tile Management Approach

    Directory of Open Access Journals (Sweden)

    Lin Wan

    2016-11-01

    Full Text Available Within a digital map service environment, the rapid growth of Spatial Big-Data is driving new requirements for effective mechanisms for massive online vector map tile processing. The emergence of Not Only SQL (NoSQL databases has resulted in a new data storage and management model for scalable spatial data deployments and fast tracking. They better suit the scenario of high-volume, low-latency network map services than traditional standalone high-performance computer (HPC or relational databases. In this paper, we propose a flexible storage framework that provides feasible methods for tiled map data parallel clipping and retrieval operations within a distributed NoSQL database environment. We illustrate the parallel vector tile generation and querying algorithms with the MapReduce programming model. Three different processing approaches, including local caching, distributed file storage, and the NoSQL-based method, are compared by analyzing the concurrent load and calculation time. An online geological vector tile map service prototype was developed to embed our processing framework in the China Geological Survey Information Grid. Experimental results show that our NoSQL-based parallel tile management framework can support applications that process huge volumes of vector tile data and improve performance of the tiled map service.

  13. The Added Value of the Project Selection Process

    Directory of Open Access Journals (Sweden)

    Adel Oueslati

    2016-06-01

    Full Text Available The project selection process comes at the first stage of the overall project management life cycle and has a very important impact on organization success. The present paper provides definitions of the basic concepts and tools related to the project selection process. It aims to stress the added value of this process for the success of the entire organization. Mastery of the project selection process is the right way for any organization to ensure that it will do the right project with the right resources at the right time and within the right priorities.

  14. More on random-lattice fermions

    International Nuclear Information System (INIS)

    Kieu, T.D.; Institute for Advanced Study, Princeton, NJ; Markham, J.F.; Paranavitane, C.B.

    1995-01-01

    The lattice fermion determinants, in a given background gauge field, are evaluated for two different kinds of random lattices and compared to those of naive and Wilson fermions in the continuum limit. While fermion doubling is confirmed on one kind of lattice, there is positive evidence that it may be absent for the other, at least for vector interactions in two dimensions. Combined with previous studies, arbitrary randomness by itself is shown not to be a sufficient condition for removing the fermion doublers. 8 refs., 3 figs

  15. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the model's ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
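
    As a toy illustration of the minimal-spread idea behind IPMs (not the paper's formulation), the sketch below fits a linear band [a·x + b − w, a·x + b + w] whose half-width w is the smallest value such that the band contains every observation, posed as a linear program; the data and model form are assumptions:

```python
# Toy interval predictor: minimize the half-width w of a linear band
# subject to |y_i - a*x_i - b| <= w for all observations (a minimax fit).
# Synthetic data; illustrates only the minimal-spread containment idea.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.5 + rng.uniform(-0.1, 0.1, x.size)   # bounded noise

# Decision variables: [a, b, w]; objective: minimize w.
c = [0.0, 0.0, 1.0]
ones = np.ones_like(x)
A_ub = np.vstack([np.column_stack([x, ones, -ones]),      #  a*x + b - w <= y
                  np.column_stack([-x, -ones, -ones])])   # -a*x - b - w <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0.0, None)])
a, b, w = res.x
lower, upper = a * x + b - w, a * x + b + w
```

By construction every observation lies inside [lower, upper], and w is the tightest such spread for this model class.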

  16. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Science.gov (United States)

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
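
    A sketch of a PSO variant in the spirit of AAPSO: the acceleration coefficients are derived from particle fitness ranks instead of being drawn at random. The specific rank-to-coefficient mapping, the inertia weight, and the search domain below are assumptions, not the formulas from the paper:

```python
# Sketch: particle swarm optimization with fitness-adaptive (non-random)
# acceleration coefficients. The mapping from fitness rank to c1/c2 is an
# illustrative assumption, not the published AAPSO update rule.
import numpy as np

def aapso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimize f over [-5, 5]^dim with fitness-adaptive acceleration."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        # Rank in [0, 1]: 0 = best current personal value, 1 = worst.
        rank = pbest_val.argsort().argsort() / max(n_particles - 1, 1)
        c1 = 1.0 + rank[:, None]   # worse particles lean on their own best
        c2 = 2.0 - rank[:, None]   # better particles follow the global best
        v = 0.7 * v + c1 * (pbest - x) + c2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

In the paper's pipeline such an optimizer would tune the SVM hyperparameters; here it is shown on a plain test function.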

  17. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Hasan Abdulameer

    2014-01-01

    Full Text Available Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented.

  18. Muon reconstruction and p p → 2μ4j vector boson fusion process at CMS

    International Nuclear Information System (INIS)

    Bellan, R.

    2009-01-01

    The work presented in this paper has been done within the Compact Muon Solenoid (CMS) Collaboration, one of the four experimental communities at the LHC, and covers the description and performance studies of the muon reconstruction and simulation algorithms. More specifically, the simulation of the drift tube cell, muon reconstruction within the Drift Tube chambers, and track reconstruction and muon identification with the whole CMS tracking system are discussed here. These algorithms have been developed to obtain a high resolution on the Z → μ + μ - observables, because the presence of the Z particle in the final state is one of the important signatures of the p p → μ + μ - jjjj vector boson scattering channel. A study of the p p → μ + μ - jjjj process has been performed to assess the possibility of probing the symmetry-breaking mechanism through vector boson scattering using the CMS detector, with no assumption on the mechanism which restores unitarity. The analysis strategy is shown here. The results in this paper have been extracted from the author's PhD thesis. (See CERN-Thesis-2009-139 and CMS T S 2008/021 (2007).)

  19. A Novel Approach to Asynchronous MVP Data Interpretation Based on Elliptical-Vectors

    Science.gov (United States)

    Kruglyakov, M.; Trofimov, I.; Korotaev, S.; Shneyer, V.; Popova, I.; Orekhova, D.; Scshors, Y.; Zhdanov, M. S.

    2014-12-01

    We suggest a novel approach to asynchronous magnetic-variation profiling (MVP) data interpretation. The standard method in MVP is based on the interpretation of the coefficients of the linear relation between the vertical and horizontal components of the measured magnetic field. From a mathematical point of view this pair of linear coefficients is not a vector, which leads to significant difficulties in asynchronous data interpretation. Our approach allows us to treat such a pair of complex numbers as a special vector called an ellipse-vector (EV). By choosing particular definitions of complex length and direction, the basic relation of MVP can be considered as a dot product. This considerably simplifies the interpretation of asynchronous data. The EV is described by four real numbers: the values of the major and minor semiaxes, the angular direction of the major semiaxis, and the phase. The notation choice is motivated by historical reasons. It is important that the different EV components have different sensitivity with respect to the field sources and the local heterogeneities. Namely, the value of the major semiaxis and the angular direction are mostly determined by the field source and the normal cross-section, while the value of the minor semiaxis and the phase are responsive to local heterogeneities. Since the EV is the general form of a complex vector, the traditional Schmucker vectors can be explicitly expressed through its components. The proposed approach was successfully applied to the interpretation of the results of asynchronous measurements obtained in the Arctic Ocean at the drift stations "North Pole" in 1962-1976.

  20. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys that enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
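
    A minimal sketch of the pixel-exchange stage only: scramble an image with a random permutation and keep the position matrix as a private key that inverts the scrambling. This illustrates the scrambling/key idea, not the full vector-decomposition and DRPE cryptosystem of the paper:

```python
# Sketch: pixel exchange via a seeded random permutation. The permutation
# array plays the role of the "pixel position matrix" private key.
import numpy as np

def pixel_exchange(img, seed=42):
    """Return (scrambled image, position matrix) for a 2-D array."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(img.size)          # the pixel position matrix
    scrambled = img.ravel()[perm].reshape(img.shape)
    return scrambled, perm

def pixel_restore(scrambled, perm):
    """Invert the exchange using the position matrix as the key."""
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
scr, key = pixel_exchange(img)
```

Without the position matrix the scrambled image carries no usable spatial structure, which is why it can serve as an additional private key.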

  1. Generalized vector calculus on convex domain

    Science.gov (United States)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  2. Vector boson production at hadron colliders: transverse-momentum resummation and leptonic decay

    Energy Technology Data Exchange (ETDEWEB)

    Catani, Stefano [INFN - Sezione di Firenze and Dipartimento di Fisica e Astronomia,Università di Firenze, I-50019 Sesto Fiorentino, Florence (Italy); Florian, Daniel de [Departamento de Física, FCEYN, Universidad de Buenos Aires (1428) Pabellón 1 Ciudad Universitaria, Capital Federal (Argentina); International Center for Advanced Studies (ICAS), UNSAM,Campus Miguelete, 25 de Mayo y Francia, 1650 Buenos Aires (Argentina); Ferrera, Giancarlo [Dipartimento di Fisica, Università di Milano and INFN - Sezione di Milano,I-20133 Milan (Italy); Grazzini, Massimiliano [Physik-Institut, Universität Zürich,CH-8057 Zürich (Switzerland)

    2015-12-09

    We consider the transverse-momentum (q_T) distribution of Drell-Yan lepton pairs produced, via W and Z/γ* decay, in hadronic collisions. At small values of q_T, we resum the logarithmically-enhanced perturbative QCD contributions up to next-to-next-to-leading logarithmic accuracy. Resummed results are consistently combined with the known O(α_S^2) fixed-order results at intermediate and large values of q_T. Our calculation includes the leptonic decay of the vector boson with the corresponding spin correlations, the finite-width effects and the full dependence on the final-state lepton(s) kinematics. The computation is encoded in the numerical program DYRes, which allows the user to apply arbitrary kinematical cuts on the final-state leptons and to compute the corresponding distributions in the form of bin histograms. We present a comparison of our results with some of the available LHC data. The inclusion of the leptonic decay in the resummed calculation requires a theoretical discussion on the q_T recoil due to the transverse momentum of the produced vector boson. We present a q_T recoil procedure that is directly applicable to q_T resummed calculations for generic production processes of high-mass systems in hadron collisions.

  3. Measurement of guided mode wave vectors by analysis of the transfer matrix obtained with multi-emitters and multi-receivers in contact

    Energy Technology Data Exchange (ETDEWEB)

    Minonzio, Jean-Gabriel; Talmant, Maryline; Laugier, Pascal, E-mail: jean-gabriel.minonzio@upmc.fr [UPMC Univ Paris 06, UMR 7623, LIP, 15 rue de l' ecole de medecine F-75005, Paris (France)

    2011-01-01

    Different quantitative ultrasound techniques are currently being developed for clinical assessment of human bone status. This paper is dedicated to axial transmission: emitters and receivers are linearly arranged on the same side of the skeletal site, preferentially the forearm. In several clinical studies, the signal velocity of the earliest temporal event has been shown to discriminate osteoporotic patients from healthy subjects. However, a multi-parameter approach might be relevant to improve bone diagnosis, and this could be achieved by accurate measurement of guided-wave wave vectors. For clinical purposes and easy access to the measurement site, the probe length is limited to about 10 mm. The limited number of acquisition scan points over such a short distance reduces the efficiency of conventional signal processing techniques, such as the spatio-temporal Fourier transform. The performance of time-frequency techniques was shown to be moderate in other studies. Thus, optimised signal processing is a critical point for a reliable estimate of guided-mode wave vectors. Toward this end, a technique taking advantage of both multiple emitters and multiple receivers is proposed. The guided-mode wave vectors are obtained using a projection onto the singular-vector basis, determined by singular value decomposition of the transmission matrix between the two arrays at different frequencies. This technique enables us to accurately recover guided-wave wave vectors for moderately large arrays.
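
    A numerical sketch of the singular-vector projection idea: build a synthetic emitter-to-receiver transmission matrix carrying two guided modes, take its SVD, and locate the mode wavenumbers by projecting unit-norm plane-wave test vectors onto the receiver-side signal subspace. The geometry, wavenumbers, and absence of noise are invented for illustration and do not reproduce the authors' measurement protocol:

```python
# Sketch: estimate guided-mode wavenumbers from the SVD of a synthetic
# emitter/receiver transmission matrix. Array geometry and the two "true"
# wavenumbers are illustrative assumptions.
import numpy as np

n_em, n_rc = 5, 8
x_em = np.arange(n_em) * 0.8e-3            # emitter positions (m)
x_rc = 10e-3 + np.arange(n_rc) * 0.8e-3    # receiver positions (m)
k_modes = np.array([800.0, 1500.0])        # true wavenumbers (rad/m)

# Transmission matrix: sum over modes of outer(receiver, emitter) plane waves.
R = sum(np.outer(np.exp(1j * k * x_rc), np.exp(1j * k * x_em))
        for k in k_modes)

U, s, Vh = np.linalg.svd(R)
U_sig = U[:, :2]                           # receiver-side signal subspace

def norm_proj(k):
    """Norm of the projection of a unit plane-wave test vector onto U_sig."""
    e = np.exp(1j * k * x_rc) / np.sqrt(n_rc)
    return np.linalg.norm(U_sig.conj().T @ e)

k_test = np.linspace(100.0, 2000.0, 4000)
spectrum = np.array([norm_proj(k) for k in k_test])
```

The projection norm peaks (at 1, in this noiseless case) exactly at the mode wavenumbers, which is how the singular-vector basis replaces a spatial Fourier transform on a short array.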

  4. Retro-Techno-Economic Analysis: Using (Bio)Process Systems Engineering Tools to Attain Process Target Values

    DEFF Research Database (Denmark)

    Furlan, Felipe F.; Costa, Caliane B B; Secchi, Argimiro R.

    2016-01-01

    Economic analysis, allied to process systems engineering tools, can provide useful insights about process techno-economic feasibility. More interestingly, rather than being used to evaluate specific process conditions, this techno-economic analysis can be turned upside down to achieve target valu...

  5. Biting behaviour of African malaria vectors: 1. where do the main vector species bite on the human body?

    Science.gov (United States)

    Braack, Leo; Hunt, Richard; Koekemoer, Lizette L; Gericke, Anton; Munhenga, Givemore; Haddow, Andrew D; Becker, Piet; Okia, Michael; Kimera, Isaac; Coetzee, Maureen

    2015-02-04

    Malaria control in Africa relies heavily on indoor vector management, primarily indoor residual spraying and insecticide-treated bed nets. Little is known about outdoor biting behaviour or even the dynamics of indoor biting and infection risk of sleeping household occupants. In this paper we explore the preferred biting sites on the human body and some of the ramifications regarding infection risk and exposure management. We undertook whole-night human landing catches of Anopheles arabiensis in South Africa and Anopheles gambiae s.s. and Anopheles funestus in Uganda, on seated persons wearing short-sleeved shirts and short pants, with bare legs, ankles and feet. Catches were kept separate for different body regions and capture sessions. All An. gambiae s.l. and An. funestus group individuals were identified to species level by PCR. Three of the main vectors of malaria in Africa (An. arabiensis, An. gambiae s.s. and An. funestus) all have a preference for feeding close to ground level, which is manifested as a strong propensity (77.3% - 100%) for biting on the lower leg, ankles and feet of people seated either indoors or outdoors, but somewhat randomly along the lower edge of the body in contact with the surface when lying down. If the lower extremities of the legs (below mid-calf level) of seated people are protected and access to this body region is therefore excluded, vector mosquitoes do not move higher up the body to feed at alternate body sites; instead this results in a high (58.5% - 68.8%) reduction in biting intensity by these three species. Protecting the lower limbs of people outdoors at night can achieve a major reduction in biting intensity by malaria vector mosquitoes. Persons sleeping at floor level bear a disproportionate risk of being bitten at night because this is the preferred height for feeding by the primary vector species. Therefore it is critical to protect children sleeping at floor level (bednets; repellent-impregnated blankets or sheets, etc

  6. Versatile generation of optical vector fields and vector beams using a non-interferometric approach.

    Science.gov (United States)

    Tripathi, Santosh; Toussaint, Kimani C

    2012-05-07

    We present a versatile, non-interferometric method for generating vector fields and vector beams which can produce all the states of polarization represented on a higher-order Poincaré sphere. The versatility and non-interferometric nature of this method is expected to enable exploration of various exotic properties of vector fields and vector beams. To illustrate this, we study the propagation properties of some vector fields and find that, in general, propagation alters both their intensity and polarization distribution, and more interestingly, converts some vector fields into vector beams. In the article, we also suggest a modified Jones vector formalism to represent vector fields and vector beams.
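
    As a small textbook illustration of a vector field with spatially varying polarization (not the authors' non-interferometric generation method), a radially polarized beam has local polarization vector (cos φ, sin φ) on a transverse grid; the doughnut-like amplitude profile below is an assumption for plotting convenience:

```python
# Sketch: transverse field of a radially polarized vector beam on a grid.
# The amplitude profile is illustrative; only the polarization pattern
# (local unit vector along the radial direction) is the point here.
import numpy as np

n = 101
y, x = np.mgrid[-1:1:101j, -1:1:101j]          # transverse coordinates
phi = np.arctan2(y, x)                          # azimuthal angle
amplitude = np.hypot(x, y) * np.exp(-(x**2 + y**2))   # doughnut-like profile
Ex = amplitude * np.cos(phi)    # x-component of the Jones field
Ey = amplitude * np.sin(phi)    # y-component of the Jones field
```

On the +x axis the field is purely x-polarized, on the +y axis purely y-polarized: the polarization rotates with azimuth, which is what distinguishes a vector beam from a uniformly polarized one.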

  7. Automated identification of insect vectors of Chagas disease in Brazil and Mexico: the Virtual Vector Lab

    Directory of Open Access Journals (Sweden)

    Rodrigo Gurgel-Gonçalves

    2017-04-01

    Full Text Available Identification of arthropods important in disease transmission is a crucial, yet difficult, task that can demand considerable training and experience. An important case in point is that of the 150+ species of Triatominae, vectors of Trypanosoma cruzi, the causative agent of Chagas disease across the Americas. We present a fully automated system that is able to identify triatomine bugs from Mexico and Brazil with an accuracy consistently above 80%, and with considerable potential for further improvement. The system processes digital photographs from a photo apparatus into landmarks, and uses ratios of measurements among those landmarks, as well as (in a preliminary exploration) two measurements that approximate aspects of coloration, as the basis for classification. This project has thus produced a working prototype that achieves reasonably robust correct identification rates, although many more developments can and will be added, and, more broadly, the project illustrates the value of multidisciplinary collaborations in resolving difficult and complex challenges.

  8. A New Waveform Mosaic Algorithm in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-11-01

    Full Text Available Historical paper seismograms are very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is a very important problem to be resolved. In this paper, a new waveform mosaic algorithm for the vectorization of paper seismograms is presented. We also give the technological process for waveform mosaicking, and a waveform mosaic system used to vectorize analog seismic records has been implemented independently. Using it, we can accomplish waveform mosaicking precisely and speedily when vectorizing analog seismic records.

  9. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly
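
    A quick simulation sketch of such a process: each site of a periodic ring holds a continuous mass, and in every parallel update each site ships a Beta-distributed random fraction of its mass to its right neighbour. The Beta parameters and lattice size are illustrative choices, not values from the paper:

```python
# Sketch: asymmetric random average process on a ring. Each parallel step,
# site i sends a Beta(a, b)-distributed fraction of its mass to site i+1.
# Total mass is conserved exactly; parameters are illustrative.
import numpy as np

def arap_step(m, rng, a=2.0, b=2.0):
    """One parallel update of the asymmetric random average process."""
    r = rng.beta(a, b, size=m.size)      # random transfer fractions in (0, 1)
    out = r * m                          # mass leaving each site
    return m - out + np.roll(out, 1)     # shipped one site to the right

rng = np.random.default_rng(0)
m = np.ones(1000)                        # unit mass on every site
for _ in range(500):
    m = arap_step(m, rng)
```

The beta densities mentioned in the abstract are exactly the family of transfer-fraction distributions for which the stationary state factorizes into a product measure.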

  10. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    Full Text Available In the field of military land vehicles, the random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes of a stationary and Gaussian nature. Non-stationarity of the processes, induced by the variability of the vehicle speed, does not pose a major difficulty because the designer can control the vehicle speed well by characterising the histogram of instantaneous vehicle speed during an operational situation. Beyond this non-stationarity problem, the real difficulty lies in the fact that the random processes are not Gaussian: they are generated mainly by the non-linear behaviour of the undercarriage and the frequent occurrence of shocks caused by the roughness of the terrain. This non-Gaussian nature is expressed particularly by very high kurtosis (flattening) levels, which can affect the design of structures under extreme stresses conventionally obtained by spectral approaches; such approaches are inherent to Gaussian processes and based essentially on the spectral moments of the stress processes. Given these technical considerations, the techniques for characterising the random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time-domain rather than spectral-domain approaches, as described in the body of the text.
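
    A small numerical check of the non-Gaussianity indicator discussed above: the excess kurtosis of a Gaussian signal is near 0, while a signal carrying occasional large shocks shows a much higher value. The synthetic signals below are illustrative, not vehicle data:

```python
# Sketch: excess kurtosis as a shock/non-Gaussianity indicator.
# A Gaussian record has excess kurtosis ~0; rare large shocks inflate it.
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian process)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

rng = np.random.default_rng(0)
gaussian = rng.normal(0.0, 1.0, 100_000)        # stationary Gaussian record
shocks = gaussian.copy()
idx = rng.choice(shocks.size, 200, replace=False)
shocks[idx] += rng.normal(0.0, 15.0, idx.size)  # rare large terrain shocks
```

A spectral-moment description would treat both records through their PSDs alone and miss this difference, which is the abstract's argument for time-domain characterisation.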

  11. Improved Coinfection with Amphotropic Pseudotyped Retroviral Vectors

    Directory of Open Access Journals (Sweden)

    Yuehong Wu

    2009-01-01

    Full Text Available Amphotropic pseudotyped retroviral vectors have typically been used to infect target cells without prior concentration. Although this can yield high rates of infection, higher rates may be needed where highly efficient coinfection of two or more vectors is needed. In this investigation we used amphotropic retroviral vectors produced by the Plat-A cell line and studied coinfection rates using green and red fluorescent proteins (EGFP and dsRed2. Target cells were primary human fibroblasts (PHF and 3T3 cells. Unconcentrated vector preparations produced a coinfection rate of ∼4% (defined as cells that are both red and green as a percentage of all cells infected. Optimized spinoculation, comprising centrifugation at 1200 g for 2 hours at 15∘C, increased the coinfection rate to ∼10%. Concentration by centrifugation at 10,000 g or by flocculation using Polybrene increased the coinfection rate to ∼25%. Combining the two processes, concentration by Polybrene flocculation and optimized spinoculation, increased the coinfection rate to 35% (3T3 or >50% (PHF. Improved coinfection should be valuable in protocols that require high transduction by combinations of two or more retroviral vectors.

  12. A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process

    NARCIS (Netherlands)

    C.M. Hafner (Christian); M.J. McAleer (Michael)

    2014-01-01

    Abstract: One of the most widely used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of

  13. Calculation of excited vector meson electron widths using QCD sum rules

    International Nuclear Information System (INIS)

    Geshkenbein, B.V.

    1984-01-01

    The sum rules are suggested which allow one to calculate the electron widths of excited vector mesons of the PSI, UPSILON, rho meson family assuming the values of their masses to be known. The calculated values of the electron widths agree with experiment

  14. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    Directory of Open Access Journals (Sweden)

    Ibrahim Baz

    2008-04-01

    Full Text Available This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for the automatic vectorization of raster images with straight lines. The algorithm of the model implements line thinning and the simple neighborhood method to perform vectorization. The model allows users to define the criteria that govern the vectorization process. Various raster images can be vectorized with this model, such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was implemented in software and tested on a basic application. Results, verified using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately.

  15. Random practice - one of the factors of the motor learning process

    Directory of Open Access Journals (Sweden)

    Petr Valach

    2012-01-01

    Full Text Available BACKGROUND: An important concept in the acquisition of motor skills is random practice (contextual interference, CI). The explanation of the contextual-interference effect is that memory has to work more intensively, and random practice therefore yields higher retention of motor skills than blocked practice. Only a motor skill that can be actively recalled has practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine the difference in how motor skills in sport gymnastics are acquired and retained using two different teaching methods: blocked and random practice. METHODS: Blocked and random practice of three selected gymnastics tasks were applied in two groups of physical education students (blocked practice: group BP; random practice: group RP) for two months, in one session a week (80 trials in total). At the end of the experiment and 6 months later (retention tests), the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of gymnastics skills were found between the BP and RP groups at the end of the experiment. However, the retention tests showed a significantly higher level of gymnastics skills in the RP group than in the BP group. CONCLUSION: The results confirmed that retention of gymnastics skills taught using the random-practice method was significantly higher than with blocked practice.

  16. Continuous Spatial Process Models for Spatial Extreme Values

    KAUST Repository

    Sang, Huiyan; Gelfand, Alan E.

    2010-01-01

    process model for extreme values that provides mean square continuous realizations, where the behavior of the surface is driven by the spatial dependence which is unexplained under the latent spatio-temporal specification for the GEV parameters

  17. Evidence of significant bias in an elementary random number generator

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Brandl, V.

    1981-03-01

    An elementary pseudo-random number generator for isotropically distributed unit vectors in 3-dimensional space has been tested for bias. This generator uses the IBM-supplied routine RANDU and a transparent rejection technique. The tests show clearly that non-randomness in the pseudo-random numbers generated by the primary IBM generator leads to bias on the order of 1 percent in estimates obtained from the secondary random number generator. FORTRAN listings of 4 variants of the random number generator, called by a simple test programme, and output listings are included for direct reference. (orig.) [de
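The rejection technique named in this record is easy to reproduce. The sketch below is illustrative Python, not the original FORTRAN; it uses Python's Mersenne Twister in place of RANDU (which is precisely why the crude bias check passes here): sample uniformly in the cube [-1, 1]^3, reject points outside the unit ball, and normalize onto the sphere.

```python
import math
import random

def isotropic_unit_vector(rng=random):
    """Draw an isotropic unit vector in 3-D by rejection sampling:
    sample uniformly in the cube [-1, 1]^3, reject points outside the
    unit ball (and near the origin, to avoid division blow-up), then
    normalize onto the sphere."""
    while True:
        x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
        r2 = x * x + y * y + z * z
        if 1e-12 < r2 <= 1.0:
            r = math.sqrt(r2)
            return (x / r, y / r, z / r)

# A crude bias check: component means over many draws should be near 0.
rng = random.Random(12345)
n = 20000
sums = [0.0, 0.0, 0.0]
for _ in range(n):
    v = isotropic_unit_vector(rng)
    for i in range(3):
        sums[i] += v[i]
means = [s / n for s in sums]
```

A biased primary generator (such as RANDU, with its notorious lattice structure) would shift such component means and higher moments at the percent level, which is the effect the record reports.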

  18. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.

  19. Vector boson plus one jet production in POWHEG

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); INFN, Sezione Milano-Bicocca, Milan (Italy); Nason, Paolo [INFN, Sezione Milano-Bicocca, Milan (Italy); Oleari, Carlo [Milano-Bicocca Univ. (Italy); INFN, Sezione Milano-Bicocca, Milan (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; INFN, Sezione Milano-Bicocca, Milan (Italy)

    2010-09-15

    We present an implementation of the next-to-leading order vector boson plus one jet production process in hadronic collisions in the framework of POWHEG, which is a method to implement NLO calculations within a Shower Monte Carlo context. All spin correlations in the vector boson decay products have been taken into account. The process has been implemented in the framework of the POWHEG BOX, an automated computer code for turning an NLO calculation into a shower Monte Carlo program. We present phenomenological results for the case of the Z/γ plus one jet production process, obtained by matching the POWHEG calculation with the shower performed by PYTHIA, for the LHC, and we compare our results with available Tevatron data. (orig.)

  20. Vector boson plus one jet production in POWHEG

    International Nuclear Information System (INIS)

    Alioli, Simone; Nason, Paolo; Oleari, Carlo; Re, Emanuele

    2010-09-01

    We present an implementation of the next-to-leading order vector boson plus one jet production process in hadronic collisions in the framework of POWHEG, which is a method to implement NLO calculations within a Shower Monte Carlo context. All spin correlations in the vector boson decay products have been taken into account. The process has been implemented in the framework of the POWHEG BOX, an automated computer code for turning an NLO calculation into a shower Monte Carlo program. We present phenomenological results for the case of the Z/γ plus one jet production process, obtained by matching the POWHEG calculation with the shower performed by PYTHIA, for the LHC, and we compare our results with available Tevatron data. (orig.)

  1. Predicting Solar Flares Using SDO /HMI Vector Magnetic Data Products and the Random Forest Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang; Deng, Na; Wang, Haimin [Space Weather Research Laboratory, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States); Wang, Jason T. L., E-mail: chang.liu@njit.edu, E-mail: na.deng@njit.edu, E-mail: haimin.wang@njit.edu, E-mail: jason.t.wang@njit.edu [Department of Computer Science, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982 (United States)

    2017-07-10

    Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interest. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of the flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
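The 10-fold cross-validation scheme used to evaluate the classifier can be sketched in a few lines. This is a generic stdlib illustration of the fold construction only (the function name and seed are ours, not from the paper; the actual study trains a random forest on each train/test split):

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    shuffle the sample indices once, split them into k nearly equal
    folds, and let each fold serve as the test set exactly once."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in (folds[:i] + folds[i + 1:]) for j in f]
        yield train, test
```

Each of the k splits produces one set of performance metrics; the reported score is their average, which reduces the variance relative to a single held-out split.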

  2. Isoscalar compression modes in relativistic random phase approximation

    International Nuclear Information System (INIS)

    Ma, Zhong-yu; Van Giai, Nguyen.; Wandelt, A.; Vretenar, D.; Ring, P.

    2001-01-01

    Monopole and dipole compression modes in nuclei are analyzed in the framework of a fully consistent relativistic random phase approximation (RRPA), based on effective mean-field Lagrangians with nonlinear meson self-interaction terms. The large effect of Dirac sea states on isoscalar strength distribution functions is illustrated for the monopole mode. The main contribution of Fermi and Dirac sea pair states arises through the exchange of the scalar meson. The effect of vector meson exchange is much smaller. For the monopole mode, RRPA results are compared with constrained relativistic mean-field calculations. A comparison between experimental and calculated energies of isoscalar giant monopole resonances points to a value of 250-270 MeV for the nuclear matter incompressibility. A large discrepancy remains between theoretical predictions and experimental data for the dipole compression mode

  3. Bioelectrical impedance vector distribution in the first year of life.

    Science.gov (United States)

    Savino, Francesco; Grasso, Giulia; Cresi, Francesco; Oggero, Roberto; Silvestro, Leandra

    2003-06-01

    We assessed the bioelectrical impedance vector distribution in a sample of healthy infants in the first year of life, which is not available in the literature. The study was conducted as a cross-sectional study in 153 healthy Caucasian infants (90 male and 63 female) younger than 1 y, born at full term, adequate for gestational age, free from chronic diseases or growth problems, and not feverish. Z scores for weight, length, cranial circumference, and body mass index for the study population were within the range of +/-1.5 standard deviations according to the Euro-Growth Study references. Concurrent anthropometric (weight, length, and cranial circumference), body mass index, and bioelectrical impedance (resistance and reactance) measurements were made by the same operator. Whole-body (hand to foot) tetrapolar measurements were performed with a single-frequency (50 kHz), phase-sensitive impedance analyzer. The study population was subdivided into three age classes for statistical analysis: 0 to 3.99 mo, 4 to 7.99 mo, and 8 to 11.99 mo. Using the bivariate normal distribution of the resistance and reactance components standardized by the infant's length, the bivariate 95% confidence limits for the mean impedance vector, separated by sex and age group, were calculated and plotted. Further, the bivariate 95%, 75%, and 50% tolerance intervals for individual vector measurements in the first year of life were plotted. Resistance and reactance values often fluctuated during the first year of life, particularly as raw measurements (without normalization by the subject's length). However, the 95% confidence ellipses of the mean vectors from the three age groups overlapped each other, as did the confidence ellipses by sex for each age class, indicating no significant vector migration during the first year of life. We obtained an estimate of the mean impedance vector in a sample of healthy infants in the first year of life and calculated the bivariate values for an individual vector (95%, 75%, and 50% tolerance intervals).
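The bivariate 95% confidence ellipse for a mean impedance vector can be computed from the sample covariance of the length-normalized (resistance, reactance) pairs. The numpy sketch below is a minimal illustration under a Gaussian approximation (it ignores the small-sample Hotelling correction, and the numbers in it are synthetic, not the study's data):

```python
import numpy as np

def mean_confidence_ellipse(data, alpha=0.05):
    """Bivariate (1 - alpha) confidence ellipse for the mean vector of
    (resistance/length, reactance/length) pairs: eigen-decompose the
    sample covariance and scale by the chi-square quantile with 2
    degrees of freedom, which is simply -2*ln(alpha)."""
    n = data.shape[0]
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues ascending
    chi2_q = -2.0 * np.log(alpha)            # = 5.99 for alpha = 0.05
    semi_axes = np.sqrt(evals * chi2_q / n)  # /n: ellipse for the MEAN
    return mean, semi_axes, evecs
```

Dropping the `/n` factor gives the tolerance ellipse for individual vector measurements instead, which is how the 95%, 75%, and 50% tolerance intervals in the record differ from the confidence limits for the mean.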

  4. Fluidic Vectoring of a Planar Incompressible Jet Flow

    Science.gov (United States)

    Mendez, Miguel Alfonso; Scelzo, Maria Teresa; Enache, Adriana; Buchlin, Jean-Marie

    2018-06-01

    This paper presents an experimental, numerical, and theoretical analysis of the performance of a fluidic vectoring device for controlling the direction of a turbulent, two-dimensional, low-Mach-number (incompressible) jet flow. The investigated design is co-flow secondary injection with a Coanda surface, which allows vectoring angles up to 25° with no need for moving mechanical parts. A simple empirical model of the vectoring process is presented and validated against experimental and numerical data. The experiments consist of flow visualization and image processing for the automatic detection of the jet centerline; the numerical simulations are carried out by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations closed with the k-ω SST turbulence model, using the PisoFoam solver from OpenFOAM. The experimental validation on three different geometrical configurations has shown that the model is capable of providing a fast and reliable evaluation of the device performance as a function of the operating conditions.

  5. Curvature of random walks and random polygons in confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2013-01-01

    The purpose of this paper is to study the curvature of equilateral random walks and polygons that are confined in a sphere. Curvature is one of several basic geometric properties that can be used to describe random walks and polygons. We show that confinement affects curvature quite strongly: in the limit case where the confinement diameter equals the edge length, the expected curvature doubles from its unconfined value of π/2 to π. To study curvature, a simple model of an equilateral random walk in spherical confinement in dimensions 2 and 3 is introduced. For this simple model we derive explicit integral expressions for the expected value of the total curvature in both dimensions. These expressions are functions that depend only on the radius R of the confinement sphere. We then show that the values obtained by numerical integration of these expressions agree with numerical average curvature estimates obtained from simulations of random walks. Finally, we compare the confinement effect on the curvature of random walks with that on random polygons. (paper)
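The unconfined baseline value of π/2 quoted above is easy to verify by simulation: the curvature at each vertex of an equilateral walk is the exterior (turning) angle between consecutive unit steps, and for isotropic independent steps in 3-D its expectation is exactly π/2. A minimal stdlib sketch (our own illustration, not the authors' code):

```python
import math
import random

def random_unit_step(rng):
    """Isotropic 3-D unit step via rejection sampling from the cube."""
    while True:
        x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
        r2 = x * x + y * y + z * z
        if 1e-12 < r2 <= 1.0:
            r = math.sqrt(r2)
            return (x / r, y / r, z / r)

def mean_turning_angle(n_steps, seed=0):
    """Average exterior angle between consecutive steps of an
    UNCONFINED equilateral random walk; the expected value is pi/2."""
    rng = random.Random(seed)
    prev = random_unit_step(rng)
    total = 0.0
    for _ in range(n_steps):
        cur = random_unit_step(rng)
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(prev, cur))))
        total += math.acos(dot)
        prev = cur
    return total / n_steps
```

Adding a spherical confinement constraint (rejecting steps that leave the sphere) biases consecutive steps toward back-tracking, which is the mechanism by which the expected turning angle grows toward π in the tightest confinement.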

  6. From micro-correlations to macro-correlations

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2016-01-01

    Random vectors with a symmetric correlation structure share a common value of pair-wise correlation between their different components. The symmetric correlation structure appears in a multitude of settings, e.g. mixture models. In a mixture model the components of the random vector are drawn independently from a general probability distribution that is determined by an underlying parameter, and the parameter itself is randomized. In this paper we study the overall correlation of high-dimensional random vectors with a symmetric correlation structure. Considering such a random vector, and terming its pair-wise correlation "micro-correlation", we use an asymptotic analysis to derive the random vector's "macro-correlation": a score that takes values in the unit interval, and that quantifies the random vector's overall correlation. The method of obtaining macro-correlations from micro-correlations is then applied to a diverse collection of frameworks that demonstrate the method's wide applicability.
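The mixture-model construction of a symmetric correlation structure described above can be made concrete with a Gaussian example (our own illustration, not the paper's setup): every component shares one latent Gaussian parameter plus independent noise, so each pair of components has the same micro-correlation rho = s^2 / (s^2 + n^2).

```python
import math
import random

def sample_mixture_vector(d, sigma_shared=1.0, sigma_noise=1.0, rng=random):
    """Random vector with a symmetric correlation structure: all d
    components share one latent Gaussian parameter m, so every pair
    has correlation sigma_shared^2 / (sigma_shared^2 + sigma_noise^2)."""
    m = rng.gauss(0.0, sigma_shared)
    return [m + rng.gauss(0.0, sigma_noise) for _ in range(d)]

def pearson(xs, ys):
    """Plain sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(42)
draws = [sample_mixture_vector(5, rng=rng) for _ in range(20000)]
# theoretical micro-correlation: 1 / (1 + 1) = 0.5
rho_hat = pearson([v[0] for v in draws], [v[1] for v in draws])
```

In this setting the micro-correlation is the same for every pair regardless of the dimension d, which is exactly the symmetric structure whose high-dimensional macro-correlation the paper analyzes.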

  7. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

    1.1 Objective of the Study. Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
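The basic estimation step behind unrestricted VAR modeling is equation-by-equation least squares: regress the vector of observations at time t on its own lags. A minimal numpy sketch for a bivariate VAR(1) (the coefficient matrix, sample size, and noise level are illustrative choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])   # stable VAR(1) coefficient matrix
T = 5000

# Simulate x_t = A x_{t-1} + eps_t with standard Gaussian innovations.
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(size=2)

# OLS: regress x_t on x_{t-1}. lstsq solves X @ B = Y in row form,
# i.e. x_t' = x_{t-1}' A', so the estimate of A is B transposed.
Y, X = x[1:], x[:-1]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T
```

With T large the OLS estimate recovers the true coefficient matrix up to O(1/sqrt(T)) sampling error; impulse responses and forecast error variance decompositions are then functions of `A_hat` and the residual covariance.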

  8. Random scalar fields and hyperuniformity

    Science.gov (United States)

    Ma, Zheng; Torquato, Salvatore

    2017-06-01

    Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.
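The construction of hyperuniform Gaussian scalar fields mentioned above can be illustrated by spectral synthesis: filter white noise with an amplitude that vanishes as the wavenumber k goes to zero, so the spectral density is suppressed at long wavelengths. A minimal numpy sketch (the particular filter shape is our illustrative choice, not the one used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
white = rng.normal(size=(n, n))
W = np.fft.fft2(white)

# Radial wavenumber grid in cycles per sample.
k = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
kk = np.sqrt(kx ** 2 + ky ** 2)

# Amplitude filter that vanishes linearly as k -> 0, so the spectral
# density ~ k^2 at small k: the hallmark of a hyperuniform field.
amp = kk / (1.0 + kk ** 2)
F = np.fft.ifft2(W * amp).real
```

Re-computing the power spectrum of `F` confirms that power near k = 0 is strongly suppressed relative to intermediate wavenumbers, which is the defining signature of hyperuniformity for a scalar field.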

  9. Vector independent transmission of the vector-borne bluetongue virus.

    Science.gov (United States)

    van der Sluijs, Mirjam Tineke Willemijn; de Smit, Abraham J; Moormann, Rob J M

    2016-01-01

    Bluetongue is an economically important disease of ruminants. The causative agent, Bluetongue virus (BTV), is mainly transmitted by insect vectors. This review focuses on vector-free BTV transmission, and its epizootic and economic consequences. Vector-free transmission can either be vertical, from dam to fetus, or horizontal via direct contact. For several BTV serotypes, vertical (transplacental) transmission has been described, resulting in severe congenital malformations. Transplacental transmission had been mainly associated with live vaccine strains. Yet, the European BTV-8 strain demonstrated a high incidence of transplacental transmission in natural circumstances. The relevance of transplacental transmission for the epizootiology is considered limited, especially in enzootic areas. However, transplacental transmission can have a substantial economic impact due to the loss of progeny. Inactivated vaccines have been demonstrated to prevent transplacental transmission. Vector-free horizontal transmission has also been demonstrated. Since direct horizontal transmission requires close contact between animals, it is considered relevant only for within-farm spreading of BTV. The genetic determinants which enable vector-free transmission are present in virus strains circulating in the field. More research into the genetic changes which enable vector-free transmission is essential to better evaluate the risks associated with outbreaks of new BTV serotypes and to design more appropriate control measures.

  10. Integrated vector management for malaria control

    Directory of Open Access Journals (Sweden)

    Impoinvil Daniel E

    2008-12-01

    Full Text Available Abstract Integrated vector management (IVM) is defined as "a rational decision-making process for the optimal use of resources for vector control" and includes five key elements: (1) evidence-based decision-making, (2) integrated approaches, (3) collaboration within the health sector and with other sectors, (4) advocacy, social mobilization, and legislation, and (5) capacity-building. In 2004, the WHO adopted IVM globally for the control of all vector-borne diseases. Important recent progress has been made in developing and promoting IVM for national malaria control programmes in Africa at a time when successful malaria control programmes are scaling up insecticide-treated net (ITN) and/or indoor residual spraying (IRS) coverage. While interventions using only ITNs and/or IRS successfully reduce transmission intensity and the burden of malaria in many situations, it is not clear if these interventions alone will achieve the critical low levels that result in malaria elimination. Despite the successful employment of comprehensive integrated malaria control programmes, further strengthening of vector control components through IVM is relevant, especially during the "end-game", where control is successful and further efforts are required to go from low-transmission situations to sustained local and country-wide malaria elimination. To meet this need and to ensure sustainability of control efforts, malaria control programmes should strengthen their capacity to use data for decision-making with respect to evaluation of current vector control programmes, employment of additional vector control tools in conjunction with ITN/IRS tactics, case-detection and treatment strategies, and determine how much and what types of vector control and interdisciplinary input are required to achieve malaria elimination. Similarly, on a global scale, there is a need for continued research to identify and evaluate new tools for vector control that can be integrated with

  11. Modulating ectopic gene expression levels by using retroviral vectors equipped with synthetic promoters.

    Science.gov (United States)

    Ferreira, Joshua P; Peacock, Ryan W S; Lawhorn, Ingrid E B; Wang, Clifford L

    2011-12-01

    The human cytomegalovirus and elongation factor 1α promoters are constitutive promoters commonly employed by mammalian expression vectors. These promoters generally produce high levels of expression in many types of cells and tissues. To generate a library of synthetic promoters capable of generating a range of low, intermediate, and high expression levels, the TATA and CAAT box elements of these promoters were mutated. Other promoter variants were also generated by random mutagenesis. Evaluation using plasmid vectors integrated at a single site in the genome revealed that these various synthetic promoters were capable of expression levels spanning a 40-fold range. Retroviral vectors were equipped with the synthetic promoters and evaluated for their ability to reproduce the graded expression demonstrated by plasmid integration. A vector with a self-inactivating long terminal repeat could neither reproduce the full range of expression levels nor produce stable expression. Using a second vector design, the different synthetic promoters enabled stable expression over a broad range of expression levels in different cell lines. The online version of this article (doi:10.1007/s11693-011-9089-0) contains supplementary material, which is available to authorized users.

  12. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving the direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
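The thin plate spline (TPS) model used above for reconstruction and interpolation amounts to solving a small linear system for radial-basis weights plus an affine part, with kernel U(r) = r^2 log r. A minimal scalar-valued numpy sketch (our own illustration, not the authors' code; vector fields would be handled component-wise):

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log(r), with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(points, values):
    """Solve the standard TPS system [[K, P], [P', 0]] [w; a] = [v; 0]
    for the kernel weights w and the affine coefficients a."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(points, w, a, query):
    """Evaluate the fitted spline at arbitrary query locations."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return tps_kernel(d) @ w + a[0] + query @ a[1:]
```

The exact-interpolation property (the spline passes through every data point) is what lets the method replace removed noise-corrupted vectors while leaving trusted neighbors untouched; evaluating `tps_eval` on a finer grid gives the resolution increase.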

  13. Asymptotic theory of weakly dependent random processes

    CERN Document Server

    Rio, Emmanuel

    2017-01-01

    Presenting tools to aid understanding of asymptotic theory and weakly dependent processes, this book is devoted to inequalities and limit theorems for sequences of random variables that are strongly mixing in the sense of Rosenblatt, or absolutely regular. The first chapter introduces covariance inequalities under strong mixing or absolute regularity. These covariance inequalities are applied in Chapters 2, 3 and 4 to moment inequalities, rates of convergence in the strong law, and central limit theorems. Chapter 5 concerns coupling. In Chapter 6 new deviation inequalities and new moment inequalities for partial sums via the coupling lemmas of Chapter 5 are derived and applied to the bounded law of the iterated logarithm. Chapters 7 and 8 deal with the theory of empirical processes under weak dependence. Lastly, Chapter 9 describes links between ergodicity, return times and rates of mixing in the case of irreducible Markov chains. Each chapter ends with a set of exercises. The book is an updated and extended ...

  14. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard-structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
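The alternating, sparsity-penalized iteration behind SSVD can be sketched for the rank-1 case: each singular-vector update is a matrix-vector product followed by soft-thresholding (the proximal step of an l1 penalty) and renormalization. An illustrative numpy sketch with fixed penalty levels (the paper also discusses data-driven penalty selection, omitted here):

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator: shrink toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rank1_ssvd(X, lam_u=0.5, lam_v=0.5, n_iter=100):
    """Rank-1 sparse SVD by alternating soft-thresholded power
    iterations, initialized at the ordinary leading singular pair."""
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    u, v = u[:, 0], vt[0]
    for _ in range(n_iter):
        v = soft(X.T @ u, lam_v)
        nv = np.linalg.norm(v)
        if nv == 0:
            break
        v /= nv
        u = soft(X @ v, lam_u)
        nu = np.linalg.norm(u)
        if nu == 0:
            break
        u /= nu
    d = u @ X @ v
    return u, d, v
```

On a noisy rank-1 checkerboard matrix, the thresholding zeroes the entries of u and v outside the true row and column supports, which is exactly the bicluster-identification behavior SSVD is designed for.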

  15. Principal-vector-directed fringe-tracking technique.

    Science.gov (United States)

    Zhang, Zhihui; Guo, Hongwei

    2014-11-01

    Fringe tracking is one of the most straightforward techniques for analyzing a single fringe pattern. This work presents a principal-vector-directed fringe-tracking technique. It uses Gaussian derivatives for estimating fringe gradients and uses hysteresis thresholding for segmenting singular points, thus improving the principal component analysis method. Using it allows us to estimate the principal vectors of fringes from a pattern with high noise. The fringe-tracking procedure is directed by these principal vectors, so that erroneous results induced by noise and other error-inducing factors are avoided. At the same time, the singular point regions of the fringe pattern are identified automatically. Using them allows us to determine paths through which the "seed" point for each fringe skeleton is easy to find, thus alleviating the computational burden in processing the fringe pattern. The results of a numerical simulation and experiment demonstrate this method to be valid.

  16. On the Reduction of Vector and Axial-Vector Fields in a Meson Effective Action at O(p4)

    International Nuclear Information System (INIS)

    Bel'kov, A.A.; Lanev, A.V.; Schaale, A.

    1994-01-01

    Starting from an effective NJL-type quark interaction, we have derived an effective meson action for the pseudoscalar sector. The vector and axial-vector degrees of freedom have been integrated out by applying the static equations of motion. As a result, we have found a (reduced) pseudoscalar meson Lagrangian of the Gasser-Leutwyler type with modified structure coefficients L_i. This method has also been used to construct the reduced weak and electromagnetic-weak currents. The application of the reduced Lagrangian and currents to physical processes has been considered. 36 refs., 1 fig., 1 tab

  17. Data requirements for valuing externalities: The role of existing permitting processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, A.D.; Baechler, M.C.; Callaway, J.M.

    1990-08-01

    While the assessment of externalities, or residual impacts, will place new demands on regulators, utilities, and developers, existing processes already require certain data and information that may fulfill some of the data needs for externality valuation. This paper examines existing siting, permitting, and other processes and highlights similarities and differences between their data requirements and the data required to value environmental externalities. It specifically considers existing requirements for siting new electricity resources in Oregon and compares them with the information and data needed to value externalities for such resources. The paper also offers several observations on how states can take advantage of data acquired through processes already in place as they move into an era when externalities are considered in utility decision-making, along with further observations on the similarities and differences between the data requirements of existing processes and those for valuing externalities. It briefly discusses the special case of cumulative impacts and concludes with recommendations on steps to take in future efforts to value externalities. 35 refs., 2 tabs.

  18. Random-walk simulation of diffusion-controlled processes among static traps

    International Nuclear Information System (INIS)

    Lee, S.B.; Kim, I.C.; Miller, C.A.; Torquato, S. (Department of Mechanical and Aerospace Engineering and Department of Chemical Engineering, North Carolina State University, Raleigh, North Carolina 27695-7910)

    1989-01-01

    We present computer-simulation results for the trapping rate (rate constant) k associated with diffusion-controlled reactions among identical, static spherical traps distributed with an arbitrary degree of impenetrability, using a Pearson random-walk algorithm. We specifically consider the penetrable-concentric-shell model, in which each trap of diameter σ is composed of a mutually impenetrable core of diameter λσ encompassed by a perfectly penetrable shell of thickness (1-λ)σ/2: λ=0 corresponds to randomly centered or "fully penetrable" traps, and λ=1 to totally impenetrable traps. Trapping rates are calculated accurately from the random-walk algorithm at the extreme limits of λ (λ=0 and 1) and at an intermediate value (λ=0.8) for a wide range of trap densities. Our simulation procedure has a relatively fast execution time. It is found that k increases with increasing impenetrability at fixed trap concentration. These "exact" data are compared with previous theories for the trapping rate. Although a good approximate theory exists for the fully-penetrable-trap case, no currently available theory provides good estimates of the trapping rate for a moderate to high density of traps with nonzero hard cores (λ>0).
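For the fully penetrable (λ=0) limit, a Pearson random walk among randomly centered spherical traps can be sketched as follows: a walker takes fixed-length steps in uniformly random directions until it enters a trap, and the mean survival time is inversely related to the trapping rate k. Box size, step length, and trap counts below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def pearson_trapping(n_walkers=100, n_traps=30, box=5.0, sigma=1.0,
                     step=0.25, max_steps=5000, seed=0):
    """Mean walker lifetime among randomly centered (lambda=0) traps.

    Pearson random walk: fixed step length, uniformly random 3D
    direction each step, periodic box, absorption on entering a trap.
    A shorter mean lifetime corresponds to a larger trapping rate k.
    """
    rng = np.random.default_rng(seed)
    traps = rng.uniform(0.0, box, size=(n_traps, 3))
    radius = sigma / 2.0

    def in_trap(p):
        # Nearest-image distance to every trap center (periodic box).
        d = np.abs(traps - p)
        d = np.minimum(d, box - d)
        return np.any(np.sum(d * d, axis=1) < radius * radius)

    lifetimes = []
    for _ in range(n_walkers):
        # Start each walker at a random point outside all traps.
        p = rng.uniform(0.0, box, size=3)
        while in_trap(p):
            p = rng.uniform(0.0, box, size=3)
        for t in range(1, max_steps + 1):
            # Uniform random direction on the unit sphere.
            u = rng.standard_normal(3)
            p = (p + step * u / np.linalg.norm(u)) % box
            if in_trap(p):
                lifetimes.append(t)
                break
    return float(np.mean(lifetimes))
```

Raising the trap density shortens the mean survival time, i.e. increases the trapping rate, consistent with the density dependence studied in the abstract. Extending the sketch to λ>0 would additionally require rejecting trap configurations whose hard cores overlap.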

  19. Photoproduction of vector mesons off nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Kossov, M.

    1994-04-01

    Vector mesons play an important role in photonuclear reactions because they carry the same quantum numbers as the incident photon. It has recently been suggested by G.E. Brown and M. Rho that the mass of vector mesons could decrease with increasing baryon density. This phenomenon would provide a physical observable for chiral symmetry (ξ^S) restoration at high baryon density, an essential non-perturbative phenomenon associated with the structure of quantum chromodynamics (QCD). According to the constituent quark model, the difference between the mass of the valence quark m_v and the mass of the current quark m_c is expected to be proportional to the mean vacuum value of the quark condensate: m_v - m_c ∝ ⟨ψ̄ψ⟩_v. The mass difference appears because of chiral symmetry breaking (ξ^SB). QCD sum-rule calculations show that this difference is about 300 MeV for all quarks. If the mean value of the condensate at the hadron density found in nuclei differs from its vacuum value, then the constituent quark mass should be renormalized as m_v^n = m_c + (⟨ψ̄ψ⟩_n/⟨ψ̄ψ⟩_v)·300 MeV, where the index n refers to nuclear matter and v to vacuum. The same conclusion was reached in a nuclear-matter model based on quark degrees of freedom. Using the symmetry properties of QCD in an effective Lagrangian theory, Brown and Rho have found a scaling law for the vector meson masses at finite baryon density: M_N^n/M_N^v = M_V^n/M_V^v = f_π^n/f_π^v, where f_π is the π → μν decay constant, which plays the role of an order parameter for the chiral symmetry restoration. At nuclear density the value of f_π was found to be 15-20% smaller than in vacuum. In contrast to the constituent quark model, it was found that M^n/M = (⟨ψ̄ψ⟩_n/⟨ψ̄ψ⟩_v)^(1/3).
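The in-medium mass formulas quoted above reduce to simple arithmetic. The sketch below evaluates the constituent-mass renormalization and a Brown-Rho-type mass drop for illustrative inputs; the 5 MeV current-quark mass, 0.7 condensate ratio, 770 MeV ρ mass, and 15% f_π reduction are assumed example values, not results from the abstract:

```python
def constituent_mass(m_current_mev, condensate_ratio, split_mev=300.0):
    """Constituent quark mass in medium, per the renormalization
    m_v^n = m_c + (<psi-bar psi>_n / <psi-bar psi>_v) * 300 MeV
    quoted above. Inputs are illustrative, not from the abstract."""
    return m_current_mev + condensate_ratio * split_mev

# In vacuum the condensate ratio is 1; a reduced in-medium
# condensate lowers the constituent mass accordingly.
m_vac = constituent_mass(5.0, 1.0)   # assumed 5 MeV current mass, vacuum
m_med = constituent_mass(5.0, 0.7)   # assumed 30% condensate reduction

# Brown-Rho-type scaling: a 15% drop of f_pi at nuclear density
# implies the same fractional drop of the vector meson mass.
m_rho_vac = 770.0                    # rho meson vacuum mass in MeV
m_rho_med = m_rho_vac * (1.0 - 0.15)
```

The point of the comparison in the abstract is that the constituent quark model instead scales the mass with the cube root of the condensate ratio, so the two pictures predict different in-medium mass shifts for the same condensate reduction.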

  20. VectorBase

    Data.gov (United States)

    U.S. Department of Health & Human Services — VectorBase is a Bioinformatics Resource Center for invertebrate vectors. It is one of four Bioinformatics Resource Centers funded by NIAID to provide web-based...